* [PATCH v2 0/6] omapdrm: struct_mutex removal
@ 2018-05-25 16:39 Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 1/6] drm/omap: gem: Rename GEM function with omap_gem_* prefix Laurent Pinchart
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

Hello,

This patch series removes the usage of struct_mutex from the omapdrm driver in
order to switch to gem_free_object_unlocked(). The series is inspired by
Daniel Vetter's recent gem_free_object_unlocked() patches (starting with
"[PATCH 1/5] staging/vboxvideo: Use gem_free_object_unlocked") and
includes patches "[PATCH 4/5] drm/omapdrm: Fix mm_list locking" and
"[PATCH] drm/omapdrm: Switch to gem_free_object_unlocked" (the latter
modified due to the rebase).

When reviewing Daniel's patches I noticed a potential issue in lock
handling, which prompted me to remove all usage of struct_mutex from the
omapdrm driver. Instead of replacing it with a device-wide lock, I have
decided to create per-GEM-object locks, as there is no need, as far as I
can see, to serialize operations across separate GEM objects.
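
In short, the locking pattern introduced by patch 4/6 below boils down to
the following (a condensed sketch of that patch, not the exact driver
code):

struct omap_gem_object {
        struct drm_gem_object base;

        /** protects dma_addr_cnt, block, pages, dma_addrs and vaddr */
        struct mutex lock;
        /* ... */
};

/* omap_gem_new() / omap_gem_free_object() */
mutex_init(&omap_obj->lock);
mutex_destroy(&omap_obj->lock);

/* allocation and pinning paths, e.g. omap_gem_pin() */
mutex_lock(&omap_obj->lock);
ret = omap_gem_attach_pages(obj);	/* lockdep-asserts the lock is held */
/* ... allocate, pin through the tiler, map ... */
mutex_unlock(&omap_obj->lock);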

The series starts with a bit of cleanup in the form of renaming (1/6) and
refactoring (2/6), followed by removal of struct_mutex (3/6 and 4/6). It then
ends with Daniel's patches that switch to gem_free_object_unlocked().

The patches are based on top of the latest drm-misc. They have been tested on
a Pandaboard.

Daniel Vetter (2):
  drm/omap: gem: Fix mm_list locking
  drm/omap: gem: Switch to gem_free_object_unlocked()

Laurent Pinchart (4):
  drm/omap: gem: Rename GEM function with omap_gem_* prefix
  drm/omap: gem: Merge __omap_gem_get_pages() and
    omap_gem_attach_pages()
  drm/omap: gem: Don't take struct_mutex to get GEM object mmap offset
  drm/omap: gem: Replace struct_mutex usage with omap_obj private lock

 drivers/gpu/drm/omapdrm/omap_debugfs.c |   9 +-
 drivers/gpu/drm/omapdrm/omap_drv.c     |   4 +-
 drivers/gpu/drm/omapdrm/omap_drv.h     |   2 +-
 drivers/gpu/drm/omapdrm/omap_fbdev.c   |   8 +-
 drivers/gpu/drm/omapdrm/omap_gem.c     | 236 +++++++++++++++++----------------
 5 files changed, 131 insertions(+), 128 deletions(-)

-- 
Regards,

Laurent Pinchart


* [PATCH v2 1/6] drm/omap: gem: Rename GEM function with omap_gem_* prefix
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 2/6] drm/omap: gem: Merge __omap_gem_get_pages() and omap_gem_attach_pages() Laurent Pinchart
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

get_pages() is too generic a name for a driver-local function and is
easily mistaken for a core kernel MM function. Rename it to
__omap_gem_get_pages().

Rename the is_contiguous(), is_cached_coherent(), evict(), evict_entry(),
fault_1d() and fault_2d() functions for the same reason.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/gpu/drm/omapdrm/omap_gem.c | 48 +++++++++++++++++++++-----------------
 1 file changed, 26 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 0faf042b82e1..6cfcf60cffe3 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -156,7 +156,7 @@ static u64 mmap_offset(struct drm_gem_object *obj)
 	return drm_vma_node_offset_addr(&obj->vma_node);
 }
 
-static bool is_contiguous(struct omap_gem_object *omap_obj)
+static bool omap_gem_is_contiguous(struct omap_gem_object *omap_obj)
 {
 	if (omap_obj->flags & OMAP_BO_MEM_DMA_API)
 		return true;
@@ -171,7 +171,7 @@ static bool is_contiguous(struct omap_gem_object *omap_obj)
  * Eviction
  */
 
-static void evict_entry(struct drm_gem_object *obj,
+static void omap_gem_evict_entry(struct drm_gem_object *obj,
 		enum tiler_fmt fmt, struct omap_drm_usergart_entry *entry)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
@@ -199,7 +199,7 @@ static void evict_entry(struct drm_gem_object *obj,
 }
 
 /* Evict a buffer from usergart, if it is mapped there */
-static void evict(struct drm_gem_object *obj)
+static void omap_gem_evict(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	struct omap_drm_private *priv = obj->dev->dev_private;
@@ -213,7 +213,7 @@ static void evict(struct drm_gem_object *obj)
 				&priv->usergart[fmt].entry[i];
 
 			if (entry->obj == obj)
-				evict_entry(obj, fmt, entry);
+				omap_gem_evict_entry(obj, fmt, entry);
 		}
 	}
 }
@@ -291,7 +291,8 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 /* acquire pages when needed (for example, for DMA where physically
  * contiguous buffer is not required
  */
-static int get_pages(struct drm_gem_object *obj, struct page ***pages)
+static int __omap_gem_get_pages(struct drm_gem_object *obj,
+				struct page ***pages)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret = 0;
@@ -371,7 +372,7 @@ size_t omap_gem_mmap_size(struct drm_gem_object *obj)
  */
 
 /* Normal handling for the case of faulting in non-tiled buffers */
-static int fault_1d(struct drm_gem_object *obj,
+static int omap_gem_fault_1d(struct drm_gem_object *obj,
 		struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
@@ -385,7 +386,7 @@ static int fault_1d(struct drm_gem_object *obj,
 		omap_gem_cpu_sync_page(obj, pgoff);
 		pfn = page_to_pfn(omap_obj->pages[pgoff]);
 	} else {
-		BUG_ON(!is_contiguous(omap_obj));
+		BUG_ON(!omap_gem_is_contiguous(omap_obj));
 		pfn = (omap_obj->dma_addr >> PAGE_SHIFT) + pgoff;
 	}
 
@@ -396,7 +397,7 @@ static int fault_1d(struct drm_gem_object *obj,
 }
 
 /* Special handling for the case of faulting in 2d tiled buffers */
-static int fault_2d(struct drm_gem_object *obj,
+static int omap_gem_fault_2d(struct drm_gem_object *obj,
 		struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
@@ -443,7 +444,7 @@ static int fault_2d(struct drm_gem_object *obj,
 
 	/* evict previous buffer using this usergart entry, if any: */
 	if (entry->obj)
-		evict_entry(entry->obj, fmt, entry);
+		omap_gem_evict_entry(entry->obj, fmt, entry);
 
 	entry->obj = obj;
 	entry->obj_pgoff = base_pgoff;
@@ -524,7 +525,7 @@ int omap_gem_fault(struct vm_fault *vmf)
 	mutex_lock(&dev->struct_mutex);
 
 	/* if a shmem backed object, make sure we have pages attached now */
-	ret = get_pages(obj, &pages);
+	ret = __omap_gem_get_pages(obj, &pages);
 	if (ret)
 		goto fail;
 
@@ -535,9 +536,9 @@ int omap_gem_fault(struct vm_fault *vmf)
 	 */
 
 	if (omap_obj->flags & OMAP_BO_TILED)
-		ret = fault_2d(obj, vma, vmf);
+		ret = omap_gem_fault_2d(obj, vma, vmf);
 	else
-		ret = fault_1d(obj, vma, vmf);
+		ret = omap_gem_fault_1d(obj, vma, vmf);
 
 
 fail:
@@ -694,7 +695,8 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 	/* if we aren't mapped yet, we don't need to do anything */
 	if (omap_obj->block) {
 		struct page **pages;
-		ret = get_pages(obj, &pages);
+
+		ret = __omap_gem_get_pages(obj, &pages);
 		if (ret)
 			goto fail;
 		ret = tiler_pin(omap_obj->block, pages, npages, roll, true);
@@ -722,7 +724,7 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
  * the omap_obj->dma_addrs[i] is set to the DMA address, and the page is
  * unmapped from the CPU.
  */
-static inline bool is_cached_coherent(struct drm_gem_object *obj)
+static inline bool omap_gem_is_cached_coherent(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 
@@ -738,7 +740,7 @@ void omap_gem_cpu_sync_page(struct drm_gem_object *obj, int pgoff)
 	struct drm_device *dev = obj->dev;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 
-	if (is_cached_coherent(obj))
+	if (omap_gem_is_cached_coherent(obj))
 		return;
 
 	if (omap_obj->dma_addrs[pgoff]) {
@@ -758,7 +760,7 @@ void omap_gem_dma_sync_buffer(struct drm_gem_object *obj,
 	struct page **pages = omap_obj->pages;
 	bool dirty = false;
 
-	if (is_cached_coherent(obj))
+	if (omap_gem_is_cached_coherent(obj))
 		return;
 
 	for (i = 0; i < npages; i++) {
@@ -806,7 +808,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 
 	mutex_lock(&obj->dev->struct_mutex);
 
-	if (!is_contiguous(omap_obj) && priv->has_dmm) {
+	if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) {
 		if (omap_obj->dma_addr_cnt == 0) {
 			struct page **pages;
 			u32 npages = obj->size >> PAGE_SHIFT;
@@ -815,7 +817,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 
 			BUG_ON(omap_obj->block);
 
-			ret = get_pages(obj, &pages);
+			ret = __omap_gem_get_pages(obj, &pages);
 			if (ret)
 				goto fail;
 
@@ -853,7 +855,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 		omap_obj->dma_addr_cnt++;
 
 		*dma_addr = omap_obj->dma_addr;
-	} else if (is_contiguous(omap_obj)) {
+	} else if (omap_gem_is_contiguous(omap_obj)) {
 		*dma_addr = omap_obj->dma_addr;
 	} else {
 		ret = -EINVAL;
@@ -953,7 +955,7 @@ int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages,
 		return 0;
 	}
 	mutex_lock(&obj->dev->struct_mutex);
-	ret = get_pages(obj, pages);
+	ret = __omap_gem_get_pages(obj, pages);
 	mutex_unlock(&obj->dev->struct_mutex);
 	return ret;
 }
@@ -979,7 +981,9 @@ void *omap_gem_vaddr(struct drm_gem_object *obj)
 	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
 	if (!omap_obj->vaddr) {
 		struct page **pages;
-		int ret = get_pages(obj, &pages);
+		int ret;
+
+		ret = __omap_gem_get_pages(obj, &pages);
 		if (ret)
 			return ERR_PTR(ret);
 		omap_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT,
@@ -1081,7 +1085,7 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 	struct omap_drm_private *priv = dev->dev_private;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 
-	evict(obj);
+	omap_gem_evict(obj);
 
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 
-- 
Regards,

Laurent Pinchart


* [PATCH v2 2/6] drm/omap: gem: Merge __omap_gem_get_pages() and omap_gem_attach_pages()
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 1/6] drm/omap: gem: Rename GEM function with omap_gem_* prefix Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 3/6] drm/omap: gem: Don't take struct_mutex to get GEM object mmap offset Laurent Pinchart
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

The __omap_gem_get_pages() function is a wrapper around
omap_gem_attach_pages() that returns the omap_obj->pages pointer through
a function argument. Some callers don't need the pages pointer, and all
of them can access omap_obj->pages directly. To simplify the code, merge
the __omap_gem_get_pages() wrapper with omap_gem_attach_pages() and
update the callers accordingly.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
Changes since v1:

- Remove locking requirement comment
---
 drivers/gpu/drm/omapdrm/omap_gem.c | 59 +++++++++++++-------------------------
 1 file changed, 20 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 6cfcf60cffe3..e23e7ce170aa 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -222,7 +222,7 @@ static void omap_gem_evict(struct drm_gem_object *obj)
  * Page Management
  */
 
-/** ensure backing pages are allocated */
+/* Ensure backing pages are allocated. */
 static int omap_gem_attach_pages(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
@@ -232,7 +232,12 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	int i, ret;
 	dma_addr_t *addrs;
 
-	WARN_ON(omap_obj->pages);
+	/*
+	 * If not using shmem (in which case backing pages don't need to be
+	 * allocated) or if pages are already allocated we're done.
+	 */
+	if (!(omap_obj->flags & OMAP_BO_MEM_SHMEM) || omap_obj->pages)
+		return 0;
 
 	pages = drm_gem_get_pages(obj);
 	if (IS_ERR(pages)) {
@@ -288,29 +293,6 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	return ret;
 }
 
-/* acquire pages when needed (for example, for DMA where physically
- * contiguous buffer is not required
- */
-static int __omap_gem_get_pages(struct drm_gem_object *obj,
-				struct page ***pages)
-{
-	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	int ret = 0;
-
-	if ((omap_obj->flags & OMAP_BO_MEM_SHMEM) && !omap_obj->pages) {
-		ret = omap_gem_attach_pages(obj);
-		if (ret) {
-			dev_err(obj->dev->dev, "could not attach pages\n");
-			return ret;
-		}
-	}
-
-	/* TODO: even phys-contig.. we should have a list of pages? */
-	*pages = omap_obj->pages;
-
-	return 0;
-}
-
 /** release backing pages */
 static void omap_gem_detach_pages(struct drm_gem_object *obj)
 {
@@ -516,7 +498,6 @@ int omap_gem_fault(struct vm_fault *vmf)
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	struct drm_device *dev = obj->dev;
-	struct page **pages;
 	int ret;
 
 	/* Make sure we don't parallel update on a fault, nor move or remove
@@ -525,7 +506,7 @@ int omap_gem_fault(struct vm_fault *vmf)
 	mutex_lock(&dev->struct_mutex);
 
 	/* if a shmem backed object, make sure we have pages attached now */
-	ret = __omap_gem_get_pages(obj, &pages);
+	ret = omap_gem_attach_pages(obj);
 	if (ret)
 		goto fail;
 
@@ -694,12 +675,12 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 
 	/* if we aren't mapped yet, we don't need to do anything */
 	if (omap_obj->block) {
-		struct page **pages;
-
-		ret = __omap_gem_get_pages(obj, &pages);
+		ret = omap_gem_attach_pages(obj);
 		if (ret)
 			goto fail;
-		ret = tiler_pin(omap_obj->block, pages, npages, roll, true);
+
+		ret = tiler_pin(omap_obj->block, omap_obj->pages, npages,
+				roll, true);
 		if (ret)
 			dev_err(obj->dev->dev, "could not repin: %d\n", ret);
 	}
@@ -810,14 +791,13 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 
 	if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) {
 		if (omap_obj->dma_addr_cnt == 0) {
-			struct page **pages;
 			u32 npages = obj->size >> PAGE_SHIFT;
 			enum tiler_fmt fmt = gem2fmt(omap_obj->flags);
 			struct tiler_block *block;
 
 			BUG_ON(omap_obj->block);
 
-			ret = __omap_gem_get_pages(obj, &pages);
+			ret = omap_gem_attach_pages(obj);
 			if (ret)
 				goto fail;
 
@@ -837,7 +817,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 			}
 
 			/* TODO: enable async refill.. */
-			ret = tiler_pin(block, pages, npages,
+			ret = tiler_pin(block, omap_obj->pages, npages,
 					omap_obj->roll, true);
 			if (ret) {
 				tiler_release(block);
@@ -946,16 +926,18 @@ int omap_gem_tiled_stride(struct drm_gem_object *obj, u32 orient)
 int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages,
 		bool remap)
 {
+	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret;
+
 	if (!remap) {
-		struct omap_gem_object *omap_obj = to_omap_bo(obj);
 		if (!omap_obj->pages)
 			return -ENOMEM;
 		*pages = omap_obj->pages;
 		return 0;
 	}
 	mutex_lock(&obj->dev->struct_mutex);
-	ret = __omap_gem_get_pages(obj, pages);
+	ret = omap_gem_attach_pages(obj);
+	*pages = omap_obj->pages;
 	mutex_unlock(&obj->dev->struct_mutex);
 	return ret;
 }
@@ -980,13 +962,12 @@ void *omap_gem_vaddr(struct drm_gem_object *obj)
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
 	if (!omap_obj->vaddr) {
-		struct page **pages;
 		int ret;
 
-		ret = __omap_gem_get_pages(obj, &pages);
+		ret = omap_gem_attach_pages(obj);
 		if (ret)
 			return ERR_PTR(ret);
-		omap_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT,
+		omap_obj->vaddr = vmap(omap_obj->pages, obj->size >> PAGE_SHIFT,
 				VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 	}
 	return omap_obj->vaddr;
-- 
Regards,

Laurent Pinchart


* [PATCH v2 3/6] drm/omap: gem: Don't take struct_mutex to get GEM object mmap offset
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 1/6] drm/omap: gem: Rename GEM function with omap_gem_* prefix Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 2/6] drm/omap: gem: Merge __omap_gem_get_pages() and omap_gem_attach_pages() Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock Laurent Pinchart
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

GEM object mmap offsets are created by calling
drm_gem_create_mmap_offset_size(), which doesn't need struct_mutex
protection as it includes its own locking and is based on a size that is
static across the object's lifetime. Remove the unneeded struct_mutex
locking.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/gpu/drm/omapdrm/omap_gem.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index e23e7ce170aa..0a9ba1f711e6 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -137,14 +137,12 @@ struct omap_drm_usergart {
  */
 
 /** get mmap offset */
-static u64 mmap_offset(struct drm_gem_object *obj)
+u64 omap_gem_mmap_offset(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
 	int ret;
 	size_t size;
 
-	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
 	/* Make it mmapable */
 	size = omap_gem_mmap_size(obj);
 	ret = drm_gem_create_mmap_offset_size(obj, size);
@@ -178,7 +176,7 @@ static void omap_gem_evict_entry(struct drm_gem_object *obj,
 	struct omap_drm_private *priv = obj->dev->dev_private;
 	int n = priv->usergart[fmt].height;
 	size_t size = PAGE_SIZE * n;
-	loff_t off = mmap_offset(obj) +
+	loff_t off = omap_gem_mmap_offset(obj) +
 			(entry->obj_pgoff << PAGE_SHIFT);
 	const int m = DIV_ROUND_UP(omap_obj->width << fmt, PAGE_SIZE);
 
@@ -319,16 +317,6 @@ u32 omap_gem_flags(struct drm_gem_object *obj)
 	return to_omap_bo(obj)->flags;
 }
 
-u64 omap_gem_mmap_offset(struct drm_gem_object *obj)
-{
-	u64 offset;
-
-	mutex_lock(&obj->dev->struct_mutex);
-	offset = mmap_offset(obj);
-	mutex_unlock(&obj->dev->struct_mutex);
-	return offset;
-}
-
 /** get mmap size */
 size_t omap_gem_mmap_size(struct drm_gem_object *obj)
 {
-- 
Regards,

Laurent Pinchart


* [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
                   ` (2 preceding siblings ...)
  2018-05-25 16:39 ` [PATCH v2 3/6] drm/omap: gem: Don't take struct_mutex to get GEM object mmap offset Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-26 16:54   ` [PATCH v2.1 " Laurent Pinchart
  2018-05-26 17:10   ` [PATCH v2 " Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 5/6] drm/omap: gem: Fix mm_list locking Laurent Pinchart
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

The DRM device struct_mutex is used to protect against concurrent GEM
object operations that deal with memory allocation and pinning. All
those operations are local to a GEM object and don't need to be
serialized across different GEM objects. Replace the struct_mutex with
a local omap_obj.lock or drop it altogether where not needed.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
Changes since v1:

- Use lockdep_assert_held() to verify locking correctness
---
 drivers/gpu/drm/omapdrm/omap_debugfs.c |   7 --
 drivers/gpu/drm/omapdrm/omap_fbdev.c   |   8 +--
 drivers/gpu/drm/omapdrm/omap_gem.c     | 122 +++++++++++++++++++++------------
 3 files changed, 81 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
index b42e286616b0..95ade441caa8 100644
--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
+++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
@@ -30,17 +30,10 @@ static int gem_show(struct seq_file *m, void *arg)
 	struct drm_info_node *node = (struct drm_info_node *) m->private;
 	struct drm_device *dev = node->minor->dev;
 	struct omap_drm_private *priv = dev->dev_private;
-	int ret;
-
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
 
 	seq_printf(m, "All Objects:\n");
 	omap_gem_describe_objects(&priv->obj_list, m);
 
-	mutex_unlock(&dev->struct_mutex);
-
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/omapdrm/omap_fbdev.c b/drivers/gpu/drm/omapdrm/omap_fbdev.c
index 0f66c74a54b0..d958cc813a94 100644
--- a/drivers/gpu/drm/omapdrm/omap_fbdev.c
+++ b/drivers/gpu/drm/omapdrm/omap_fbdev.c
@@ -170,13 +170,11 @@ static int omap_fbdev_create(struct drm_fb_helper *helper,
 		goto fail;
 	}
 
-	mutex_lock(&dev->struct_mutex);
-
 	fbi = drm_fb_helper_alloc_fbi(helper);
 	if (IS_ERR(fbi)) {
 		dev_err(dev->dev, "failed to allocate fb info\n");
 		ret = PTR_ERR(fbi);
-		goto fail_unlock;
+		goto fail;
 	}
 
 	DBG("fbi=%p, dev=%p", fbi, dev);
@@ -212,12 +210,8 @@ static int omap_fbdev_create(struct drm_fb_helper *helper,
 	DBG("par=%p, %dx%d", fbi->par, fbi->var.xres, fbi->var.yres);
 	DBG("allocated %dx%d fb", fbdev->fb->width, fbdev->fb->height);
 
-	mutex_unlock(&dev->struct_mutex);
-
 	return 0;
 
-fail_unlock:
-	mutex_unlock(&dev->struct_mutex);
 fail:
 
 	if (ret) {
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 0a9ba1f711e6..39cbc7273b29 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -47,6 +47,9 @@ struct omap_gem_object {
 	/** roll applied when mapping to DMM */
 	u32 roll;
 
+	/** protects dma_addr_cnt, block, pages, dma_addrs and vaddr */
+	struct mutex lock;
+
 	/**
 	 * dma_addr contains the buffer DMA address. It is valid for
 	 *
@@ -220,7 +223,10 @@ static void omap_gem_evict(struct drm_gem_object *obj)
  * Page Management
  */
 
-/* Ensure backing pages are allocated. */
+/*
+ * Ensure backing pages are allocated. Must be called with the omap_obj.lock
+ * held.
+ */
 static int omap_gem_attach_pages(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
@@ -230,6 +236,8 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	int i, ret;
 	dma_addr_t *addrs;
 
+	lockdep_assert_held(&omap_obj->lock);
+
 	/*
 	 * If not using shmem (in which case backing pages don't need to be
 	 * allocated) or if pages are already allocated we're done.
@@ -291,13 +299,15 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	return ret;
 }
 
-/** release backing pages */
+/* Release backing pages. Must be called with the omap_obj.lock held. */
 static void omap_gem_detach_pages(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	unsigned int npages = obj->size >> PAGE_SHIFT;
 	unsigned int i;
 
+	lockdep_assert_held(&omap_obj->lock);
+
 	for (i = 0; i < npages; i++) {
 		if (omap_obj->dma_addrs[i])
 			dma_unmap_page(obj->dev->dev, omap_obj->dma_addrs[i],
@@ -485,13 +495,12 @@ int omap_gem_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	struct drm_device *dev = obj->dev;
 	int ret;
 
 	/* Make sure we don't parallel update on a fault, nor move or remove
 	 * something from beneath our feet
 	 */
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	/* if a shmem backed object, make sure we have pages attached now */
 	ret = omap_gem_attach_pages(obj);
@@ -511,7 +520,7 @@ int omap_gem_fault(struct vm_fault *vmf)
 
 
 fail:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 	switch (ret) {
 	case 0:
 	case -ERESTARTSYS:
@@ -659,7 +668,7 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 
 	omap_obj->roll = roll;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	/* if we aren't mapped yet, we don't need to do anything */
 	if (omap_obj->block) {
@@ -674,7 +683,7 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 	}
 
 fail:
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 
 	return ret;
 }
@@ -775,7 +784,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret = 0;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) {
 		if (omap_obj->dma_addr_cnt == 0) {
@@ -831,7 +840,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 	}
 
 fail:
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 
 	return ret;
 }
@@ -849,7 +858,8 @@ void omap_gem_unpin(struct drm_gem_object *obj)
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
+
 	if (omap_obj->dma_addr_cnt > 0) {
 		omap_obj->dma_addr_cnt--;
 		if (omap_obj->dma_addr_cnt == 0) {
@@ -868,7 +878,7 @@ void omap_gem_unpin(struct drm_gem_object *obj)
 		}
 	}
 
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 }
 
 /* Get rotated scanout address (only valid if already pinned), at the
@@ -881,13 +891,16 @@ int omap_gem_rotated_dma_addr(struct drm_gem_object *obj, u32 orient,
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret = -EINVAL;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
+
 	if ((omap_obj->dma_addr_cnt > 0) && omap_obj->block &&
 			(omap_obj->flags & OMAP_BO_TILED)) {
 		*dma_addr = tiler_tsptr(omap_obj->block, orient, x, y);
 		ret = 0;
 	}
-	mutex_unlock(&obj->dev->struct_mutex);
+
+	mutex_unlock(&omap_obj->lock);
+
 	return ret;
 }
 
@@ -915,18 +928,26 @@ int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages,
 		bool remap)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	int ret;
+	int ret = 0;
 
-	if (!remap) {
-		if (!omap_obj->pages)
-			return -ENOMEM;
-		*pages = omap_obj->pages;
-		return 0;
+	mutex_lock(&omap_obj->lock);
+
+	if (remap) {
+		ret = omap_gem_attach_pages(obj);
+		if (ret)
+			goto unlock;
 	}
-	mutex_lock(&obj->dev->struct_mutex);
-	ret = omap_gem_attach_pages(obj);
+
+	if (!omap_obj->pages) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
 	*pages = omap_obj->pages;
-	mutex_unlock(&obj->dev->struct_mutex);
+
+unlock:
+	mutex_unlock(&omap_obj->lock);
+
 	return ret;
 }
 
@@ -941,24 +962,34 @@ int omap_gem_put_pages(struct drm_gem_object *obj)
 }
 
 #ifdef CONFIG_DRM_FBDEV_EMULATION
-/* Get kernel virtual address for CPU access.. this more or less only
- * exists for omap_fbdev.  This should be called with struct_mutex
- * held.
+/*
+ * Get kernel virtual address for CPU access.. this more or less only
+ * exists for omap_fbdev.
  */
 void *omap_gem_vaddr(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-	if (!omap_obj->vaddr) {
-		int ret;
+	void *vaddr;
+	int ret;
+
+	mutex_lock(&omap_obj->lock);
 
+	if (!omap_obj->vaddr) {
 		ret = omap_gem_attach_pages(obj);
-		if (ret)
-			return ERR_PTR(ret);
+		if (ret) {
+			vaddr = ERR_PTR(ret);
+			goto unlock;
+		}
+
 		omap_obj->vaddr = vmap(omap_obj->pages, obj->size >> PAGE_SHIFT,
 				VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 	}
-	return omap_obj->vaddr;
+
+	vaddr = omap_obj->vaddr;
+
+unlock:
+	mutex_unlock(&omap_obj->lock);
+	return vaddr;
 }
 #endif
 
@@ -1006,6 +1037,8 @@ void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 
 	off = drm_vma_node_start(&obj->vma_node);
 
+	mutex_lock(&omap_obj->lock);
+
 	seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d",
 			omap_obj->flags, obj->name, kref_read(&obj->refcount),
 			off, &omap_obj->dma_addr, omap_obj->dma_addr_cnt,
@@ -1023,6 +1056,8 @@ void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 		seq_printf(m, " %zu", obj->size);
 	}
 
+	mutex_unlock(&omap_obj->lock);
+
 	seq_printf(m, "\n");
 }
 
@@ -1056,15 +1091,16 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 
 	omap_gem_evict(obj);
 
-	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
 	spin_lock(&priv->list_lock);
 	list_del(&omap_obj->mm_list);
 	spin_unlock(&priv->list_lock);
 
-	/* this means the object is still pinned.. which really should
-	 * not happen.  I think..
+	/*
+	 * No need to take the omap_obj.lock as at this point we own the sole
+	 * reference to the object.
 	 */
+
+	/* The object should not be pinned. */
 	WARN_ON(omap_obj->dma_addr_cnt > 0);
 
 	if (omap_obj->pages) {
@@ -1085,6 +1121,8 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 
 	drm_gem_object_release(obj);
 
+	mutex_destroy(&omap_obj->lock);
+
 	kfree(omap_obj);
 }
 
@@ -1140,6 +1178,7 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
 
 	obj = &omap_obj->base;
 	omap_obj->flags = flags;
+	mutex_init(&omap_obj->lock);
 
 	if (flags & OMAP_BO_TILED) {
 		/*
@@ -1204,16 +1243,15 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 	if (sgt->orig_nents != 1 && !priv->has_dmm)
 		return ERR_PTR(-EINVAL);
 
-	mutex_lock(&dev->struct_mutex);
-
 	gsize.bytes = PAGE_ALIGN(size);
 	obj = omap_gem_new(dev, gsize, OMAP_BO_MEM_DMABUF | OMAP_BO_WC);
-	if (!obj) {
-		obj = ERR_PTR(-ENOMEM);
-		goto done;
-	}
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
 
 	omap_obj = to_omap_bo(obj);
+
+	mutex_lock(&omap_obj->lock);
+
 	omap_obj->sgt = sgt;
 
 	if (sgt->orig_nents == 1) {
@@ -1249,7 +1287,7 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 	}
 
 done:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 	return obj;
 }
 
-- 
Regards,

Laurent Pinchart


* [PATCH v2 5/6] drm/omap: gem: Fix mm_list locking
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
                   ` (3 preceding siblings ...)
  2018-05-25 16:39 ` [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-25 16:39 ` [PATCH v2 6/6] drm/omap: gem: Switch to gem_free_object_unlocked() Laurent Pinchart
  2018-05-29  9:29 ` [PATCH v2 0/6] omapdrm: struct_mutex removal Tomi Valkeinen
  6 siblings, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

From: Daniel Vetter <daniel.vetter@ffwll.ch>

- None of the list walks were protected.

- Switch to a mutex, since the list walk at device resume time can
  sleep when pinning buffers through the tiler.

The only thing we need to be careful about here is that while we walk
the list we can't unreference any GEM objects, since the final unref
would result in a recursive deadlock. But the only code paths that walk
the list are device resume and debugfs dumping, so this is all safe.
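
For clarity, the resume-time list walk that motivates the switch to a
mutex looks roughly like this (condensed from the patch below, not the
exact code):

mutex_lock(&priv->list_lock);
list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
        if (omap_obj->block) {
                /* tiler_pin() can sleep, hence a mutex and not a spinlock */
                ret = tiler_pin(omap_obj->block, omap_obj->pages,
                                npages, omap_obj->roll, true);
                if (ret)
                        goto done;
        }
}
done:
mutex_unlock(&priv->list_lock);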

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
Changes since v1:

- Add done label for mutex unlocking in omap_gem_resume()
- Updated commit message to mention that the resume operation can sleep
---
 drivers/gpu/drm/omapdrm/omap_debugfs.c |  2 ++
 drivers/gpu/drm/omapdrm/omap_drv.c     |  2 +-
 drivers/gpu/drm/omapdrm/omap_drv.h     |  2 +-
 drivers/gpu/drm/omapdrm/omap_gem.c     | 15 +++++++++------
 4 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
index 95ade441caa8..91cf043f2b6b 100644
--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
+++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
@@ -32,7 +32,9 @@ static int gem_show(struct seq_file *m, void *arg)
 	struct omap_drm_private *priv = dev->dev_private;
 
 	seq_printf(m, "All Objects:\n");
+	mutex_lock(&priv->list_lock);
 	omap_gem_describe_objects(&priv->obj_list, m);
+	mutex_unlock(&priv->list_lock);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index ef3b0e3571ec..5fcf9eaf3eaf 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -540,7 +540,7 @@ static int omapdrm_init(struct omap_drm_private *priv, struct device *dev)
 	priv->omaprev = soc ? (unsigned int)soc->data : 0;
 	priv->wq = alloc_ordered_workqueue("omapdrm", 0);
 
-	spin_lock_init(&priv->list_lock);
+	mutex_init(&priv->list_lock);
 	INIT_LIST_HEAD(&priv->obj_list);
 
 	/* Allocate and initialize the DRM device. */
diff --git a/drivers/gpu/drm/omapdrm/omap_drv.h b/drivers/gpu/drm/omapdrm/omap_drv.h
index 6eaee4df4559..f27c8e216adf 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.h
+++ b/drivers/gpu/drm/omapdrm/omap_drv.h
@@ -71,7 +71,7 @@ struct omap_drm_private {
 	struct workqueue_struct *wq;
 
 	/* lock for obj_list below */
-	spinlock_t list_lock;
+	struct mutex list_lock;
 
 	/* list of GEM objects: */
 	struct list_head obj_list;
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 39cbc7273b29..5bccaf4d2d33 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -1005,6 +1005,7 @@ int omap_gem_resume(struct drm_device *dev)
 	struct omap_gem_object *omap_obj;
 	int ret = 0;
 
+	mutex_lock(&priv->list_lock);
 	list_for_each_entry(omap_obj, &priv->obj_list, mm_list) {
 		if (omap_obj->block) {
 			struct drm_gem_object *obj = &omap_obj->base;
@@ -1016,12 +1017,14 @@ int omap_gem_resume(struct drm_device *dev)
 					omap_obj->roll, true);
 			if (ret) {
 				dev_err(dev->dev, "could not repin: %d\n", ret);
-				return ret;
+				goto done;
 			}
 		}
 	}
 
-	return 0;
+done:
+	mutex_unlock(&priv->list_lock);
+	return ret;
 }
 #endif
 
@@ -1091,9 +1094,9 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 
 	omap_gem_evict(obj);
 
-	spin_lock(&priv->list_lock);
+	mutex_lock(&priv->list_lock);
 	list_del(&omap_obj->mm_list);
-	spin_unlock(&priv->list_lock);
+	mutex_unlock(&priv->list_lock);
 
 	/*
 	 * No need to take the omap_obj.lock as at this point we own the sole
@@ -1218,9 +1221,9 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
 			goto err_release;
 	}
 
-	spin_lock(&priv->list_lock);
+	mutex_lock(&priv->list_lock);
 	list_add(&omap_obj->mm_list, &priv->obj_list);
-	spin_unlock(&priv->list_lock);
+	mutex_unlock(&priv->list_lock);
 
 	return obj;
 
-- 
Regards,

Laurent Pinchart


* [PATCH v2 6/6] drm/omap: gem: Switch to gem_free_object_unlocked()
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
                   ` (4 preceding siblings ...)
  2018-05-25 16:39 ` [PATCH v2 5/6] drm/omap: gem: Fix mm_list locking Laurent Pinchart
@ 2018-05-25 16:39 ` Laurent Pinchart
  2018-05-29  9:29 ` [PATCH v2 0/6] omapdrm: struct_mutex removal Tomi Valkeinen
  6 siblings, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-25 16:39 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

From: Daniel Vetter <daniel.vetter@ffwll.ch>

The only thing omap_gem_free_object() does that might need the magic
protection of struct_mutex (keeping all objects alive while that lock is
held, even if the last reference is gone) is the mm_list manipulation.
This is already protected by the separate omapdrm->list_lock, which
means that struct_mutex is not needed by omapdrm. We can therefore
switch to gem_free_object_unlocked().

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
---
Changes since v0:

- Rebased on top of struct_mutex removal and rephrased commit message
accordingly
---
 drivers/gpu/drm/omapdrm/omap_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
index 5fcf9eaf3eaf..5005ecc284d2 100644
--- a/drivers/gpu/drm/omapdrm/omap_drv.c
+++ b/drivers/gpu/drm/omapdrm/omap_drv.c
@@ -493,7 +493,7 @@ static struct drm_driver omap_drm_driver = {
 	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
 	.gem_prime_export = omap_gem_prime_export,
 	.gem_prime_import = omap_gem_prime_import,
-	.gem_free_object = omap_gem_free_object,
+	.gem_free_object_unlocked = omap_gem_free_object,
 	.gem_vm_ops = &omap_gem_vm_ops,
 	.dumb_create = omap_gem_dumb_create,
 	.dumb_map_offset = omap_gem_dumb_map_offset,
-- 
Regards,

Laurent Pinchart


* [PATCH v2.1 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock
  2018-05-25 16:39 ` [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock Laurent Pinchart
@ 2018-05-26 16:54   ` Laurent Pinchart
  2018-05-26 17:10   ` [PATCH v2 " Laurent Pinchart
  1 sibling, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-26 16:54 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

The DRM device struct_mutex is used to protect against concurrent GEM
object operations that deal with memory allocation and pinning. All
those operations are local to a GEM object and don't need to be
serialized across different GEM objects. Replace the struct_mutex with
a local omap_obj.lock or drop it altogether where not needed.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
Changes since v2:

- Take the omap_obj lock when destroying the GEM object to avoid a
  lockdep warning

Changes since v1:

- Use lockdep_assert_held() to verify locking correctness
---
 drivers/gpu/drm/omapdrm/omap_debugfs.c |   7 --
 drivers/gpu/drm/omapdrm/omap_fbdev.c   |   8 +--
 drivers/gpu/drm/omapdrm/omap_gem.c     | 127 ++++++++++++++++++++++-----------
 3 files changed, 86 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/omapdrm/omap_debugfs.c b/drivers/gpu/drm/omapdrm/omap_debugfs.c
index b42e286616b0..95ade441caa8 100644
--- a/drivers/gpu/drm/omapdrm/omap_debugfs.c
+++ b/drivers/gpu/drm/omapdrm/omap_debugfs.c
@@ -30,17 +30,10 @@ static int gem_show(struct seq_file *m, void *arg)
 	struct drm_info_node *node = (struct drm_info_node *) m->private;
 	struct drm_device *dev = node->minor->dev;
 	struct omap_drm_private *priv = dev->dev_private;
-	int ret;
-
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
 
 	seq_printf(m, "All Objects:\n");
 	omap_gem_describe_objects(&priv->obj_list, m);
 
-	mutex_unlock(&dev->struct_mutex);
-
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/omapdrm/omap_fbdev.c b/drivers/gpu/drm/omapdrm/omap_fbdev.c
index 0f66c74a54b0..d958cc813a94 100644
--- a/drivers/gpu/drm/omapdrm/omap_fbdev.c
+++ b/drivers/gpu/drm/omapdrm/omap_fbdev.c
@@ -170,13 +170,11 @@ static int omap_fbdev_create(struct drm_fb_helper *helper,
 		goto fail;
 	}
 
-	mutex_lock(&dev->struct_mutex);
-
 	fbi = drm_fb_helper_alloc_fbi(helper);
 	if (IS_ERR(fbi)) {
 		dev_err(dev->dev, "failed to allocate fb info\n");
 		ret = PTR_ERR(fbi);
-		goto fail_unlock;
+		goto fail;
 	}
 
 	DBG("fbi=%p, dev=%p", fbi, dev);
@@ -212,12 +210,8 @@ static int omap_fbdev_create(struct drm_fb_helper *helper,
 	DBG("par=%p, %dx%d", fbi->par, fbi->var.xres, fbi->var.yres);
 	DBG("allocated %dx%d fb", fbdev->fb->width, fbdev->fb->height);
 
-	mutex_unlock(&dev->struct_mutex);
-
 	return 0;
 
-fail_unlock:
-	mutex_unlock(&dev->struct_mutex);
 fail:
 
 	if (ret) {
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 0a9ba1f711e6..749e5abdd27c 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -47,6 +47,9 @@ struct omap_gem_object {
 	/** roll applied when mapping to DMM */
 	u32 roll;
 
+	/** protects dma_addr_cnt, block, pages, dma_addrs and vaddr */
+	struct mutex lock;
+
 	/**
 	 * dma_addr contains the buffer DMA address. It is valid for
 	 *
@@ -220,7 +223,10 @@ static void omap_gem_evict(struct drm_gem_object *obj)
  * Page Management
  */
 
-/* Ensure backing pages are allocated. */
+/*
+ * Ensure backing pages are allocated. Must be called with the omap_obj.lock
+ * held.
+ */
 static int omap_gem_attach_pages(struct drm_gem_object *obj)
 {
 	struct drm_device *dev = obj->dev;
@@ -230,6 +236,8 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	int i, ret;
 	dma_addr_t *addrs;
 
+	lockdep_assert_held(&omap_obj->lock);
+
 	/*
 	 * If not using shmem (in which case backing pages don't need to be
 	 * allocated) or if pages are already allocated we're done.
@@ -291,13 +299,15 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	return ret;
 }
 
-/** release backing pages */
+/* Release backing pages. Must be called with the omap_obj.lock held. */
 static void omap_gem_detach_pages(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	unsigned int npages = obj->size >> PAGE_SHIFT;
 	unsigned int i;
 
+	lockdep_assert_held(&omap_obj->lock);
+
 	for (i = 0; i < npages; i++) {
 		if (omap_obj->dma_addrs[i])
 			dma_unmap_page(obj->dev->dev, omap_obj->dma_addrs[i],
@@ -485,13 +495,12 @@ int omap_gem_fault(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct drm_gem_object *obj = vma->vm_private_data;
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	struct drm_device *dev = obj->dev;
 	int ret;
 
 	/* Make sure we don't parallel update on a fault, nor move or remove
 	 * something from beneath our feet
 	 */
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	/* if a shmem backed object, make sure we have pages attached now */
 	ret = omap_gem_attach_pages(obj);
@@ -511,7 +520,7 @@ int omap_gem_fault(struct vm_fault *vmf)
 
 
 fail:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 	switch (ret) {
 	case 0:
 	case -ERESTARTSYS:
@@ -659,7 +668,7 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 
 	omap_obj->roll = roll;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	/* if we aren't mapped yet, we don't need to do anything */
 	if (omap_obj->block) {
@@ -674,7 +683,7 @@ int omap_gem_roll(struct drm_gem_object *obj, u32 roll)
 	}
 
 fail:
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 
 	return ret;
 }
@@ -775,7 +784,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret = 0;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
 
 	if (!omap_gem_is_contiguous(omap_obj) && priv->has_dmm) {
 		if (omap_obj->dma_addr_cnt == 0) {
@@ -831,7 +840,7 @@ int omap_gem_pin(struct drm_gem_object *obj, dma_addr_t *dma_addr)
 	}
 
 fail:
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 
 	return ret;
 }
@@ -849,7 +858,8 @@ void omap_gem_unpin(struct drm_gem_object *obj)
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
+
 	if (omap_obj->dma_addr_cnt > 0) {
 		omap_obj->dma_addr_cnt--;
 		if (omap_obj->dma_addr_cnt == 0) {
@@ -868,7 +878,7 @@ void omap_gem_unpin(struct drm_gem_object *obj)
 		}
 	}
 
-	mutex_unlock(&obj->dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 }
 
 /* Get rotated scanout address (only valid if already pinned), at the
@@ -881,13 +891,16 @@ int omap_gem_rotated_dma_addr(struct drm_gem_object *obj, u32 orient,
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
 	int ret = -EINVAL;
 
-	mutex_lock(&obj->dev->struct_mutex);
+	mutex_lock(&omap_obj->lock);
+
 	if ((omap_obj->dma_addr_cnt > 0) && omap_obj->block &&
 			(omap_obj->flags & OMAP_BO_TILED)) {
 		*dma_addr = tiler_tsptr(omap_obj->block, orient, x, y);
 		ret = 0;
 	}
-	mutex_unlock(&obj->dev->struct_mutex);
+
+	mutex_unlock(&omap_obj->lock);
+
 	return ret;
 }
 
@@ -915,18 +928,26 @@ int omap_gem_get_pages(struct drm_gem_object *obj, struct page ***pages,
 		bool remap)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	int ret;
+	int ret = 0;
 
-	if (!remap) {
-		if (!omap_obj->pages)
-			return -ENOMEM;
-		*pages = omap_obj->pages;
-		return 0;
+	mutex_lock(&omap_obj->lock);
+
+	if (remap) {
+		ret = omap_gem_attach_pages(obj);
+		if (ret)
+			goto unlock;
 	}
-	mutex_lock(&obj->dev->struct_mutex);
-	ret = omap_gem_attach_pages(obj);
+
+	if (!omap_obj->pages) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+
 	*pages = omap_obj->pages;
-	mutex_unlock(&obj->dev->struct_mutex);
+
+unlock:
+	mutex_unlock(&omap_obj->lock);
+
 	return ret;
 }
 
@@ -941,24 +962,34 @@ int omap_gem_put_pages(struct drm_gem_object *obj)
 }
 
 #ifdef CONFIG_DRM_FBDEV_EMULATION
-/* Get kernel virtual address for CPU access.. this more or less only
- * exists for omap_fbdev.  This should be called with struct_mutex
- * held.
+/*
+ * Get kernel virtual address for CPU access.. this more or less only
+ * exists for omap_fbdev.
  */
 void *omap_gem_vaddr(struct drm_gem_object *obj)
 {
 	struct omap_gem_object *omap_obj = to_omap_bo(obj);
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
-	if (!omap_obj->vaddr) {
-		int ret;
+	void *vaddr;
+	int ret;
+
+	mutex_lock(&omap_obj->lock);
 
+	if (!omap_obj->vaddr) {
 		ret = omap_gem_attach_pages(obj);
-		if (ret)
-			return ERR_PTR(ret);
+		if (ret) {
+			vaddr = ERR_PTR(ret);
+			goto unlock;
+		}
+
 		omap_obj->vaddr = vmap(omap_obj->pages, obj->size >> PAGE_SHIFT,
 				VM_MAP, pgprot_writecombine(PAGE_KERNEL));
 	}
-	return omap_obj->vaddr;
+
+	vaddr = omap_obj->vaddr;
+
+unlock:
+	mutex_unlock(&omap_obj->lock);
+	return vaddr;
 }
 #endif
 
@@ -1006,6 +1037,8 @@ void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 
 	off = drm_vma_node_start(&obj->vma_node);
 
+	mutex_lock(&omap_obj->lock);
+
 	seq_printf(m, "%08x: %2d (%2d) %08llx %pad (%2d) %p %4d",
 			omap_obj->flags, obj->name, kref_read(&obj->refcount),
 			off, &omap_obj->dma_addr, omap_obj->dma_addr_cnt,
@@ -1023,6 +1056,8 @@ void omap_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 		seq_printf(m, " %zu", obj->size);
 	}
 
+	mutex_unlock(&omap_obj->lock);
+
 	seq_printf(m, "\n");
 }
 
@@ -1056,15 +1091,19 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 
 	omap_gem_evict(obj);
 
-	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-
 	spin_lock(&priv->list_lock);
 	list_del(&omap_obj->mm_list);
 	spin_unlock(&priv->list_lock);
 
-	/* this means the object is still pinned.. which really should
-	 * not happen.  I think..
+	/*
+	 * We own the sole reference to the object at this point, but to keep
+	 * lockdep happy, we must still take the omap_obj_lock to call
+	 * omap_gem_detach_pages(). This should hardly make any difference as
+	 * there can't be any lock contention.
 	 */
+	mutex_lock(&omap_obj->lock);
+
+	/* The object should not be pinned. */
 	WARN_ON(omap_obj->dma_addr_cnt > 0);
 
 	if (omap_obj->pages) {
@@ -1083,8 +1122,12 @@ void omap_gem_free_object(struct drm_gem_object *obj)
 		drm_prime_gem_destroy(obj, omap_obj->sgt);
 	}
 
+	mutex_unlock(&omap_obj->lock);
+
 	drm_gem_object_release(obj);
 
+	mutex_destroy(&omap_obj->lock);
+
 	kfree(omap_obj);
 }
 
@@ -1140,6 +1183,7 @@ struct drm_gem_object *omap_gem_new(struct drm_device *dev,
 
 	obj = &omap_obj->base;
 	omap_obj->flags = flags;
+	mutex_init(&omap_obj->lock);
 
 	if (flags & OMAP_BO_TILED) {
 		/*
@@ -1204,16 +1248,15 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 	if (sgt->orig_nents != 1 && !priv->has_dmm)
 		return ERR_PTR(-EINVAL);
 
-	mutex_lock(&dev->struct_mutex);
-
 	gsize.bytes = PAGE_ALIGN(size);
 	obj = omap_gem_new(dev, gsize, OMAP_BO_MEM_DMABUF | OMAP_BO_WC);
-	if (!obj) {
-		obj = ERR_PTR(-ENOMEM);
-		goto done;
-	}
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
 
 	omap_obj = to_omap_bo(obj);
+
+	mutex_lock(&omap_obj->lock);
+
 	omap_obj->sgt = sgt;
 
 	if (sgt->orig_nents == 1) {
@@ -1249,7 +1292,7 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
 	}
 
 done:
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&omap_obj->lock);
 	return obj;
 }
 
-- 
Regards,

Laurent Pinchart


* Re: [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock
  2018-05-25 16:39 ` [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock Laurent Pinchart
  2018-05-26 16:54   ` [PATCH v2.1 " Laurent Pinchart
@ 2018-05-26 17:10   ` Laurent Pinchart
  1 sibling, 0 replies; 10+ messages in thread
From: Laurent Pinchart @ 2018-05-26 17:10 UTC (permalink / raw)
  To: dri-devel; +Cc: Daniel Vetter, Tomi Valkeinen

Hello,

On Friday, 25 May 2018 19:39:23 EEST Laurent Pinchart wrote:
> The DRM device struct_mutex is used to protect against concurrent GEM
> object operations that deal with memory allocation and pinning. All
> those operations are local to a GEM object and don't need to be
> serialized across different GEM objects. Replace the struct_mutex with
> a local omap_obj.lock or drop it altogether where not needed.
> 
> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
> ---
> Changes since v1:
> 
> - Use lockdep_assert_held() to verify locking correctness
> ---
>  drivers/gpu/drm/omapdrm/omap_debugfs.c |   7 --
>  drivers/gpu/drm/omapdrm/omap_fbdev.c   |   8 +--
>  drivers/gpu/drm/omapdrm/omap_gem.c     | 122 ++++++++++++++++++++----------
>  3 files changed, 81 insertions(+), 56 deletions(-)

[snip]

> diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c
> b/drivers/gpu/drm/omapdrm/omap_gem.c index 0a9ba1f711e6..39cbc7273b29
> 100644
> --- a/drivers/gpu/drm/omapdrm/omap_gem.c
> +++ b/drivers/gpu/drm/omapdrm/omap_gem.c

[snip]

> -/** release backing pages */
> +/* Release backing pages. Must be called with the omap_obj.lock held. */
>  static void omap_gem_detach_pages(struct drm_gem_object *obj)
>  {
>  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
>  	unsigned int npages = obj->size >> PAGE_SHIFT;
>  	unsigned int i;
> 
> +	lockdep_assert_held(&omap_obj->lock);
> +
>  	for (i = 0; i < npages; i++) {
>  		if (omap_obj->dma_addrs[i])
>  			dma_unmap_page(obj->dev->dev, omap_obj->dma_addrs[i],

[snip]

> @@ -1056,15 +1091,16 @@ void omap_gem_free_object(struct drm_gem_object
> *obj)
> 
>  	omap_gem_evict(obj);
> 
> -	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
> -
>  	spin_lock(&priv->list_lock);
>  	list_del(&omap_obj->mm_list);
>  	spin_unlock(&priv->list_lock);
> 
> -	/* this means the object is still pinned.. which really should
> -	 * not happen.  I think..
> +	/*
> +	 * No need to take the omap_obj.lock as at this point we own the sole
> +	 * reference to the object.
>  	 */

And this of course causes a lockdep warning, as the function calls 
omap_gem_detach_pages(), and the lockdep_assert_held() call there doesn't know 
we're the sole owner. I've sent a v2.1 to fix this.
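
Condensed, the v2.1 fix takes the lock around the teardown even though we
hold the last reference, so that the assertion in omap_gem_detach_pages()
is satisfied (a sketch based on the v2.1 patch above, not the exact code):

/* omap_gem_free_object() */
mutex_lock(&omap_obj->lock);

WARN_ON(omap_obj->dma_addr_cnt > 0);

if (omap_obj->pages)
        omap_gem_detach_pages(obj);	/* lockdep_assert_held(&omap_obj->lock) */

mutex_unlock(&omap_obj->lock);

drm_gem_object_release(obj);
mutex_destroy(&omap_obj->lock);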

> +
> +	/* The object should not be pinned. */
>  	WARN_ON(omap_obj->dma_addr_cnt > 0);
> 
>  	if (omap_obj->pages) {

[snip]

-- 
Regards,

Laurent Pinchart




* Re: [PATCH v2 0/6] omapdrm: struct_mutex removal
  2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
                   ` (5 preceding siblings ...)
  2018-05-25 16:39 ` [PATCH v2 6/6] drm/omap: gem: Switch to gem_free_object_unlocked() Laurent Pinchart
@ 2018-05-29  9:29 ` Tomi Valkeinen
  6 siblings, 0 replies; 10+ messages in thread
From: Tomi Valkeinen @ 2018-05-29  9:29 UTC (permalink / raw)
  To: Laurent Pinchart, dri-devel; +Cc: Daniel Vetter

On 25/05/18 19:39, Laurent Pinchart wrote:
> Hello,
> 
> This patch series removes the usage of struct_mutex from the omapdrm driver in
> order to switch to gem_free_object_unlocked(). The series is inspired by
> Daniel Vetter's recent gem_free_object_unlocked() patches (starting with
> "[PATCH 1/5] staging/vboxvideo: Use gem_free_object_unlocked") and
> includes patches "[PATCH 4/5] drm/omapdrm: Fix mm_list locking" and
> "[PATCH] drm/omapdrm: Switch to gem_free_object_unlocked" (the latter
> modified due to the rebase).
> 
> When reviewing Daniel's patches I noticed a potential issue in lock handling
> which prompted me to go and remove all usage of struct_mutex from the omapdrm
> driver. Instead of replacing it with a device-wide lock, I have decided to
> create per-GEM object locks as there is no need, as far as I can see, to
> serialize operations across separate GEM objects.
> 
> The series starts with a bit of cleanup in the form of renaming (1/6) and
> refactoring (2/6), followed by removal of struct_mutex (3/6 and 4/6). It then
> ends with Daniel's patches that switch to gem_free_object_unlocked().
> 
> The patches are based on top of the latest drm-misc. They have been tested on
> a Pandaboard.

Thanks! I'll queue these for v4.19 (I think we're a bit late for v4.18).

 Tomi

-- 
Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

Thread overview: 10 messages
2018-05-25 16:39 [PATCH v2 0/6] omapdrm: struct_mutex removal Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 1/6] drm/omap: gem: Rename GEM function with omap_gem_* prefix Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 2/6] drm/omap: gem: Merge __omap_gem_get_pages() and omap_gem_attach_pages() Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 3/6] drm/omap: gem: Don't take struct_mutex to get GEM object mmap offset Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 4/6] drm/omap: gem: Replace struct_mutex usage with omap_obj private lock Laurent Pinchart
2018-05-26 16:54   ` [PATCH v2.1 " Laurent Pinchart
2018-05-26 17:10   ` [PATCH v2 " Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 5/6] drm/omap: gem: Fix mm_list locking Laurent Pinchart
2018-05-25 16:39 ` [PATCH v2 6/6] drm/omap: gem: Switch to gem_free_object_unlocked() Laurent Pinchart
2018-05-29  9:29 ` [PATCH v2 0/6] omapdrm: struct_mutex removal Tomi Valkeinen
