* [PATCH 0/5] kill omap_gem_helpers
@ 2013-07-07 18:58 Rob Clark
  2013-07-07 18:58 ` [PATCH 1/5] drm/gem: add drm_gem_create_mmap_offset_size() Rob Clark
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Move omap_gem_helpers to core, since I'll need some of the same
helpers for msm driver.  Also update udl and gma500 to use the
helpers.  Possibly i915 could as well, although it does some more
sophisticated things so I'll leave that for now.

Rob Clark (5):
  drm/gem: add drm_gem_create_mmap_offset_size()
  drm/gem: add shmem get/put page helpers
  drm/gma500: use gem get/put page helpers
  drm/udl: use gem get/put page helpers
  drm/omap: kill omap_gem_helpers.c

 drivers/gpu/drm/drm_gem.c                  | 119 +++++++++++++++++++-
 drivers/gpu/drm/gma500/gtt.c               |  38 +------
 drivers/gpu/drm/omapdrm/Makefile           |   3 -
 drivers/gpu/drm/omapdrm/omap_gem.c         |   8 +-
 drivers/gpu/drm/omapdrm/omap_gem_helpers.c | 169 -----------------------------
 drivers/gpu/drm/udl/udl_gem.c              |  44 +-------
 include/drm/drmP.h                         |   5 +
 7 files changed, 136 insertions(+), 250 deletions(-)
 delete mode 100644 drivers/gpu/drm/omapdrm/omap_gem_helpers.c

-- 
1.8.1.4

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/5] drm/gem: add drm_gem_create_mmap_offset_size()
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
@ 2013-07-07 18:58 ` Rob Clark
  2013-07-07 18:58 ` [PATCH 2/5] drm/gem: add shmem get/put page helpers Rob Clark
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Variant of drm_gem_create_mmap_offset() which does not assume that the
virtual size and the physical size (obj->size) are the same.  This is
needed in omapdrm to deal with tiled buffers, and it lets us get rid of
a duplicated and slightly modified version of
drm_gem_create_mmap_offset() in omapdrm.

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/drm_gem.c | 28 ++++++++++++++++++++++++----
 include/drm/drmP.h        |  1 +
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index cf919e3..443eeff 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -320,18 +320,21 @@ drm_gem_free_mmap_offset(struct drm_gem_object *obj)
 EXPORT_SYMBOL(drm_gem_free_mmap_offset);
 
 /**
- * drm_gem_create_mmap_offset - create a fake mmap offset for an object
+ * drm_gem_create_mmap_offset_size - create a fake mmap offset for an object
  * @obj: obj in question
+ * @size: the virtual size
  *
  * GEM memory mapping works by handing back to userspace a fake mmap offset
  * it can use in a subsequent mmap(2) call.  The DRM core code then looks
  * up the object based on the offset and sets up the various memory mapping
  * structures.
  *
- * This routine allocates and attaches a fake offset for @obj.
+ * This routine allocates and attaches a fake offset for @obj, in cases where
+ * the virtual size differs from the physical size (i.e. obj->size).  Otherwise
+ * just use drm_gem_create_mmap_offset().
  */
 int
-drm_gem_create_mmap_offset(struct drm_gem_object *obj)
+drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size)
 {
 	struct drm_device *dev = obj->dev;
 	struct drm_gem_mm *mm = dev->mm_private;
@@ -347,7 +350,7 @@ drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 
 	map = list->map;
 	map->type = _DRM_GEM;
-	map->size = obj->size;
+	map->size = size;
 	map->handle = obj;
 
 	/* Get a DRM GEM mmap offset allocated... */
@@ -384,6 +387,23 @@ out_free_list:
 
 	return ret;
 }
+EXPORT_SYMBOL(drm_gem_create_mmap_offset_size);
+
+/**
+ * drm_gem_create_mmap_offset - create a fake mmap offset for an object
+ * @obj: obj in question
+ *
+ * GEM memory mapping works by handing back to userspace a fake mmap offset
+ * it can use in a subsequent mmap(2) call.  The DRM core code then looks
+ * up the object based on the offset and sets up the various memory mapping
+ * structures.
+ *
+ * This routine allocates and attaches a fake offset for @obj.
+ */
+int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
+{
+	return drm_gem_create_mmap_offset_size(obj, obj->size);
+}
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);
 
 /** Returns a reference to the object named by the handle. */
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 63d17ee..3cb1672 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1728,6 +1728,7 @@ drm_gem_object_handle_unreference_unlocked(struct drm_gem_object *obj)
 
 void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
+int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
 struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
 					     struct drm_file *filp,
-- 
1.8.1.4


* [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
  2013-07-07 18:58 ` [PATCH 1/5] drm/gem: add drm_gem_create_mmap_offset_size() Rob Clark
@ 2013-07-07 18:58 ` Rob Clark
  2013-07-08  8:45   ` Patrik Jakobsson
  2013-07-07 18:58 ` [PATCH 3/5] drm/gma500: use gem " Rob Clark
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Basically just extracting some code duplicated in gma500, omapdrm, udl,
and upcoming msm driver.

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |  4 +++
 2 files changed, 95 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 443eeff..853dea6 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);
 
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * from shmem
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+	struct inode *inode;
+	struct address_space *mapping;
+	struct page *p, **pages;
+	int i, npages;
+
+	/* This is the shared memory object that backs the GEM resource */
+	inode = file_inode(obj->filp);
+	mapping = inode->i_mapping;
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = drm_malloc_ab(npages, sizeof(struct page *));
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	gfpmask |= mapping_gfp_mask(mapping);
+
+	for (i = 0; i < npages; i++) {
+		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+		if (IS_ERR(p))
+			goto fail;
+		pages[i] = p;
+
+		/* There is a hypothetical issue w/ drivers that require
+		 * buffer memory in the low 4GB.. if the pages are un-
+		 * pinned, and swapped out, they can end up swapped back
+		 * in above 4GB.  If pages are already in memory, then
+		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+		 * even if the already in-memory page disobeys the mask.
+		 *
+		 * It is only a theoretical issue today, because none of
+		 * the devices with this limitation can be populated with
+		 * enough memory to trigger the issue.  But this BUG_ON()
+		 * is here as a reminder in case the problem with
+		 * shmem_read_mapping_page_gfp() isn't solved by the time
+		 * it does become a real issue.
+		 *
+		 * See this thread: http://lkml.org/lkml/2011/7/11/238
+		 */
+		BUG_ON((gfpmask & __GFP_DMA32) &&
+				(page_to_pfn(p) >= 0x00100000UL));
+	}
+
+	return pages;
+
+fail:
+	while (i--)
+		page_cache_release(pages[i]);
+
+	drm_free_large(pages);
+	return ERR_CAST(p);
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ * @dirty: if true, pages will be marked as dirty
+ * @accessed: if true, the pages will be marked as accessed
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed)
+{
+	int i, npages;
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	for (i = 0; i < npages; i++) {
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		if (accessed)
+			mark_page_accessed(pages[i]);
+
+		/* Undo the reference we took when populating the table */
+		page_cache_release(pages[i]);
+	}
+
+	drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /** Returns a reference to the object named by the handle. */
 struct drm_gem_object *
 drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index 3cb1672..7ec3fa4 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed);
+
 struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
 					     struct drm_file *filp,
 					     u32 handle);
-- 
1.8.1.4


* [PATCH 3/5] drm/gma500: use gem get/put page helpers
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
  2013-07-07 18:58 ` [PATCH 1/5] drm/gem: add drm_gem_create_mmap_offset_size() Rob Clark
  2013-07-07 18:58 ` [PATCH 2/5] drm/gem: add shmem get/put page helpers Rob Clark
@ 2013-07-07 18:58 ` Rob Clark
  2013-07-07 18:58 ` [PATCH 4/5] drm/udl: " Rob Clark
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/gma500/gtt.c | 38 ++++++--------------------------------
 1 file changed, 6 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/gma500/gtt.c b/drivers/gpu/drm/gma500/gtt.c
index 1f82183..92babac 100644
--- a/drivers/gpu/drm/gma500/gtt.c
+++ b/drivers/gpu/drm/gma500/gtt.c
@@ -196,37 +196,17 @@ void psb_gtt_roll(struct drm_device *dev, struct gtt_range *r, int roll)
  */
 static int psb_gtt_attach_pages(struct gtt_range *gt)
 {
-	struct inode *inode;
-	struct address_space *mapping;
-	int i;
-	struct page *p;
-	int pages = gt->gem.size / PAGE_SIZE;
+	struct page **pages;
 
 	WARN_ON(gt->pages);
 
-	/* This is the shared memory object that backs the GEM resource */
-	inode = file_inode(gt->gem.filp);
-	mapping = inode->i_mapping;
+	pages = drm_gem_get_pages(&gt->gem, 0);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	gt->pages = kmalloc(pages * sizeof(struct page *), GFP_KERNEL);
-	if (gt->pages == NULL)
-		return -ENOMEM;
-	gt->npage = pages;
+	gt->pages = pages;
 
-	for (i = 0; i < pages; i++) {
-		p = shmem_read_mapping_page(mapping, i);
-		if (IS_ERR(p))
-			goto err;
-		gt->pages[i] = p;
-	}
 	return 0;
-
-err:
-	while (i--)
-		page_cache_release(gt->pages[i]);
-	kfree(gt->pages);
-	gt->pages = NULL;
-	return PTR_ERR(p);
 }
 
 /**
@@ -240,13 +220,7 @@ err:
  */
 static void psb_gtt_detach_pages(struct gtt_range *gt)
 {
-	int i;
-	for (i = 0; i < gt->npage; i++) {
-		/* FIXME: do we need to force dirty */
-		set_page_dirty(gt->pages[i]);
-		page_cache_release(gt->pages[i]);
-	}
-	kfree(gt->pages);
+	drm_gem_put_pages(&gt->gem, gt->pages, true, false);
 	gt->pages = NULL;
 }
 
-- 
1.8.1.4


* [PATCH 4/5] drm/udl: use gem get/put page helpers
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
                   ` (2 preceding siblings ...)
  2013-07-07 18:58 ` [PATCH 3/5] drm/gma500: use gem " Rob Clark
@ 2013-07-07 18:58 ` Rob Clark
  2013-07-07 18:58 ` [PATCH 5/5] drm/omap: kill omap_gem_helpers.c Rob Clark
  2013-07-07 20:28 ` [PATCH 0/5] kill omap_gem_helpers David Herrmann
  5 siblings, 0 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/udl/udl_gem.c | 44 ++++++-------------------------------------
 1 file changed, 6 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/udl/udl_gem.c b/drivers/gpu/drm/udl/udl_gem.c
index ef034fa..e37e75a 100644
--- a/drivers/gpu/drm/udl/udl_gem.c
+++ b/drivers/gpu/drm/udl/udl_gem.c
@@ -123,55 +123,23 @@ int udl_gem_init_object(struct drm_gem_object *obj)
 
 static int udl_gem_get_pages(struct udl_gem_object *obj, gfp_t gfpmask)
 {
-	int page_count, i;
-	struct page *page;
-	struct inode *inode;
-	struct address_space *mapping;
+	struct page **pages;
 
 	if (obj->pages)
 		return 0;
 
-	page_count = obj->base.size / PAGE_SIZE;
-	BUG_ON(obj->pages != NULL);
-	obj->pages = drm_malloc_ab(page_count, sizeof(struct page *));
-	if (obj->pages == NULL)
-		return -ENOMEM;
-
-	inode = file_inode(obj->base.filp);
-	mapping = inode->i_mapping;
-	gfpmask |= mapping_gfp_mask(mapping);
+	pages = drm_gem_get_pages(&obj->base, gfpmask);
+	if (IS_ERR(pages))
+		return PTR_ERR(pages);
 
-	for (i = 0; i < page_count; i++) {
-		page = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
-		if (IS_ERR(page))
-			goto err_pages;
-		obj->pages[i] = page;
-	}
+	obj->pages = pages;
 
 	return 0;
-err_pages:
-	while (i--)
-		page_cache_release(obj->pages[i]);
-	drm_free_large(obj->pages);
-	obj->pages = NULL;
-	return PTR_ERR(page);
 }
 
 static void udl_gem_put_pages(struct udl_gem_object *obj)
 {
-	int page_count = obj->base.size / PAGE_SIZE;
-	int i;
-
-	if (obj->base.import_attach) {
-		drm_free_large(obj->pages);
-		obj->pages = NULL;
-		return;
-	}
-
-	for (i = 0; i < page_count; i++)
-		page_cache_release(obj->pages[i]);
-
-	drm_free_large(obj->pages);
+	drm_gem_put_pages(&obj->base, obj->pages, false, false);
 	obj->pages = NULL;
 }
 
-- 
1.8.1.4


* [PATCH 5/5] drm/omap: kill omap_gem_helpers.c
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
                   ` (3 preceding siblings ...)
  2013-07-07 18:58 ` [PATCH 4/5] drm/udl: " Rob Clark
@ 2013-07-07 18:58 ` Rob Clark
  2013-07-07 20:28 ` [PATCH 0/5] kill omap_gem_helpers David Herrmann
  5 siblings, 0 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-07 18:58 UTC (permalink / raw)
  To: dri-devel

Signed-off-by: Rob Clark <robdclark@gmail.com>
---
 drivers/gpu/drm/omapdrm/Makefile           |   3 -
 drivers/gpu/drm/omapdrm/omap_gem.c         |   8 +-
 drivers/gpu/drm/omapdrm/omap_gem_helpers.c | 169 -----------------------------
 3 files changed, 4 insertions(+), 176 deletions(-)
 delete mode 100644 drivers/gpu/drm/omapdrm/omap_gem_helpers.c

diff --git a/drivers/gpu/drm/omapdrm/Makefile b/drivers/gpu/drm/omapdrm/Makefile
index d85e058..778372b 100644
--- a/drivers/gpu/drm/omapdrm/Makefile
+++ b/drivers/gpu/drm/omapdrm/Makefile
@@ -18,7 +18,4 @@ omapdrm-y := omap_drv.o \
 	omap_dmm_tiler.o \
 	tcm-sita.o
 
-# temporary:
-omapdrm-y += omap_gem_helpers.o
-
 obj-$(CONFIG_DRM_OMAP)	+= omapdrm.o
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index ebbdf41..1b5a724 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -236,7 +236,7 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	 * mapping_gfp_mask(mapping) which conflicts w/ GFP_DMA32.. probably
 	 * we actually want CMA memory for it all anyways..
 	 */
-	pages = _drm_gem_get_pages(obj, GFP_KERNEL);
+	pages = drm_gem_get_pages(obj, GFP_KERNEL);
 	if (IS_ERR(pages)) {
 		dev_err(obj->dev->dev, "could not get pages: %ld\n", PTR_ERR(pages));
 		return PTR_ERR(pages);
@@ -270,7 +270,7 @@ static int omap_gem_attach_pages(struct drm_gem_object *obj)
 	return 0;
 
 free_pages:
-	_drm_gem_put_pages(obj, pages, true, false);
+	drm_gem_put_pages(obj, pages, true, false);
 
 	return ret;
 }
@@ -294,7 +294,7 @@ static void omap_gem_detach_pages(struct drm_gem_object *obj)
 	kfree(omap_obj->addrs);
 	omap_obj->addrs = NULL;
 
-	_drm_gem_put_pages(obj, omap_obj->pages, true, false);
+	drm_gem_put_pages(obj, omap_obj->pages, true, false);
 	omap_obj->pages = NULL;
 }
 
@@ -314,7 +314,7 @@ static uint64_t mmap_offset(struct drm_gem_object *obj)
 	if (!obj->map_list.map) {
 		/* Make it mmapable */
 		size_t size = omap_gem_mmap_size(obj);
-		int ret = _drm_gem_create_mmap_offset_size(obj, size);
+		int ret = drm_gem_create_mmap_offset_size(obj, size);
 
 		if (ret) {
 			dev_err(dev->dev, "could not allocate mmap offset\n");
diff --git a/drivers/gpu/drm/omapdrm/omap_gem_helpers.c b/drivers/gpu/drm/omapdrm/omap_gem_helpers.c
deleted file mode 100644
index f9eb679..0000000
--- a/drivers/gpu/drm/omapdrm/omap_gem_helpers.c
+++ /dev/null
@@ -1,169 +0,0 @@
-/*
- * drivers/gpu/drm/omapdrm/omap_gem_helpers.c
- *
- * Copyright (C) 2011 Texas Instruments
- * Author: Rob Clark <rob.clark@linaro.org>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 as published by
- * the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-/* temporary copy of drm_gem_{get,put}_pages() until the
- * "drm/gem: add functions to get/put pages" patch is merged..
- */
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/shmem_fs.h>
-
-#include <drm/drmP.h>
-
-/**
- * drm_gem_get_pages - helper to allocate backing pages for a GEM object
- * @obj: obj in question
- * @gfpmask: gfp mask of requested pages
- */
-struct page **_drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
-{
-	struct inode *inode;
-	struct address_space *mapping;
-	struct page *p, **pages;
-	int i, npages;
-
-	/* This is the shared memory object that backs the GEM resource */
-	inode = file_inode(obj->filp);
-	mapping = inode->i_mapping;
-
-	npages = obj->size >> PAGE_SHIFT;
-
-	pages = drm_malloc_ab(npages, sizeof(struct page *));
-	if (pages == NULL)
-		return ERR_PTR(-ENOMEM);
-
-	gfpmask |= mapping_gfp_mask(mapping);
-
-	for (i = 0; i < npages; i++) {
-		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
-		if (IS_ERR(p))
-			goto fail;
-		pages[i] = p;
-
-		/* There is a hypothetical issue w/ drivers that require
-		 * buffer memory in the low 4GB.. if the pages are un-
-		 * pinned, and swapped out, they can end up swapped back
-		 * in above 4GB.  If pages are already in memory, then
-		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
-		 * even if the already in-memory page disobeys the mask.
-		 *
-		 * It is only a theoretical issue today, because none of
-		 * the devices with this limitation can be populated with
-		 * enough memory to trigger the issue.  But this BUG_ON()
-		 * is here as a reminder in case the problem with
-		 * shmem_read_mapping_page_gfp() isn't solved by the time
-		 * it does become a real issue.
-		 *
-		 * See this thread: http://lkml.org/lkml/2011/7/11/238
-		 */
-		BUG_ON((gfpmask & __GFP_DMA32) &&
-				(page_to_pfn(p) >= 0x00100000UL));
-	}
-
-	return pages;
-
-fail:
-	while (i--)
-		page_cache_release(pages[i]);
-
-	drm_free_large(pages);
-	return ERR_CAST(p);
-}
-
-/**
- * drm_gem_put_pages - helper to free backing pages for a GEM object
- * @obj: obj in question
- * @pages: pages to free
- */
-void _drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
-		bool dirty, bool accessed)
-{
-	int i, npages;
-
-	npages = obj->size >> PAGE_SHIFT;
-
-	for (i = 0; i < npages; i++) {
-		if (dirty)
-			set_page_dirty(pages[i]);
-
-		if (accessed)
-			mark_page_accessed(pages[i]);
-
-		/* Undo the reference we took when populating the table */
-		page_cache_release(pages[i]);
-	}
-
-	drm_free_large(pages);
-}
-
-int
-_drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size)
-{
-	struct drm_device *dev = obj->dev;
-	struct drm_gem_mm *mm = dev->mm_private;
-	struct drm_map_list *list;
-	struct drm_local_map *map;
-	int ret = 0;
-
-	/* Set the object up for mmap'ing */
-	list = &obj->map_list;
-	list->map = kzalloc(sizeof(struct drm_map_list), GFP_KERNEL);
-	if (!list->map)
-		return -ENOMEM;
-
-	map = list->map;
-	map->type = _DRM_GEM;
-	map->size = size;
-	map->handle = obj;
-
-	/* Get a DRM GEM mmap offset allocated... */
-	list->file_offset_node = drm_mm_search_free(&mm->offset_manager,
-			size / PAGE_SIZE, 0, 0);
-
-	if (!list->file_offset_node) {
-		DRM_ERROR("failed to allocate offset for bo %d\n", obj->name);
-		ret = -ENOSPC;
-		goto out_free_list;
-	}
-
-	list->file_offset_node = drm_mm_get_block(list->file_offset_node,
-			size / PAGE_SIZE, 0);
-	if (!list->file_offset_node) {
-		ret = -ENOMEM;
-		goto out_free_list;
-	}
-
-	list->hash.key = list->file_offset_node->start;
-	ret = drm_ht_insert_item(&mm->offset_hash, &list->hash);
-	if (ret) {
-		DRM_ERROR("failed to add to map hash\n");
-		goto out_free_mm;
-	}
-
-	return 0;
-
-out_free_mm:
-	drm_mm_put_block(list->file_offset_node);
-out_free_list:
-	kfree(list->map);
-	list->map = NULL;
-
-	return ret;
-}
-- 
1.8.1.4


* Re: [PATCH 0/5] kill omap_gem_helpers
  2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
                   ` (4 preceding siblings ...)
  2013-07-07 18:58 ` [PATCH 5/5] drm/omap: kill omap_gem_helpers.c Rob Clark
@ 2013-07-07 20:28 ` David Herrmann
  5 siblings, 0 replies; 13+ messages in thread
From: David Herrmann @ 2013-07-07 20:28 UTC (permalink / raw)
  To: Rob Clark; +Cc: dri-devel

Hi

On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
> Move omap_gem_helpers to core, since I'll need some of the same
> helpers for msm driver.  Also update udl and gma500 to use the
> helpers.  Possibly i915 could as well, although it does some more
> sophisticated things so I'll leave that for now.
>
> Rob Clark (5):
>   drm/gem: add drm_gem_create_mmap_offset_size()
>   drm/gem: add shmem get/put page helpers
>   drm/gma500: use gem get/put page helpers
>   drm/udl: use gem get/put page helpers
>   drm/omap: kill omap_gem_helpers.c

This conflicts with my VMA-manager patches, but that should be easily
solvable. Anyway, all 5 patches:
  Reviewed-by: David Herrmann <dh.herrmann@gmail.com>

Cheers
David

>  drivers/gpu/drm/drm_gem.c                  | 119 +++++++++++++++++++-
>  drivers/gpu/drm/gma500/gtt.c               |  38 +------
>  drivers/gpu/drm/omapdrm/Makefile           |   3 -
>  drivers/gpu/drm/omapdrm/omap_gem.c         |   8 +-
>  drivers/gpu/drm/omapdrm/omap_gem_helpers.c | 169 -----------------------------
>  drivers/gpu/drm/udl/udl_gem.c              |  44 +-------
>  include/drm/drmP.h                         |   5 +
>  7 files changed, 136 insertions(+), 250 deletions(-)
>  delete mode 100644 drivers/gpu/drm/omapdrm/omap_gem_helpers.c
>
> --
> 1.8.1.4
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-07 18:58 ` [PATCH 2/5] drm/gem: add shmem get/put page helpers Rob Clark
@ 2013-07-08  8:45   ` Patrik Jakobsson
  2013-07-08 18:56     ` Rob Clark
  0 siblings, 1 reply; 13+ messages in thread
From: Patrik Jakobsson @ 2013-07-08  8:45 UTC (permalink / raw)
  To: Rob Clark; +Cc: dri-devel

On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
> Basically just extracting some code duplicated in gma500, omapdrm, udl,
> and upcoming msm driver.
>
> Signed-off-by: Rob Clark <robdclark@gmail.com>
> ---
>  drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
>  include/drm/drmP.h        |  4 +++
>  2 files changed, 95 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 443eeff..853dea6 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
>  }
>  EXPORT_SYMBOL(drm_gem_create_mmap_offset);
>
> +/**
> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
> + * from shmem
> + * @obj: obj in question
> + * @gfpmask: gfp mask of requested pages
> + */
> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
> +{
> +       struct inode *inode;
> +       struct address_space *mapping;
> +       struct page *p, **pages;
> +       int i, npages;
> +
> +       /* This is the shared memory object that backs the GEM resource */
> +       inode = file_inode(obj->filp);
> +       mapping = inode->i_mapping;
> +
> +       npages = obj->size >> PAGE_SHIFT;

Theoretical issue, but what if obj->size is not page aligned? Perhaps put a
roundup(obj->size, PAGE_SIZE) here?

> +
> +       pages = drm_malloc_ab(npages, sizeof(struct page *));
> +       if (pages == NULL)
> +               return ERR_PTR(-ENOMEM);
> +
> +       gfpmask |= mapping_gfp_mask(mapping);
> +
> +       for (i = 0; i < npages; i++) {
> +               p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
> +               if (IS_ERR(p))
> +                       goto fail;
> +               pages[i] = p;
> +
> +               /* There is a hypothetical issue w/ drivers that require
> +                * buffer memory in the low 4GB.. if the pages are un-
> +                * pinned, and swapped out, they can end up swapped back
> +                * in above 4GB.  If pages are already in memory, then
> +                * shmem_read_mapping_page_gfp will ignore the gfpmask,
> +                * even if the already in-memory page disobeys the mask.
> +                *
> +                * It is only a theoretical issue today, because none of
> +                * the devices with this limitation can be populated with
> +                * enough memory to trigger the issue.  But this BUG_ON()
> +                * is here as a reminder in case the problem with
> +                * shmem_read_mapping_page_gfp() isn't solved by the time
> +                * it does become a real issue.
> +                *
> +                * See this thread: http://lkml.org/lkml/2011/7/11/238
> +                */
> +               BUG_ON((gfpmask & __GFP_DMA32) &&
> +                               (page_to_pfn(p) >= 0x00100000UL));
> +       }
> +
> +       return pages;
> +
> +fail:
> +       while (i--)
> +               page_cache_release(pages[i]);
> +
> +       drm_free_large(pages);
> +       return ERR_CAST(p);
> +}
> +EXPORT_SYMBOL(drm_gem_get_pages);
> +
> +/**
> + * drm_gem_put_pages - helper to free backing pages for a GEM object
> + * @obj: obj in question
> + * @pages: pages to free
> + * @dirty: if true, pages will be marked as dirty
> + * @accessed: if true, the pages will be marked as accessed
> + */
> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> +               bool dirty, bool accessed)
> +{
> +       int i, npages;
> +
> +       npages = obj->size >> PAGE_SHIFT;

Same thing here.

> +
> +       for (i = 0; i < npages; i++) {
> +               if (dirty)
> +                       set_page_dirty(pages[i]);
> +
> +               if (accessed)
> +                       mark_page_accessed(pages[i]);
> +
> +               /* Undo the reference we took when populating the table */
> +               page_cache_release(pages[i]);
> +       }
> +
> +       drm_free_large(pages);
> +}
> +EXPORT_SYMBOL(drm_gem_put_pages);
> +
>  /** Returns a reference to the object named by the handle. */
>  struct drm_gem_object *
>  drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> index 3cb1672..7ec3fa4 100644
> --- a/include/drm/drmP.h
> +++ b/include/drm/drmP.h
> @@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
>  int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
>  int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
>
> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> +               bool dirty, bool accessed);
> +
>  struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
>                                              struct drm_file *filp,
>                                              u32 handle);
> --
> 1.8.1.4

Looks good otherwise, so for all 5 patches:
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>


* Re: [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-08  8:45   ` Patrik Jakobsson
@ 2013-07-08 18:56     ` Rob Clark
  2013-07-08 20:18       ` Daniel Vetter
  0 siblings, 1 reply; 13+ messages in thread
From: Rob Clark @ 2013-07-08 18:56 UTC (permalink / raw)
  To: Patrik Jakobsson; +Cc: dri-devel

On Mon, Jul 8, 2013 at 4:45 AM, Patrik Jakobsson
<patrik.r.jakobsson@gmail.com> wrote:
> On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
>> Basically just extracting some code duplicated in gma500, omapdrm, udl,
>> and upcoming msm driver.
>>
>> Signed-off-by: Rob Clark <robdclark@gmail.com>
>> ---
>>  drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
>>  include/drm/drmP.h        |  4 +++
>>  2 files changed, 95 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> index 443eeff..853dea6 100644
>> --- a/drivers/gpu/drm/drm_gem.c
>> +++ b/drivers/gpu/drm/drm_gem.c
>> @@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
>>  }
>>  EXPORT_SYMBOL(drm_gem_create_mmap_offset);
>>
>> +/**
>> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
>> + * from shmem
>> + * @obj: obj in question
>> + * @gfpmask: gfp mask of requested pages
>> + */
>> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
>> +{
>> +       struct inode *inode;
>> +       struct address_space *mapping;
>> +       struct page *p, **pages;
>> +       int i, npages;
>> +
>> +       /* This is the shared memory object that backs the GEM resource */
>> +       inode = file_inode(obj->filp);
>> +       mapping = inode->i_mapping;
>> +
>> +       npages = obj->size >> PAGE_SHIFT;
>
> Theoretical issue, but what if obj->size is not page aligned? Perhaps put a
> roundup(obj->size, PAGE_SIZE) here?

so, drm_gem_object_init() does have:

	BUG_ON((size & (PAGE_SIZE - 1)) != 0);

so I was kinda assuming that we can count on the size already being
aligned.  But I guess in case someone somehow bypasses
drm_gem_object_init() it wouldn't hurt to round up the size..

BR,
-R

>> +
>> +       pages = drm_malloc_ab(npages, sizeof(struct page *));
>> +       if (pages == NULL)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       gfpmask |= mapping_gfp_mask(mapping);
>> +
>> +       for (i = 0; i < npages; i++) {
>> +               p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
>> +               if (IS_ERR(p))
>> +                       goto fail;
>> +               pages[i] = p;
>> +
>> +               /* There is a hypothetical issue w/ drivers that require
>> +                * buffer memory in the low 4GB.. if the pages are un-
>> +                * pinned, and swapped out, they can end up swapped back
>> +                * in above 4GB.  If pages are already in memory, then
>> +                * shmem_read_mapping_page_gfp will ignore the gfpmask,
>> +                * even if the already in-memory page disobeys the mask.
>> +                *
>> +                * It is only a theoretical issue today, because none of
>> +                * the devices with this limitation can be populated with
>> +                * enough memory to trigger the issue.  But this BUG_ON()
>> +                * is here as a reminder in case the problem with
>> +                * shmem_read_mapping_page_gfp() isn't solved by the time
>> +                * it does become a real issue.
>> +                *
>> +                * See this thread: http://lkml.org/lkml/2011/7/11/238
>> +                */
>> +               BUG_ON((gfpmask & __GFP_DMA32) &&
>> +                               (page_to_pfn(p) >= 0x00100000UL));
>> +       }
>> +
>> +       return pages;
>> +
>> +fail:
>> +       while (i--)
>> +               page_cache_release(pages[i]);
>> +
>> +       drm_free_large(pages);
>> +       return ERR_CAST(p);
>> +}
>> +EXPORT_SYMBOL(drm_gem_get_pages);
>> +
>> +/**
>> + * drm_gem_put_pages - helper to free backing pages for a GEM object
>> + * @obj: obj in question
>> + * @pages: pages to free
>> + * @dirty: if true, pages will be marked as dirty
>> + * @accessed: if true, the pages will be marked as accessed
>> + */
>> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>> +               bool dirty, bool accessed)
>> +{
>> +       int i, npages;
>> +
>> +       npages = obj->size >> PAGE_SHIFT;
>
> Same thing here.
>
>> +
>> +       for (i = 0; i < npages; i++) {
>> +               if (dirty)
>> +                       set_page_dirty(pages[i]);
>> +
>> +               if (accessed)
>> +                       mark_page_accessed(pages[i]);
>> +
>> +               /* Undo the reference we took when populating the table */
>> +               page_cache_release(pages[i]);
>> +       }
>> +
>> +       drm_free_large(pages);
>> +}
>> +EXPORT_SYMBOL(drm_gem_put_pages);
>> +
>>  /** Returns a reference to the object named by the handle. */
>>  struct drm_gem_object *
>>  drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
>> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
>> index 3cb1672..7ec3fa4 100644
>> --- a/include/drm/drmP.h
>> +++ b/include/drm/drmP.h
>> @@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
>>  int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
>>  int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
>>
>> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
>> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>> +               bool dirty, bool accessed);
>> +
>>  struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
>>                                              struct drm_file *filp,
>>                                              u32 handle);
>> --
>> 1.8.1.4
>
> Looks good otherwise, so for all 5 patches:
> Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
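To make the truncation-vs-roundup point concrete, here is a minimal
userspace sketch of the two ways the page count could be computed
(PAGE_SHIFT of 12 is assumed for illustration; this is not kernel code,
and the function names are hypothetical):

```c
#include <stddef.h>

#define PAGE_SHIFT 12                 /* assumed: 4K pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* mirrors the patch's computation: an unaligned size is truncated */
static size_t gem_npages(size_t size)
{
	return size >> PAGE_SHIFT;
}

/* the roundup() variant suggested in the review: the partial tail
 * page is counted instead of being silently dropped */
static size_t gem_npages_roundup(size_t size)
{
	return (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

With an unaligned size such as 5000, the plain shift yields one page and
silently loses the tail, while the roundup variant yields two -- which is
why the discussion converges on at least warning about unaligned sizes
rather than quietly accepting them.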

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-08 18:56     ` Rob Clark
@ 2013-07-08 20:18       ` Daniel Vetter
  2013-07-08 23:07         ` Rob Clark
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Vetter @ 2013-07-08 20:18 UTC (permalink / raw)
  To: Rob Clark; +Cc: dri-devel

On Mon, Jul 08, 2013 at 02:56:31PM -0400, Rob Clark wrote:
> On Mon, Jul 8, 2013 at 4:45 AM, Patrik Jakobsson
> <patrik.r.jakobsson@gmail.com> wrote:
> > On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
> >> Basically just extracting some code duplicated in gma500, omapdrm, udl,
> >> and upcoming msm driver.
> >>
> >> Signed-off-by: Rob Clark <robdclark@gmail.com>
> >> ---
> >>  drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
> >>  include/drm/drmP.h        |  4 +++
> >>  2 files changed, 95 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> >> index 443eeff..853dea6 100644
> >> --- a/drivers/gpu/drm/drm_gem.c
> >> +++ b/drivers/gpu/drm/drm_gem.c
> >> @@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
> >>  }
> >>  EXPORT_SYMBOL(drm_gem_create_mmap_offset);
> >>
> >> +/**
> >> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
> >> + * from shmem
> >> + * @obj: obj in question
> >> + * @gfpmask: gfp mask of requested pages
> >> + */
> >> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
> >> +{
> >> +       struct inode *inode;
> >> +       struct address_space *mapping;
> >> +       struct page *p, **pages;
> >> +       int i, npages;
> >> +
> >> +       /* This is the shared memory object that backs the GEM resource */
> >> +       inode = file_inode(obj->filp);
> >> +       mapping = inode->i_mapping;
> >> +
> >> +       npages = obj->size >> PAGE_SHIFT;
> >
> > Theoretical issue, but what if obj->size is not page aligned? Perhaps put a
> > roundup(obj->size, PAGE_SIZE) here?
> 
> so, drm_gem_object_init() does have:
> 
> 	BUG_ON((size & (PAGE_SIZE - 1)) != 0);
> 
> so I was kinda assuming that we can count on the size already being
> aligned.  But I guess in case someone somehow bypasses
> drm_gem_object_init() it wouldn't hurt to round up the size..

It would look funny to me to allow it in one place and not in another.
Maybe just throw in a new WARN_ON() here (WARN since it's not fatal)?
-Daniel

> 
> BR,
> -R
> 
> >> +
> >> +       pages = drm_malloc_ab(npages, sizeof(struct page *));
> >> +       if (pages == NULL)
> >> +               return ERR_PTR(-ENOMEM);
> >> +
> >> +       gfpmask |= mapping_gfp_mask(mapping);
> >> +
> >> +       for (i = 0; i < npages; i++) {
> >> +               p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
> >> +               if (IS_ERR(p))
> >> +                       goto fail;
> >> +               pages[i] = p;
> >> +
> >> +               /* There is a hypothetical issue w/ drivers that require
> >> +                * buffer memory in the low 4GB.. if the pages are un-
> >> +                * pinned, and swapped out, they can end up swapped back
> >> +                * in above 4GB.  If pages are already in memory, then
> >> +                * shmem_read_mapping_page_gfp will ignore the gfpmask,
> >> +                * even if the already in-memory page disobeys the mask.
> >> +                *
> >> +                * It is only a theoretical issue today, because none of
> >> +                * the devices with this limitation can be populated with
> >> +                * enough memory to trigger the issue.  But this BUG_ON()
> >> +                * is here as a reminder in case the problem with
> >> +                * shmem_read_mapping_page_gfp() isn't solved by the time
> >> +                * it does become a real issue.
> >> +                *
> >> +                * See this thread: http://lkml.org/lkml/2011/7/11/238
> >> +                */
> >> +               BUG_ON((gfpmask & __GFP_DMA32) &&
> >> +                               (page_to_pfn(p) >= 0x00100000UL));
> >> +       }
> >> +
> >> +       return pages;
> >> +
> >> +fail:
> >> +       while (i--)
> >> +               page_cache_release(pages[i]);
> >> +
> >> +       drm_free_large(pages);
> >> +       return ERR_CAST(p);
> >> +}
> >> +EXPORT_SYMBOL(drm_gem_get_pages);
> >> +
> >> +/**
> >> + * drm_gem_put_pages - helper to free backing pages for a GEM object
> >> + * @obj: obj in question
> >> + * @pages: pages to free
> >> + * @dirty: if true, pages will be marked as dirty
> >> + * @accessed: if true, the pages will be marked as accessed
> >> + */
> >> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> >> +               bool dirty, bool accessed)
> >> +{
> >> +       int i, npages;
> >> +
> >> +       npages = obj->size >> PAGE_SHIFT;
> >
> > Same thing here.
> >
> >> +
> >> +       for (i = 0; i < npages; i++) {
> >> +               if (dirty)
> >> +                       set_page_dirty(pages[i]);
> >> +
> >> +               if (accessed)
> >> +                       mark_page_accessed(pages[i]);
> >> +
> >> +               /* Undo the reference we took when populating the table */
> >> +               page_cache_release(pages[i]);
> >> +       }
> >> +
> >> +       drm_free_large(pages);
> >> +}
> >> +EXPORT_SYMBOL(drm_gem_put_pages);
> >> +
> >>  /** Returns a reference to the object named by the handle. */
> >>  struct drm_gem_object *
> >>  drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
> >> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> >> index 3cb1672..7ec3fa4 100644
> >> --- a/include/drm/drmP.h
> >> +++ b/include/drm/drmP.h
> >> @@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
> >>  int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
> >>  int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
> >>
> >> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
> >> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
> >> +               bool dirty, bool accessed);
> >> +
> >>  struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
> >>                                              struct drm_file *filp,
> >>                                              u32 handle);
> >> --
> >> 1.8.1.4
> >
> > Looks good otherwise, so for all 5 patches:
> > Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-08 20:18       ` Daniel Vetter
@ 2013-07-08 23:07         ` Rob Clark
  2013-07-09  9:03           ` Patrik Jakobsson
  0 siblings, 1 reply; 13+ messages in thread
From: Rob Clark @ 2013-07-08 23:07 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: dri-devel

On Mon, Jul 8, 2013 at 4:18 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Mon, Jul 08, 2013 at 02:56:31PM -0400, Rob Clark wrote:
>> On Mon, Jul 8, 2013 at 4:45 AM, Patrik Jakobsson
>> <patrik.r.jakobsson@gmail.com> wrote:
>> > On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
>> >> Basically just extracting some code duplicated in gma500, omapdrm, udl,
>> >> and upcoming msm driver.
>> >>
>> >> Signed-off-by: Rob Clark <robdclark@gmail.com>
>> >> ---
>> >>  drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
>> >>  include/drm/drmP.h        |  4 +++
>> >>  2 files changed, 95 insertions(+)
>> >>
>> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>> >> index 443eeff..853dea6 100644
>> >> --- a/drivers/gpu/drm/drm_gem.c
>> >> +++ b/drivers/gpu/drm/drm_gem.c
>> >> @@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
>> >>  }
>> >>  EXPORT_SYMBOL(drm_gem_create_mmap_offset);
>> >>
>> >> +/**
>> >> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
>> >> + * from shmem
>> >> + * @obj: obj in question
>> >> + * @gfpmask: gfp mask of requested pages
>> >> + */
>> >> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
>> >> +{
>> >> +       struct inode *inode;
>> >> +       struct address_space *mapping;
>> >> +       struct page *p, **pages;
>> >> +       int i, npages;
>> >> +
>> >> +       /* This is the shared memory object that backs the GEM resource */
>> >> +       inode = file_inode(obj->filp);
>> >> +       mapping = inode->i_mapping;
>> >> +
>> >> +       npages = obj->size >> PAGE_SHIFT;
>> >
>> > Theoretical issue, but what if obj->size is not page aligned? Perhaps put a
>> > roundup(obj->size, PAGE_SIZE) here?
>>
>> so, drm_gem_object_init() does have:
>>
>>       BUG_ON((size & (PAGE_SIZE - 1)) != 0);
>>
>> so I was kinda assuming that we can count on the size already being
>> aligned.  But I guess in case someone somehow bypasses
>> drm_gem_object_init() it wouldn't hurt to round up the size..
>
> Would look funny to me to allow it in one place and not in another one.
> Maybe just throw a new WARN_ON in here (WARN since it's not fatal)?
> -Daniel

sounds good, I'll toss in a WARN_ON()

BR,
-R


>>
>> BR,
>> -R
>>
>> >> +
>> >> +       pages = drm_malloc_ab(npages, sizeof(struct page *));
>> >> +       if (pages == NULL)
>> >> +               return ERR_PTR(-ENOMEM);
>> >> +
>> >> +       gfpmask |= mapping_gfp_mask(mapping);
>> >> +
>> >> +       for (i = 0; i < npages; i++) {
>> >> +               p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
>> >> +               if (IS_ERR(p))
>> >> +                       goto fail;
>> >> +               pages[i] = p;
>> >> +
>> >> +               /* There is a hypothetical issue w/ drivers that require
>> >> +                * buffer memory in the low 4GB.. if the pages are un-
>> >> +                * pinned, and swapped out, they can end up swapped back
>> >> +                * in above 4GB.  If pages are already in memory, then
>> >> +                * shmem_read_mapping_page_gfp will ignore the gfpmask,
>> >> +                * even if the already in-memory page disobeys the mask.
>> >> +                *
>> >> +                * It is only a theoretical issue today, because none of
>> >> +                * the devices with this limitation can be populated with
>> >> +                * enough memory to trigger the issue.  But this BUG_ON()
>> >> +                * is here as a reminder in case the problem with
>> >> +                * shmem_read_mapping_page_gfp() isn't solved by the time
>> >> +                * it does become a real issue.
>> >> +                *
>> >> +                * See this thread: http://lkml.org/lkml/2011/7/11/238
>> >> +                */
>> >> +               BUG_ON((gfpmask & __GFP_DMA32) &&
>> >> +                               (page_to_pfn(p) >= 0x00100000UL));
>> >> +       }
>> >> +
>> >> +       return pages;
>> >> +
>> >> +fail:
>> >> +       while (i--)
>> >> +               page_cache_release(pages[i]);
>> >> +
>> >> +       drm_free_large(pages);
>> >> +       return ERR_CAST(p);
>> >> +}
>> >> +EXPORT_SYMBOL(drm_gem_get_pages);
>> >> +
>> >> +/**
>> >> + * drm_gem_put_pages - helper to free backing pages for a GEM object
>> >> + * @obj: obj in question
>> >> + * @pages: pages to free
>> >> + * @dirty: if true, pages will be marked as dirty
>> >> + * @accessed: if true, the pages will be marked as accessed
>> >> + */
>> >> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>> >> +               bool dirty, bool accessed)
>> >> +{
>> >> +       int i, npages;
>> >> +
>> >> +       npages = obj->size >> PAGE_SHIFT;
>> >
>> > Same thing here.
>> >
>> >> +
>> >> +       for (i = 0; i < npages; i++) {
>> >> +               if (dirty)
>> >> +                       set_page_dirty(pages[i]);
>> >> +
>> >> +               if (accessed)
>> >> +                       mark_page_accessed(pages[i]);
>> >> +
>> >> +               /* Undo the reference we took when populating the table */
>> >> +               page_cache_release(pages[i]);
>> >> +       }
>> >> +
>> >> +       drm_free_large(pages);
>> >> +}
>> >> +EXPORT_SYMBOL(drm_gem_put_pages);
>> >> +
>> >>  /** Returns a reference to the object named by the handle. */
>> >>  struct drm_gem_object *
>> >>  drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
>> >> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
>> >> index 3cb1672..7ec3fa4 100644
>> >> --- a/include/drm/drmP.h
>> >> +++ b/include/drm/drmP.h
>> >> @@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
>> >>  int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
>> >>  int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
>> >>
>> >> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
>> >> +void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
>> >> +               bool dirty, bool accessed);
>> >> +
>> >>  struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
>> >>                                              struct drm_file *filp,
>> >>                                              u32 handle);
>> >> --
>> >> 1.8.1.4
>> >
>> > Looks good otherwise, so for all 5 patches:
>> > Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
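The WARN-vs-BUG distinction being agreed on here -- report a suspicious
but survivable condition versus kill the task on an unrecoverable one --
can be mimicked in userspace.  This is only a rough stand-in for the
kernel macros, with simplified behavior:

```c
#include <stdio.h>
#include <stdlib.h>

static int warn_count;   /* stands in for the kernel's warning splat */

/* WARN_ON: report the condition but keep going; an unaligned size
 * is suspicious, not fatal */
#define WARN_ON(cond) do { \
		if (cond) { \
			warn_count++; \
			fprintf(stderr, "WARNING: %s\n", #cond); \
		} \
	} while (0)

/* BUG_ON: the condition is unrecoverable, so kill the process */
#define BUG_ON(cond) do { \
		if (cond) \
			abort(); \
	} while (0)

#define PAGE_SIZE 4096UL

/* warns on an unaligned size but still returns a (truncated) count,
 * matching what the helper does after the suggested change */
static unsigned long checked_npages(unsigned long size)
{
	WARN_ON((size & (PAGE_SIZE - 1)) != 0);
	return size / PAGE_SIZE;
}
```

An unaligned caller gets a warning and a truncated answer instead of a
dead machine, which is the point of preferring WARN_ON() in a helper
that drivers outside drm_gem_object_init()'s control might call.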


* Re: [PATCH 2/5] drm/gem: add shmem get/put page helpers
  2013-07-08 23:07         ` Rob Clark
@ 2013-07-09  9:03           ` Patrik Jakobsson
  0 siblings, 0 replies; 13+ messages in thread
From: Patrik Jakobsson @ 2013-07-09  9:03 UTC (permalink / raw)
  To: Rob Clark; +Cc: dri-devel

On Tue, Jul 9, 2013 at 1:07 AM, Rob Clark <robdclark@gmail.com> wrote:
> On Mon, Jul 8, 2013 at 4:18 PM, Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Mon, Jul 08, 2013 at 02:56:31PM -0400, Rob Clark wrote:
>>> On Mon, Jul 8, 2013 at 4:45 AM, Patrik Jakobsson
>>> <patrik.r.jakobsson@gmail.com> wrote:
>>> > On Sun, Jul 7, 2013 at 8:58 PM, Rob Clark <robdclark@gmail.com> wrote:
>>> >> Basically just extracting some code duplicated in gma500, omapdrm, udl,
>>> >> and upcoming msm driver.
>>> >>
>>> >> Signed-off-by: Rob Clark <robdclark@gmail.com>
>>> >> ---
>>> >>  drivers/gpu/drm/drm_gem.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
>>> >>  include/drm/drmP.h        |  4 +++
>>> >>  2 files changed, 95 insertions(+)
>>> >>
>>> >> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
>>> >> index 443eeff..853dea6 100644
>>> >> --- a/drivers/gpu/drm/drm_gem.c
>>> >> +++ b/drivers/gpu/drm/drm_gem.c
>>> >> @@ -406,6 +406,97 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
>>> >>  }
>>> >>  EXPORT_SYMBOL(drm_gem_create_mmap_offset);
>>> >>
>>> >> +/**
>>> >> + * drm_gem_get_pages - helper to allocate backing pages for a GEM object
>>> >> + * from shmem
>>> >> + * @obj: obj in question
>>> >> + * @gfpmask: gfp mask of requested pages
>>> >> + */
>>> >> +struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
>>> >> +{
>>> >> +       struct inode *inode;
>>> >> +       struct address_space *mapping;
>>> >> +       struct page *p, **pages;
>>> >> +       int i, npages;
>>> >> +
>>> >> +       /* This is the shared memory object that backs the GEM resource */
>>> >> +       inode = file_inode(obj->filp);
>>> >> +       mapping = inode->i_mapping;
>>> >> +
>>> >> +       npages = obj->size >> PAGE_SHIFT;
>>> >
>>> > Theoretical issue, but what if obj->size is not page aligned? Perhaps put a
>>> > roundup(obj->size, PAGE_SIZE) here?
>>>
>>> so, drm_gem_object_init() does have:
>>>
>>>       BUG_ON((size & (PAGE_SIZE - 1)) != 0);
>>>
>>> so I was kinda assuming that we can count on the size already being
>>> aligned.  But I guess in case someone somehow bypasses
>>> drm_gem_object_init() it wouldn't hurt to round up the size..
>>
>> Would look funny to me to allow it in one place and not in another one.
>> Maybe just throw a new WARN_ON in here (WARN since it's not fatal)?
>> -Daniel
>
> sounds good, I'll toss in a WARN_ON()

Yes, sounds good.

Patrik


* [PATCH 2/5] drm/gem: add shmem get/put page helpers
@ 2013-07-13 22:39 Rob Clark
  0 siblings, 0 replies; 13+ messages in thread
From: Rob Clark @ 2013-07-13 22:39 UTC (permalink / raw)
  To: dri-devel

Basically just extracting some code duplicated in gma500, omapdrm, udl,
and upcoming msm driver.

Signed-off-by: Rob Clark <robdclark@gmail.com>
CC: Daniel Vetter <daniel@ffwll.ch>
CC: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
---
v1: original
v2: add WARN_ON()'s for non-page-aligned sizes as suggested by Patrik
    Jakobsson and Daniel Vetter

 drivers/gpu/drm/drm_gem.c | 103 ++++++++++++++++++++++++++++++++++++++++++++++
 include/drm/drmP.h        |   4 ++
 2 files changed, 107 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 7995466..bf299b3 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -408,6 +408,109 @@ int drm_gem_create_mmap_offset(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_create_mmap_offset);
 
+/**
+ * drm_gem_get_pages - helper to allocate backing pages for a GEM object
+ * from shmem
+ * @obj: obj in question
+ * @gfpmask: gfp mask of requested pages
+ */
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask)
+{
+	struct inode *inode;
+	struct address_space *mapping;
+	struct page *p, **pages;
+	int i, npages;
+
+	/* This is the shared memory object that backs the GEM resource */
+	inode = file_inode(obj->filp);
+	mapping = inode->i_mapping;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * the driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	pages = drm_malloc_ab(npages, sizeof(struct page *));
+	if (pages == NULL)
+		return ERR_PTR(-ENOMEM);
+
+	gfpmask |= mapping_gfp_mask(mapping);
+
+	for (i = 0; i < npages; i++) {
+		p = shmem_read_mapping_page_gfp(mapping, i, gfpmask);
+		if (IS_ERR(p))
+			goto fail;
+		pages[i] = p;
+
+		/* There is a hypothetical issue w/ drivers that require
+		 * buffer memory in the low 4GB.. if the pages are un-
+		 * pinned, and swapped out, they can end up swapped back
+		 * in above 4GB.  If pages are already in memory, then
+		 * shmem_read_mapping_page_gfp will ignore the gfpmask,
+		 * even if the already in-memory page disobeys the mask.
+		 *
+		 * It is only a theoretical issue today, because none of
+		 * the devices with this limitation can be populated with
+		 * enough memory to trigger the issue.  But this BUG_ON()
+		 * is here as a reminder in case the problem with
+		 * shmem_read_mapping_page_gfp() isn't solved by the time
+		 * it does become a real issue.
+		 *
+		 * See this thread: http://lkml.org/lkml/2011/7/11/238
+		 */
+		BUG_ON((gfpmask & __GFP_DMA32) &&
+				(page_to_pfn(p) >= 0x00100000UL));
+	}
+
+	return pages;
+
+fail:
+	while (i--)
+		page_cache_release(pages[i]);
+
+	drm_free_large(pages);
+	return ERR_CAST(p);
+}
+EXPORT_SYMBOL(drm_gem_get_pages);
+
+/**
+ * drm_gem_put_pages - helper to free backing pages for a GEM object
+ * @obj: obj in question
+ * @pages: pages to free
+ * @dirty: if true, pages will be marked as dirty
+ * @accessed: if true, the pages will be marked as accessed
+ */
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed)
+{
+	int i, npages;
+
+	/* We already BUG_ON() for non-page-aligned sizes in
+	 * drm_gem_object_init(), so we should never hit this unless
+	 * the driver author is doing something really wrong:
+	 */
+	WARN_ON((obj->size & (PAGE_SIZE - 1)) != 0);
+
+	npages = obj->size >> PAGE_SHIFT;
+
+	for (i = 0; i < npages; i++) {
+		if (dirty)
+			set_page_dirty(pages[i]);
+
+		if (accessed)
+			mark_page_accessed(pages[i]);
+
+		/* Undo the reference we took when populating the table */
+		page_cache_release(pages[i]);
+	}
+
+	drm_free_large(pages);
+}
+EXPORT_SYMBOL(drm_gem_put_pages);
+
 /** Returns a reference to the object named by the handle. */
 struct drm_gem_object *
 drm_gem_object_lookup(struct drm_device *dev, struct drm_file *filp,
diff --git a/include/drm/drmP.h b/include/drm/drmP.h
index fefbbda..853557a 100644
--- a/include/drm/drmP.h
+++ b/include/drm/drmP.h
@@ -1730,6 +1730,10 @@ void drm_gem_free_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset(struct drm_gem_object *obj);
 int drm_gem_create_mmap_offset_size(struct drm_gem_object *obj, size_t size);
 
+struct page **drm_gem_get_pages(struct drm_gem_object *obj, gfp_t gfpmask);
+void drm_gem_put_pages(struct drm_gem_object *obj, struct page **pages,
+		bool dirty, bool accessed);
+
 struct drm_gem_object *drm_gem_object_lookup(struct drm_device *dev,
 					     struct drm_file *filp,
 					     u32 handle);
-- 
1.8.3.1
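One detail worth calling out in the final version is the `fail:` unwind:
when shmem_read_mapping_page_gfp() fails at index i, only the pages
already acquired (indices 0..i-1) are released via `while (i--)`.  A
self-contained userspace simulation of that idiom, with the page
reference count modelled by a plain counter (all names hypothetical):

```c
static int live;  /* outstanding references, stands in for page refcounts */

/* simulate shmem_read_mapping_page_gfp(): fails at index fail_at,
 * otherwise takes a reference */
static int acquire_page(int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	live++;
	return 0;
}

static void release_page(void)
{
	live--;          /* stands in for page_cache_release() */
}

/* mirrors drm_gem_get_pages()'s loop: on failure, unwind 0..i-1 */
static int get_pages(int npages, int fail_at)
{
	int i;

	for (i = 0; i < npages; i++) {
		if (acquire_page(i, fail_at))
			goto fail;
	}
	return 0;

fail:
	while (i--)
		release_page();
	return -1;
}
```

After a mid-loop failure the counter is back to zero, mirroring how the
kernel helper drops every reference it took before returning ERR_CAST(p);
on success, one reference per page remains held until drm_gem_put_pages().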


end of thread, other threads:[~2013-07-13 22:39 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
2013-07-07 18:58 [PATCH 0/5] kill omap_gem_helpers Rob Clark
2013-07-07 18:58 ` [PATCH 1/5] drm/gem: add drm_gem_create_mmap_offset_size() Rob Clark
2013-07-07 18:58 ` [PATCH 2/5] drm/gem: add shmem get/put page helpers Rob Clark
2013-07-08  8:45   ` Patrik Jakobsson
2013-07-08 18:56     ` Rob Clark
2013-07-08 20:18       ` Daniel Vetter
2013-07-08 23:07         ` Rob Clark
2013-07-09  9:03           ` Patrik Jakobsson
2013-07-07 18:58 ` [PATCH 3/5] drm/gma500: use gem " Rob Clark
2013-07-07 18:58 ` [PATCH 4/5] drm/udl: " Rob Clark
2013-07-07 18:58 ` [PATCH 5/5] drm/omap: kill omap_gem_helpers.c Rob Clark
2013-07-07 20:28 ` [PATCH 0/5] kill omap_gem_helpers David Herrmann
2013-07-13 22:39 [PATCH 2/5] drm/gem: add shmem get/put page helpers Rob Clark
