* [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
@ 2019-02-21  7:40 John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 1/4] dma-buf: Add dma-buf pools framework John Stultz
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: John Stultz @ 2019-02-21  7:40 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F. Davis, dri-devel

Here is a very early peek at my dmabuf pools patchset, which
tries to destage a fair chunk of ION functionality.

This builds and boots, but I haven't gotten to testing the actual
pool devices yet (I still need to write some kselftests)! I just
wanted some early feedback on the overall direction.

The patchset implements per-pool devices (extending my ION
per-heap devices patchset from last week), which can be opened
directly; an ioctl is then used to allocate a dmabuf from the
pool.

The interface is similar to, but simpler than, ION's, providing
only an ALLOC ioctl.
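
For clarity, here's a rough sketch of what an allocation looks
like from userspace. This is a minimal example, and the
/dev/dmabuf_pools/system node name is just an assumption for
illustration (node names depend on the name each pool registers
with):

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/dmabuf-pools.h>

  int main(void)
  {
          struct dmabuf_pool_allocation_data data = {
                  .len   = 4096,
                  .flags = DMABUF_POOL_FLAG_CACHED,
          };
          int pool_fd = open("/dev/dmabuf_pools/system", O_RDWR);

          if (pool_fd < 0 || ioctl(pool_fd, DMABUF_POOL_IOC_ALLOC, &data) < 0)
                  return 1;

          /* data.fd now holds a dma-buf fd for the new buffer */
          return 0;
  }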

Also, I've only destaged the system/system-contig and cma pools,
since the ION carveout and chunk heaps depended on out-of-tree
board files to initialize those heaps. I'll leave that to folks
who are actually using those heaps.

Let me know what you think!

thanks
-john

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org

John Stultz (4):
  dma-buf: Add dma-buf pools framework
  dma-buf: pools: Add page-pool for dma-buf pools
  dma-buf: pools: Add system/system-contig pools to dmabuf pools
  dma-buf: pools: Add CMA pool to dmabuf pools

 MAINTAINERS                          |  13 +
 drivers/dma-buf/Kconfig              |   2 +
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/pools/Kconfig        |  25 ++
 drivers/dma-buf/pools/Makefile       |   4 +
 drivers/dma-buf/pools/cma_pool.c     | 143 ++++++++
 drivers/dma-buf/pools/dmabuf-pools.c | 670 +++++++++++++++++++++++++++++++++++
 drivers/dma-buf/pools/dmabuf-pools.h | 295 +++++++++++++++
 drivers/dma-buf/pools/page_pool.c    | 157 ++++++++
 drivers/dma-buf/pools/pool-helpers.c | 317 +++++++++++++++++
 drivers/dma-buf/pools/pool-ioctl.c   |  94 +++++
 drivers/dma-buf/pools/system_pool.c  | 374 +++++++++++++++++++
 include/uapi/linux/dmabuf-pools.h    |  59 +++
 13 files changed, 2154 insertions(+)
 create mode 100644 drivers/dma-buf/pools/Kconfig
 create mode 100644 drivers/dma-buf/pools/Makefile
 create mode 100644 drivers/dma-buf/pools/cma_pool.c
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.c
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.h
 create mode 100644 drivers/dma-buf/pools/page_pool.c
 create mode 100644 drivers/dma-buf/pools/pool-helpers.c
 create mode 100644 drivers/dma-buf/pools/pool-ioctl.c
 create mode 100644 drivers/dma-buf/pools/system_pool.c
 create mode 100644 include/uapi/linux/dmabuf-pools.h

-- 
2.7.4


* [EARLY RFC][PATCH 1/4] dma-buf: Add dma-buf pools framework
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
@ 2019-02-21  7:40 ` John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 2/4] dma-buf: pools: Add page-pool for dma-buf pools John Stultz
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-21  7:40 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F. Davis, dri-devel

This patch introduces the dma-buf pools framework.

This framework allows for different pool implementations to be
created, which act as dma-buf exporters, allowing userland to
allocate specific types of memory for use in dma-buf sharing.

This resembles the Android ION framework in that it takes that
code and renames most of the variables. (Following Rafael
Wysocki's earlier theory that the "easiest way to sell a once
rejected feature is to advertise it under a different name" :)

However, the API has been greatly simplified compared to ION.
This patchset extends some of my (and Benjamin's) earlier work
with the ION per-heap device nodes.

Each pool (previously a "heap" in ION) has its own device node,
from which one can allocate using the DMABUF_POOL_IOC_ALLOC
ioctl, which is very similar to the ION_IOC_ALLOC call.

There is no equivalent ION_IOC_HEAP_QUERY interface, as the
pools all have their own device nodes.

Additionally, any unused code from ION was removed.
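
As a rough illustration of the in-kernel side (a minimal sketch,
not taken from this patch; the my_pool_* names are hypothetical),
a pool backend fills in a dmabuf_pool_ops and registers itself
with dmabuf_pool_add():

  static int my_pool_allocate(struct dmabuf_pool *pool,
                              struct dmabuf_pool_buffer *buffer,
                              unsigned long len, unsigned long flags);
  static void my_pool_free(struct dmabuf_pool_buffer *buffer);

  static struct dmabuf_pool_ops my_pool_ops = {
          .allocate     = my_pool_allocate, /* must set buffer->sg_table */
          .free         = my_pool_free,
          .map_kernel   = dmabuf_pool_map_kernel,
          .unmap_kernel = dmabuf_pool_unmap_kernel,
          .map_user     = dmabuf_pool_map_user,
  };

  static struct dmabuf_pool my_pool = {
          .ops  = &my_pool_ops,
          .name = "my_pool", /* becomes the device node name */
  };

  static int __init my_pool_init(void)
  {
          dmabuf_pool_add(&my_pool);
          return 0;
  }
  device_initcall(my_pool_init);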

NOTE: Reworked the per-pool devices to create a proper class so
Android can have a nice /dev/dmabuf_pools/ directory. It's
working, but I'm almost sure I did it wrong, as it's much more
complex than just using a miscdevice. Extra review would be
helpful.

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 MAINTAINERS                          |  13 +
 drivers/dma-buf/Kconfig              |   2 +
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/pools/Kconfig        |  10 +
 drivers/dma-buf/pools/Makefile       |   2 +
 drivers/dma-buf/pools/dmabuf-pools.c | 670 +++++++++++++++++++++++++++++++++++
 drivers/dma-buf/pools/dmabuf-pools.h | 244 +++++++++++++
 drivers/dma-buf/pools/pool-helpers.c | 317 +++++++++++++++++
 drivers/dma-buf/pools/pool-ioctl.c   |  94 +++++
 include/uapi/linux/dmabuf-pools.h    |  59 +++
 10 files changed, 1412 insertions(+)
 create mode 100644 drivers/dma-buf/pools/Kconfig
 create mode 100644 drivers/dma-buf/pools/Makefile
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.c
 create mode 100644 drivers/dma-buf/pools/dmabuf-pools.h
 create mode 100644 drivers/dma-buf/pools/pool-helpers.c
 create mode 100644 drivers/dma-buf/pools/pool-ioctl.c
 create mode 100644 include/uapi/linux/dmabuf-pools.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e798ce..6bc3ab0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4614,6 +4614,19 @@ F:	include/linux/*fence.h
 F:	Documentation/driver-api/dma-buf.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
+DMA-BUF POOLS FRAMEWORK
+M:	Laura Abbott <labbott@redhat.com>
+R:	Liam Mark <lmark@codeaurora.org>
+R:	Brian Starkey <Brian.Starkey@arm.com>
+R:	"Andrew F. Davis" <afd@ti.com>
+R:	John Stultz <john.stultz@linaro.org>
+S:	Maintained
+L:	linux-media@vger.kernel.org
+L:	dri-devel@lists.freedesktop.org
+L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+F:	drivers/dma-buf/pools/*
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
 M:	Vinod Koul <vkoul@kernel.org>
 L:	dmaengine@vger.kernel.org
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 2e5a0fa..c746510 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,6 @@ config UDMABUF
 	  A driver to let userspace turn memfd regions into dma-bufs.
 	  Qemu can use this to create host dmabufs for guest framebuffers.
 
+source "drivers/dma-buf/pools/Kconfig"
+
 endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 0913a6c..9c295df 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-y				+= pools/
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o
diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
new file mode 100644
index 0000000..caa7eb8
--- /dev/null
+++ b/drivers/dma-buf/pools/Kconfig
@@ -0,0 +1,10 @@
+menuconfig DMABUF_POOLS
+	bool "DMA-BUF Userland Memory Pools"
+	depends on HAS_DMA && MMU
+	select GENERIC_ALLOCATOR
+	select DMA_SHARED_BUFFER
+	help
+	  Choose this option to enable the DMA-BUF userland memory pools,
+	  which allow userspace to allocate dma-bufs that can be shared
+	  between drivers.
+	  If you're not using Android, it's probably safe to say N here.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
new file mode 100644
index 0000000..6cb1284
--- /dev/null
+++ b/drivers/dma-buf/pools/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_DMABUF_POOLS)		+= dmabuf-pools.o pool-ioctl.o pool-helpers.o
diff --git a/drivers/dma-buf/pools/dmabuf-pools.c b/drivers/dma-buf/pools/dmabuf-pools.c
new file mode 100644
index 0000000..706b0eb
--- /dev/null
+++ b/drivers/dma-buf/pools/dmabuf-pools.c
@@ -0,0 +1,670 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/dmabuf-pools.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/anon_inodes.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/file.h>
+#include <linux/freezer.h>
+#include <linux/fs.h>
+#include <linux/idr.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/memblock.h>
+#include <linux/miscdevice.h>
+#include <linux/mm.h>
+#include <linux/mm_types.h>
+#include <linux/rbtree.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/vmalloc.h>
+
+#include "dmabuf-pools.h"
+
+#define DEVNAME "dmabuf_pools"
+
+#define NUM_POOL_MINORS 128
+static DEFINE_IDR(dmabuf_pool_idr);
+static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
+
+struct dmabuf_pool_device {
+	struct rw_semaphore lock;
+	struct plist_head pools;
+	struct dentry *debug_root;
+	dev_t device_devt;
+	struct class *pool_class;
+};
+
+static struct dmabuf_pool_device *internal_dev;
+static int pool_id;
+
+/* this function should only be called while dev->lock is held */
+static struct dmabuf_pool_buffer *dmabuf_pool_buffer_create(
+						struct dmabuf_pool *pool,
+						struct dmabuf_pool_device *dev,
+						unsigned long len,
+						unsigned long flags)
+{
+	struct dmabuf_pool_buffer *buffer;
+	int ret;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ERR_PTR(-ENOMEM);
+
+	buffer->pool = pool;
+	buffer->flags = flags;
+	buffer->size = len;
+
+	ret = pool->ops->allocate(pool, buffer, len, flags);
+
+	if (ret) {
+		if (!(pool->flags & DMABUF_POOL_FLAG_DEFER_FREE))
+			goto err2;
+
+		dmabuf_pool_freelist_drain(pool, 0);
+		ret = pool->ops->allocate(pool, buffer, len, flags);
+		if (ret)
+			goto err2;
+	}
+
+	if (!buffer->sg_table) {
+		WARN_ONCE(1, "This pool needs to set the sgtable");
+		ret = -EINVAL;
+		goto err1;
+	}
+
+	spin_lock(&pool->stat_lock);
+	pool->num_of_buffers++;
+	pool->num_of_alloc_bytes += len;
+	if (pool->num_of_alloc_bytes > pool->alloc_bytes_wm)
+		pool->alloc_bytes_wm = pool->num_of_alloc_bytes;
+	spin_unlock(&pool->stat_lock);
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	return buffer;
+
+err1:
+	pool->ops->free(buffer);
+err2:
+	kfree(buffer);
+	return ERR_PTR(ret);
+}
+
+void dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer)
+{
+	if (buffer->kmap_cnt > 0) {
+		pr_warn_once("%s: buffer still mapped in the kernel\n",
+			     __func__);
+		buffer->pool->ops->unmap_kernel(buffer->pool, buffer);
+	}
+	buffer->pool->ops->free(buffer);
+	spin_lock(&buffer->pool->stat_lock);
+	buffer->pool->num_of_buffers--;
+	buffer->pool->num_of_alloc_bytes -= buffer->size;
+	spin_unlock(&buffer->pool->stat_lock);
+
+	kfree(buffer);
+}
+
+static void _dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer)
+{
+	struct dmabuf_pool *pool = buffer->pool;
+
+	if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE)
+		dmabuf_pool_freelist_add(pool, buffer);
+	else
+		dmabuf_pool_buffer_destroy(buffer);
+}
+
+static void *dmabuf_pool_buffer_kmap_get(struct dmabuf_pool_buffer *buffer)
+{
+	void *vaddr;
+
+	if (buffer->kmap_cnt) {
+		buffer->kmap_cnt++;
+		return buffer->vaddr;
+	}
+	vaddr = buffer->pool->ops->map_kernel(buffer->pool, buffer);
+	if (WARN_ONCE(!vaddr,
+		      "pool->ops->map_kernel should return ERR_PTR on error"))
+		return ERR_PTR(-EINVAL);
+	if (IS_ERR(vaddr))
+		return vaddr;
+	buffer->vaddr = vaddr;
+	buffer->kmap_cnt++;
+	return vaddr;
+}
+
+static void dmabuf_pool_buffer_kmap_put(struct dmabuf_pool_buffer *buffer)
+{
+	buffer->kmap_cnt--;
+	if (!buffer->kmap_cnt) {
+		buffer->pool->ops->unmap_kernel(buffer->pool, buffer);
+		buffer->vaddr = NULL;
+	}
+}
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		memcpy(new_sg, sg, sizeof(*sg));
+		new_sg->dma_address = 0;
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static void free_duped_table(struct sg_table *table)
+{
+	sg_free_table(table);
+	kfree(table);
+}
+
+struct dmabuf_pools_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	enum dma_data_direction dir;
+};
+
+static int dmabuf_pool_attach(struct dma_buf *dmabuf,
+			      struct dma_buf_attachment *attachment)
+{
+	struct dmabuf_pools_attachment *a;
+	struct sg_table *table;
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	a->dir = DMA_NONE;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void dmabuf_pool_detach(struct dma_buf *dmabuf,
+			       struct dma_buf_attachment *attachment)
+{
+	struct dmabuf_pools_attachment *a = attachment->priv;
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+	struct sg_table *table;
+
+	if (!a)
+		return;
+
+	table = a->table;
+	if (table) {
+		if (a->dir != DMA_NONE)
+			dma_unmap_sg(attachment->dev, table->sgl, table->nents,
+				     a->dir);
+		/* free_duped_table() does the sg_free_table() and kfree() */
+		free_duped_table(table);
+	}
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	kfree(a);
+	attachment->priv = NULL;
+}
+
+static struct sg_table *dmabuf_pool_map_dma_buf(
+					struct dma_buf_attachment *attachment,
+					enum dma_data_direction direction)
+{
+	struct dmabuf_pools_attachment *a = attachment->priv;
+	struct sg_table *table;
+
+	if (WARN_ON(direction == DMA_NONE || !a))
+		return ERR_PTR(-EINVAL);
+
+	if (a->dir == direction)
+		return a->table;
+
+	if (WARN_ON(a->dir != DMA_NONE))
+		return ERR_PTR(-EBUSY);
+
+	table = a->table;
+	if (!IS_ERR(table)) {
+		if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+				direction)) {
+			table = ERR_PTR(-ENOMEM);
+		} else {
+			a->dir = direction;
+		}
+	}
+	return table;
+}
+
+static void dmabuf_pool_unmap_dma_buf(struct dma_buf_attachment *attachment,
+			      struct sg_table *table,
+			      enum dma_data_direction direction)
+{
+}
+
+static int dmabuf_pool_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+	int ret = 0;
+
+	if (!buffer->pool->ops->map_user) {
+		pr_err("%s: this pool does not define a method for mapping to userspace\n",
+		       __func__);
+		return -EINVAL;
+	}
+
+	if (!(buffer->flags & DMABUF_POOL_FLAG_CACHED))
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+	mutex_lock(&buffer->lock);
+	/* now map it to userspace */
+	ret = buffer->pool->ops->map_user(buffer->pool, buffer, vma);
+	mutex_unlock(&buffer->lock);
+
+	if (ret)
+		pr_err("%s: failure mapping buffer to userspace\n",
+		       __func__);
+
+	return ret;
+}
+
+static void dmabuf_pool_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+
+	_dmabuf_pool_buffer_destroy(buffer);
+}
+
+static void *dmabuf_pool_dma_buf_kmap(struct dma_buf *dmabuf,
+					unsigned long offset)
+{
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+
+	return buffer->vaddr + offset * PAGE_SIZE;
+}
+
+static void dmabuf_pool_dma_buf_kunmap(struct dma_buf *dmabuf,
+					unsigned long offset,
+					void *ptr)
+{
+}
+
+static int dmabuf_pool_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					enum dma_data_direction direction)
+{
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+	struct dmabuf_pools_attachment *a;
+	int ret = 0;
+
+	/*
+	 * TODO: Move this elsewhere because we don't always need a vaddr
+	 */
+	if (buffer->pool->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		vaddr = dmabuf_pool_buffer_kmap_get(buffer);
+		if (IS_ERR(vaddr)) {
+			ret = PTR_ERR(vaddr);
+			goto unlock;
+		}
+		mutex_unlock(&buffer->lock);
+	}
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
+				    direction);
+	}
+
+unlock:
+	mutex_unlock(&buffer->lock);
+	return ret;
+}
+
+static int dmabuf_pool_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+				      enum dma_data_direction direction)
+{
+	struct dmabuf_pool_buffer *buffer = dmabuf->priv;
+	struct dmabuf_pools_attachment *a;
+
+	if (buffer->pool->ops->map_kernel) {
+		mutex_lock(&buffer->lock);
+		dmabuf_pool_buffer_kmap_put(buffer);
+		mutex_unlock(&buffer->lock);
+	}
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
+				       direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static const struct dma_buf_ops dma_buf_ops = {
+	.map_dma_buf = dmabuf_pool_map_dma_buf,
+	.unmap_dma_buf = dmabuf_pool_unmap_dma_buf,
+	.mmap = dmabuf_pool_mmap,
+	.release = dmabuf_pool_dma_buf_release,
+	.attach = dmabuf_pool_attach,
+	.detach = dmabuf_pool_detach,
+	.begin_cpu_access = dmabuf_pool_dma_buf_begin_cpu_access,
+	.end_cpu_access = dmabuf_pool_dma_buf_end_cpu_access,
+	.map = dmabuf_pool_dma_buf_kmap,
+	.unmap = dmabuf_pool_dma_buf_kunmap,
+};
+
+int dmabuf_pool_alloc(struct dmabuf_pool *pool, size_t len, unsigned int flags)
+{
+	struct dmabuf_pool_device *dev = internal_dev;
+	struct dmabuf_pool_buffer *buffer = NULL;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	int fd;
+	struct dma_buf *dmabuf;
+
+	pr_debug("%s: pool: %s len %zu flags %x\n", __func__,
+		 pool->name, len, flags);
+
+	len = PAGE_ALIGN(len);
+
+	if (!len)
+		return -EINVAL;
+
+	down_read(&dev->lock);
+	buffer = dmabuf_pool_buffer_create(pool, dev, len, flags);
+	up_read(&dev->lock);
+
+	if (!buffer)
+		return -ENODEV;
+
+	if (IS_ERR(buffer))
+		return PTR_ERR(buffer);
+
+	exp_info.ops = &dma_buf_ops;
+	exp_info.size = buffer->size;
+	exp_info.flags = O_RDWR;
+	exp_info.priv = buffer;
+
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		_dmabuf_pool_buffer_destroy(buffer);
+		return PTR_ERR(dmabuf);
+	}
+
+	fd = dma_buf_fd(dmabuf, O_CLOEXEC);
+	if (fd < 0)
+		dma_buf_put(dmabuf);
+
+	return fd;
+}
+
+static int dmabuf_pool_open(struct inode *inode, struct file *filp)
+{
+	struct dmabuf_pool *pool;
+
+	mutex_lock(&minor_lock);
+	pool = idr_find(&dmabuf_pool_idr, iminor(inode));
+	mutex_unlock(&minor_lock);
+	if (!pool) {
+		pr_debug("device: minor %d unknown.\n", iminor(inode));
+		return -ENODEV;
+	}
+
+	/* instance data as context */
+	filp->private_data = pool;
+	nonseekable_open(inode, filp);
+
+	return 0;
+}
+
+static int dmabuf_pool_release(struct inode *inode, struct file *filp)
+{
+	filp->private_data = NULL;
+
+	return 0;
+}
+
+
+static const struct file_operations dmabuf_pool_fops = {
+	.owner          = THIS_MODULE,
+	.open		= dmabuf_pool_open,
+	.release	= dmabuf_pool_release,
+	.unlocked_ioctl = dmabuf_pool_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= dmabuf_pool_ioctl,
+#endif
+};
+
+static int debug_shrink_set(void *data, u64 val)
+{
+	struct dmabuf_pool *pool = data;
+	struct shrink_control sc;
+	int objs;
+
+	sc.gfp_mask = GFP_HIGHUSER;
+	sc.nr_to_scan = val;
+
+	if (!val) {
+		objs = pool->shrinker.count_objects(&pool->shrinker, &sc);
+		sc.nr_to_scan = objs;
+	}
+
+	pool->shrinker.scan_objects(&pool->shrinker, &sc);
+	return 0;
+}
+
+static int debug_shrink_get(void *data, u64 *val)
+{
+	struct dmabuf_pool *pool = data;
+	struct shrink_control sc;
+	int objs;
+
+	sc.gfp_mask = GFP_HIGHUSER;
+	sc.nr_to_scan = 0;
+
+	objs = pool->shrinker.count_objects(&pool->shrinker, &sc);
+	*val = objs;
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debug_shrink_fops, debug_shrink_get,
+			debug_shrink_set, "%llu\n");
+
+
+static int dmabuf_pool_get_minor(struct dmabuf_pool *pool)
+{
+	int retval = -ENOMEM;
+
+	mutex_lock(&minor_lock);
+	retval = idr_alloc(&dmabuf_pool_idr, pool, 0, NUM_POOL_MINORS,
+			   GFP_KERNEL);
+	if (retval >= 0) {
+		pool->minor = retval;
+		retval = 0;
+	} else if (retval == -ENOSPC) {
+		pr_err("%s: too many dmabuf-pools\n", __func__);
+		retval = -EINVAL;
+	}
+	mutex_unlock(&minor_lock);
+	return retval;
+}
+
+static void dmabuf_pool_free_minor(struct dmabuf_pool *pool)
+{
+	mutex_lock(&minor_lock);
+	idr_remove(&dmabuf_pool_idr, pool->minor);
+	mutex_unlock(&minor_lock);
+}
+
+
+void dmabuf_pool_add(struct dmabuf_pool *pool)
+{
+	struct dmabuf_pool_device *dev = internal_dev;
+	int ret;
+	struct device *dev_ret;
+	struct dentry *pool_root;
+	char debug_name[64];
+
+	if (!pool->ops->allocate || !pool->ops->free) {
+		pr_err("%s: cannot add pool with invalid ops struct.\n",
+		       __func__);
+		return;
+	}
+
+	/* determine minor number */
+	ret = dmabuf_pool_get_minor(pool);
+	if (ret) {
+		pr_err("%s: get minor number failed\n", __func__);
+		return;
+	}
+
+	/* create device */
+	pool->pool_devt = MKDEV(MAJOR(dev->device_devt), pool->minor);
+	dev_ret = device_create(dev->pool_class,
+				NULL,
+				pool->pool_devt,
+				NULL,
+				pool->name);
+	if (IS_ERR(dev_ret)) {
+		pr_err("dmabuf-pools: failed to create char device.\n");
+		return;
+	}
+
+	cdev_init(&pool->pool_dev, &dmabuf_pool_fops);
+	/* each pool's cdev covers just its own single minor */
+	ret = cdev_add(&pool->pool_dev, pool->pool_devt, 1);
+	if (ret < 0) {
+		device_destroy(dev->pool_class, pool->pool_devt);
+		pr_err("dmabuf-pools: failed to add char device.\n");
+		return;
+	}
+
+	spin_lock_init(&pool->free_lock);
+	spin_lock_init(&pool->stat_lock);
+	pool->free_list_size = 0;
+
+	if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE)
+		dmabuf_pool_init_deferred_free(pool);
+
+	if ((pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) || pool->ops->shrink) {
+		ret = dmabuf_pool_init_shrinker(pool);
+		if (ret)
+			pr_err("%s: Failed to register shrinker\n", __func__);
+	}
+
+	pool->num_of_buffers = 0;
+	pool->num_of_alloc_bytes = 0;
+	pool->alloc_bytes_wm = 0;
+
+	pool_root = debugfs_create_dir(pool->name, dev->debug_root);
+	debugfs_create_u64("num_of_buffers",
+			   0444, pool_root,
+			   &pool->num_of_buffers);
+	debugfs_create_u64("num_of_alloc_bytes",
+			   0444,
+			   pool_root,
+			   &pool->num_of_alloc_bytes);
+	debugfs_create_u64("alloc_bytes_wm",
+			   0444,
+			   pool_root,
+			   &pool->alloc_bytes_wm);
+
+	if (pool->shrinker.count_objects &&
+	    pool->shrinker.scan_objects) {
+		snprintf(debug_name, 64, "%s_shrink", pool->name);
+		debugfs_create_file(debug_name,
+				    0644,
+				    pool_root,
+				    pool,
+				    &debug_shrink_fops);
+	}
+
+	down_write(&dev->lock);
+	pool->id = pool_id++;
+	/*
+	 * use negative pool->id to reverse the priority -- when traversing
+	 * the list later attempt higher id numbers first
+	 */
+	plist_node_init(&pool->node, -pool->id);
+	plist_add(&pool->node, &dev->pools);
+
+	up_write(&dev->lock);
+}
+EXPORT_SYMBOL(dmabuf_pool_add);
+
+static int dmabuf_pool_device_create(void)
+{
+	struct dmabuf_pool_device *idev;
+	int ret;
+
+	idev = kzalloc(sizeof(*idev), GFP_KERNEL);
+	if (!idev)
+		return -ENOMEM;
+
+	ret = alloc_chrdev_region(&idev->device_devt, 0, NUM_POOL_MINORS,
+				  DEVNAME);
+	if (ret)
+		goto free_idev;
+
+	idev->pool_class = class_create(THIS_MODULE, DEVNAME);
+	if (IS_ERR(idev->pool_class)) {
+		ret = PTR_ERR(idev->pool_class);
+		goto unreg_region;
+	}
+
+	idev->debug_root = debugfs_create_dir(DEVNAME, NULL);
+	init_rwsem(&idev->lock);
+	plist_head_init(&idev->pools);
+	internal_dev = idev;
+	return 0;
+
+unreg_region:
+	unregister_chrdev_region(idev->device_devt, NUM_POOL_MINORS);
+free_idev:
+	kfree(idev);
+	return ret;
+
+}
+subsys_initcall(dmabuf_pool_device_create);
diff --git a/drivers/dma-buf/pools/dmabuf-pools.h b/drivers/dma-buf/pools/dmabuf-pools.h
new file mode 100644
index 0000000..12110f2
--- /dev/null
+++ b/drivers/dma-buf/pools/dmabuf-pools.h
@@ -0,0 +1,244 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * drivers/dma-buf/pools/dmabuf-pools.h
+ *
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#ifndef _DMABUF_POOLS_H
+#define _DMABUF_POOLS_H
+
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/kref.h>
+#include <linux/mm_types.h>
+#include <linux/mutex.h>
+#include <linux/rbtree.h>
+#include <linux/sched.h>
+#include <linux/shrinker.h>
+#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <uapi/linux/dmabuf-pools.h>
+
+/**
+ * struct dmabuf_pool_buffer - metadata for a particular buffer
+ * @pool:		back pointer to the pool the buffer came from
+ * @flags:		buffer specific flags
+ * @private_flags:	internal buffer specific flags
+ * @size:		size of the buffer
+ * @priv_virt:		private data to the buffer representable as
+ *			a void *
+ * @lock:		protects the buffer's cnt fields
+ * @kmap_cnt:		number of times the buffer is mapped to the kernel
+ * @vaddr:		the kernel mapping if kmap_cnt is not zero
+ * @sg_table:		the sg table for the buffer's backing pages
+ * @attachments:	list head for device attachments
+ * @list:		list head for deferred freeing
+ */
+struct dmabuf_pool_buffer {
+	struct dmabuf_pool *pool;
+	unsigned long flags;
+	unsigned long private_flags;
+	size_t size;
+	void *priv_virt;
+	struct mutex lock;
+	int kmap_cnt;
+	void *vaddr;
+	struct sg_table *sg_table;
+	struct list_head attachments;
+	struct list_head list;
+};
+
+
+/**
+ * struct dmabuf_pool_ops - ops to operate on a given pool
+ * @allocate:		allocate memory
+ * @free:		free memory
+ * @map_kernel:		map memory into the kernel
+ * @unmap_kernel:	unmap memory from the kernel
+ * @map_user:		map memory into userspace
+ * @shrink:		shrinker hook to reduce pool memory usage
+ *
+ * @allocate and @map_user return 0 on success, -errno on error.
+ * @map_kernel returns a pointer on success, ERR_PTR on error.
+ * @free will be called with DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE set in
+ * the buffer's private_flags when called from a shrinker. In that
+ * case, the pages being freed must be truly freed back to the
+ * system, not put in a page pool or otherwise cached.
+ */
+struct dmabuf_pool_ops {
+	int (*allocate)(struct dmabuf_pool *pool,
+			struct dmabuf_pool_buffer *buffer, unsigned long len,
+			unsigned long flags);
+	void (*free)(struct dmabuf_pool_buffer *buffer);
+	void * (*map_kernel)(struct dmabuf_pool *pool,
+			     struct dmabuf_pool_buffer *buffer);
+	void (*unmap_kernel)(struct dmabuf_pool *pool,
+			     struct dmabuf_pool_buffer *buffer);
+	int (*map_user)(struct dmabuf_pool *mapper,
+			struct dmabuf_pool_buffer *buffer,
+			struct vm_area_struct *vma);
+	int (*shrink)(struct dmabuf_pool *pool, gfp_t gfp_mask,
+		      int nr_to_scan);
+};
+
+/**
+ * pool flags - flags between the dmabuf pools and core dmabuf code
+ */
+#define DMABUF_POOL_FLAG_DEFER_FREE BIT(0)
+
+/**
+ * private flags - flags internal to dmabuf_pools
+ */
+/*
+ * Buffer is being freed from a shrinker function. Skip any possible
+ * pool-specific caching mechanism (e.g. page pools). Guarantees that
+ * any buffer storage that came from the system allocator will be
+ * returned to the system allocator.
+ */
+#define DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE BIT(0)
+
+/**
+ * struct dmabuf_pool - represents a dmabuf pool in the system
+ * @node:		plist node to put the pool on the device's list of pools
+ * @pool_devt:		device number for the pool's device node
+ * @pool_dev:		cdev for the pool's device node
+ * @minor:		minor number of the pool's device node
+ * @ops:		ops struct as above
+ * @flags:		flags
+ * @id:			id of pool
+ * @name:		used for debugging/device-node name
+ * @shrinker:		a shrinker for the pool
+ * @free_list:		free list head if deferred free is used
+ * @free_list_size:	size of the deferred free list in bytes
+ * @lock:		protects the free list
+ * @waitqueue:		queue to wait on from deferred free thread
+ * @task:		task struct of deferred free thread
+ * @num_of_buffers:	the number of currently allocated buffers
+ * @num_of_alloc_bytes:	the number of allocated bytes
+ * @alloc_bytes_wm:	the high watermark of allocated bytes
+ * @stat_lock:		lock for pool statistics
+ *
+ * Represents a pool of memory from which buffers can be made.  In some
+ * systems the only pool is regular system memory from the page allocator.
+ * On others, some blocks might require large physically contiguous buffers
+ * that are allocated from a specially reserved pool.
+ */
+struct dmabuf_pool {
+	struct plist_node node;
+	dev_t pool_devt;
+	struct cdev pool_dev;
+	unsigned int minor;
+	struct dmabuf_pool_ops *ops;
+	unsigned long flags;
+	unsigned int id;
+	const char *name;
+	struct shrinker shrinker;
+	struct list_head free_list;
+	size_t free_list_size;
+	spinlock_t free_lock;
+	wait_queue_head_t waitqueue;
+	struct task_struct *task;
+	u64 num_of_buffers;
+	u64 num_of_alloc_bytes;
+	u64 alloc_bytes_wm;
+	spinlock_t stat_lock;
+};
+
+/**
+ * dmabuf_pool_add - adds a pool to dmabuf pools
+ * @pool:		the pool to add
+ */
+void dmabuf_pool_add(struct dmabuf_pool *pool);
+
+/**
+ * some helpers for common operations on buffers using the sg_table
+ * and vaddr fields
+ */
+void *dmabuf_pool_map_kernel(struct dmabuf_pool *pool,
+			     struct dmabuf_pool_buffer *buffer);
+void dmabuf_pool_unmap_kernel(struct dmabuf_pool *pool,
+			      struct dmabuf_pool_buffer *buffer);
+int dmabuf_pool_map_user(struct dmabuf_pool *pool,
+			 struct dmabuf_pool_buffer *buffer,
+			 struct vm_area_struct *vma);
+int dmabuf_pool_buffer_zero(struct dmabuf_pool_buffer *buffer);
+int dmabuf_pool_pages_zero(struct page *page, size_t size, pgprot_t pgprot);
+int dmabuf_pool_alloc(struct dmabuf_pool *pool, size_t len,
+		      unsigned int flags);
+void dmabuf_pool_buffer_destroy(struct dmabuf_pool_buffer *buffer);
+
+/**
+ * dmabuf_pool_init_shrinker
+ * @pool:		the pool
+ *
+ * If a pool sets the DMABUF_POOL_FLAG_DEFER_FREE flag or defines the shrink op
+ * this function will be called to setup a shrinker to shrink the freelists
+ * and call the pool's shrink op.
+ */
+int dmabuf_pool_init_shrinker(struct dmabuf_pool *pool);
+
+/**
+ * dmabuf_pool_init_deferred_free -- initialize deferred free functionality
+ * @pool:		the pool
+ *
+ * If a pool sets the DMABUF_POOL_FLAG_DEFER_FREE flag this function will
+ * be called to setup deferred frees. Calls to free the buffer will
+ * return immediately and the actual free will occur some time later
+ */
+int dmabuf_pool_init_deferred_free(struct dmabuf_pool *pool);
+
+/**
+ * dmabuf_pool_freelist_add - add a buffer to the deferred free list
+ * @pool:		the pool
+ * @buffer:		the buffer
+ *
+ * Adds an item to the deferred freelist.
+ */
+void dmabuf_pool_freelist_add(struct dmabuf_pool *pool,
+			      struct dmabuf_pool_buffer *buffer);
+
+/**
+ * dmabuf_pool_freelist_drain - drain the deferred free list
+ * @pool:		the pool
+ * @size:		amount of memory to drain in bytes
+ *
+ * Drains the indicated amount of memory from the deferred freelist immediately.
+ * Returns the total amount freed.  The total freed may be higher depending
+ * on the size of the items in the list, or lower if there is insufficient
+ * total memory on the freelist.
+ */
+size_t dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size);
+
+/**
+ * dmabuf_pool_freelist_shrink - drain the deferred free
+ *				list, skipping any pool-specific
+ *				pooling or caching mechanisms
+ *
+ * @pool:		the pool
+ * @size:		amount of memory to drain in bytes
+ *
+ * Drains the indicated amount of memory from the deferred freelist immediately.
+ * Returns the total amount freed.  The total freed may be higher depending
+ * on the size of the items in the list, or lower if there is insufficient
+ * total memory on the freelist.
+ *
+ * Unlike with @dmabuf_pool_freelist_drain, don't put any pages back into
+ * page pools or otherwise cache the pages. Everything must be
+ * genuinely free'd back to the system. If you're free'ing from a
+ * shrinker you probably want to use this. Note that this relies on
+ * the pool.ops.free callback honoring the DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE
+ * flag.
+ */
+size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool,
+				size_t size);
+
+/**
+ * dmabuf_pool_freelist_size - returns the size of the freelist in bytes
+ * @pool:		the pool
+ */
+size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool);
+
+
+long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
+
+#endif /* _DMABUF_POOLS_H */
diff --git a/drivers/dma-buf/pools/pool-helpers.c b/drivers/dma-buf/pools/pool-helpers.c
new file mode 100644
index 0000000..d8bdfb9
--- /dev/null
+++ b/drivers/dma-buf/pools/pool-helpers.c
@@ -0,0 +1,317 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/pool-helpers.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/err.h>
+#include <linux/freezer.h>
+#include <linux/kthread.h>
+#include <linux/mm.h>
+#include <linux/rtmutex.h>
+#include <linux/sched.h>
+#include <uapi/linux/sched/types.h>
+#include <linux/scatterlist.h>
+#include <linux/vmalloc.h>
+#include "dmabuf-pools.h"
+
+void *dmabuf_pool_map_kernel(struct dmabuf_pool *pool,
+			  struct dmabuf_pool_buffer *buffer)
+{
+	struct scatterlist *sg;
+	int i, j;
+	void *vaddr;
+	pgprot_t pgprot;
+	struct sg_table *table = buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+	struct page **pages = vmalloc(array_size(npages,
+						 sizeof(struct page *)));
+	struct page **tmp = pages;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	if (buffer->flags & DMABUF_POOL_FLAG_CACHED)
+		pgprot = PAGE_KERNEL;
+	else
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
+		struct page *page = sg_page(sg);
+
+		WARN_ON(i >= npages);
+		for (j = 0; j < npages_this_entry; j++)
+			*(tmp++) = page++;
+	}
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+void dmabuf_pool_unmap_kernel(struct dmabuf_pool *pool,
+			   struct dmabuf_pool_buffer *buffer)
+{
+	vunmap(buffer->vaddr);
+}
+
+int dmabuf_pool_map_user(struct dmabuf_pool *pool,
+			 struct dmabuf_pool_buffer *buffer,
+			 struct vm_area_struct *vma)
+{
+	struct sg_table *table = buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
+	struct scatterlist *sg;
+	int i;
+	int ret;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+		unsigned long remainder = vma->vm_end - addr;
+		unsigned long len = sg->length;
+
+		if (offset >= sg->length) {
+			offset -= sg->length;
+			continue;
+		} else if (offset) {
+			page += offset / PAGE_SIZE;
+			len = sg->length - offset;
+			offset = 0;
+		}
+		len = min(len, remainder);
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += len;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static int dmabuf_pool_clear_pages(struct page **pages, int num,
+				   pgprot_t pgprot)
+{
+	void *addr = vm_map_ram(pages, num, -1, pgprot);
+
+	if (!addr)
+		return -ENOMEM;
+	memset(addr, 0, PAGE_SIZE * num);
+	vm_unmap_ram(addr, num);
+
+	return 0;
+}
+
+static int dmabuf_pool_sglist_zero(struct scatterlist *sgl, unsigned int nents,
+				pgprot_t pgprot)
+{
+	int p = 0;
+	int ret = 0;
+	struct sg_page_iter piter;
+	struct page *pages[32];
+
+	for_each_sg_page(sgl, &piter, nents, 0) {
+		pages[p++] = sg_page_iter_page(&piter);
+		if (p == ARRAY_SIZE(pages)) {
+			ret = dmabuf_pool_clear_pages(pages, p, pgprot);
+			if (ret)
+				return ret;
+			p = 0;
+		}
+	}
+	if (p)
+		ret = dmabuf_pool_clear_pages(pages, p, pgprot);
+
+	return ret;
+}
+
+int dmabuf_pool_buffer_zero(struct dmabuf_pool_buffer *buffer)
+{
+	struct sg_table *table = buffer->sg_table;
+	pgprot_t pgprot;
+
+	if (buffer->flags & DMABUF_POOL_FLAG_CACHED)
+		pgprot = PAGE_KERNEL;
+	else
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+	return dmabuf_pool_sglist_zero(table->sgl, table->nents, pgprot);
+}
+
+int dmabuf_pool_pages_zero(struct page *page, size_t size, pgprot_t pgprot)
+{
+	struct scatterlist sg;
+
+	sg_init_table(&sg, 1);
+	sg_set_page(&sg, page, size, 0);
+	return dmabuf_pool_sglist_zero(&sg, 1, pgprot);
+}
+
+void dmabuf_pool_freelist_add(struct dmabuf_pool *pool,
+			      struct dmabuf_pool_buffer *buffer)
+{
+	spin_lock(&pool->free_lock);
+	list_add(&buffer->list, &pool->free_list);
+	pool->free_list_size += buffer->size;
+	spin_unlock(&pool->free_lock);
+	wake_up(&pool->waitqueue);
+}
+
+size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool)
+{
+	size_t size;
+
+	spin_lock(&pool->free_lock);
+	size = pool->free_list_size;
+	spin_unlock(&pool->free_lock);
+
+	return size;
+}
+
+static size_t _dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size,
+					  bool skip_pools)
+{
+	struct dmabuf_pool_buffer *buffer;
+	size_t total_drained = 0;
+
+	if (dmabuf_pool_freelist_size(pool) == 0)
+		return 0;
+
+	spin_lock(&pool->free_lock);
+	if (size == 0)
+		size = pool->free_list_size;
+
+	while (!list_empty(&pool->free_list)) {
+		if (total_drained >= size)
+			break;
+		buffer = list_first_entry(&pool->free_list,
+					  struct dmabuf_pool_buffer,
+					  list);
+		list_del(&buffer->list);
+		pool->free_list_size -= buffer->size;
+		if (skip_pools)
+			buffer->private_flags |=
+					DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE;
+		total_drained += buffer->size;
+		spin_unlock(&pool->free_lock);
+		dmabuf_pool_buffer_destroy(buffer);
+		spin_lock(&pool->free_lock);
+	}
+	spin_unlock(&pool->free_lock);
+
+	return total_drained;
+}
+
+size_t dmabuf_pool_freelist_drain(struct dmabuf_pool *pool, size_t size)
+{
+	return _dmabuf_pool_freelist_drain(pool, size, false);
+}
+
+size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool, size_t size)
+{
+	return _dmabuf_pool_freelist_drain(pool, size, true);
+}
+
+static int dmabuf_pool_deferred_free(void *data)
+{
+	struct dmabuf_pool *pool = data;
+
+	while (true) {
+		struct dmabuf_pool_buffer *buffer;
+
+		wait_event_freezable(pool->waitqueue,
+				     dmabuf_pool_freelist_size(pool) > 0);
+
+		spin_lock(&pool->free_lock);
+		if (list_empty(&pool->free_list)) {
+			spin_unlock(&pool->free_lock);
+			continue;
+		}
+		buffer = list_first_entry(&pool->free_list,
+					  struct dmabuf_pool_buffer,
+					  list);
+		list_del(&buffer->list);
+		pool->free_list_size -= buffer->size;
+		spin_unlock(&pool->free_lock);
+		dmabuf_pool_buffer_destroy(buffer);
+	}
+
+	return 0;
+}
+
+int dmabuf_pool_init_deferred_free(struct dmabuf_pool *pool)
+{
+	struct sched_param param = { .sched_priority = 0 };
+
+	INIT_LIST_HEAD(&pool->free_list);
+	init_waitqueue_head(&pool->waitqueue);
+	pool->task = kthread_run(dmabuf_pool_deferred_free, pool,
+				 "%s", pool->name);
+	if (IS_ERR(pool->task)) {
+		pr_err("%s: creating thread for deferred free failed\n",
+		       __func__);
+		return PTR_ERR(pool->task);
+	}
+	sched_setscheduler(pool->task, SCHED_IDLE, &param);
+	return 0;
+}
+
+static unsigned long dmabuf_pool_shrink_count(struct shrinker *shrinker,
+					   struct shrink_control *sc)
+{
+	struct dmabuf_pool *pool = container_of(shrinker, struct dmabuf_pool,
+						shrinker);
+	int total = 0;
+
+	total = dmabuf_pool_freelist_size(pool) / PAGE_SIZE;
+	if (pool->ops->shrink)
+		total += pool->ops->shrink(pool, sc->gfp_mask, 0);
+	return total;
+}
+
+static unsigned long dmabuf_pool_shrink_scan(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	struct dmabuf_pool *pool = container_of(shrinker, struct dmabuf_pool,
+						shrinker);
+	int freed = 0;
+	int to_scan = sc->nr_to_scan;
+
+	if (to_scan == 0)
+		return 0;
+
+	/*
+	 * shrink the free list first, no point in zeroing the memory if we're
+	 * just going to reclaim it. Also, skip any possible page pooling.
+	 */
+	if (pool->flags & DMABUF_POOL_FLAG_DEFER_FREE) {
+		freed = dmabuf_pool_freelist_shrink(pool, to_scan * PAGE_SIZE);
+		freed /= PAGE_SIZE;
+	}
+
+	to_scan -= freed;
+	if (to_scan <= 0)
+		return freed;
+
+	if (pool->ops->shrink)
+		freed += pool->ops->shrink(pool, sc->gfp_mask, to_scan);
+	return freed;
+}
+
+int dmabuf_pool_init_shrinker(struct dmabuf_pool *pool)
+{
+	pool->shrinker.count_objects = dmabuf_pool_shrink_count;
+	pool->shrinker.scan_objects = dmabuf_pool_shrink_scan;
+	pool->shrinker.seeks = DEFAULT_SEEKS;
+	pool->shrinker.batch = 0;
+
+	return register_shrinker(&pool->shrinker);
+}
diff --git a/drivers/dma-buf/pools/pool-ioctl.c b/drivers/dma-buf/pools/pool-ioctl.c
new file mode 100644
index 0000000..53153fc
--- /dev/null
+++ b/drivers/dma-buf/pools/pool-ioctl.c
@@ -0,0 +1,94 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/kernel.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
+#include "dmabuf-pools.h"
+
+union pool_ioctl_arg {
+	struct dmabuf_pool_allocation_data pool_allocation;
+};
+
+static int validate_ioctl_arg(unsigned int cmd, union pool_ioctl_arg *arg)
+{
+	switch (cmd) {
+	case DMABUF_POOL_IOC_ALLOC:
+		if (arg->pool_allocation.reserved0 ||
+		    arg->pool_allocation.reserved1)
+			return -EINVAL;
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+/* fix up the cases where the ioctl direction bits are incorrect */
+static unsigned int pool_ioctl_dir(unsigned int cmd)
+{
+	switch (cmd) {
+	default:
+		return _IOC_DIR(cmd);
+	}
+}
+
+long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+	int ret = 0;
+	unsigned int dir;
+	union pool_ioctl_arg data;
+
+	dir = pool_ioctl_dir(cmd);
+
+	if (_IOC_SIZE(cmd) > sizeof(data))
+		return -EINVAL;
+
+	/*
+	 * The copy_from_user is unconditional here for both read and write
+	 * ioctls so that the argument can be validated. If the ioctl has
+	 * no write direction, the buffer is cleared afterwards.
+	 */
+	if (copy_from_user(&data, (void __user *)arg, _IOC_SIZE(cmd)))
+		return -EFAULT;
+
+	ret = validate_ioctl_arg(cmd, &data);
+	if (ret) {
+		pr_warn_once("%s: ioctl validate failed\n", __func__);
+		return ret;
+	}
+
+	if (!(dir & _IOC_WRITE))
+		memset(&data, 0, sizeof(data));
+
+	switch (cmd) {
+	case DMABUF_POOL_IOC_ALLOC:
+	{
+		/* dmabuf_pool_open() stashed the pool in private_data */
+		struct dmabuf_pool *pool = filp->private_data;
+		int fd;
+
+		fd = dmabuf_pool_alloc(pool, data.pool_allocation.len,
+			       data.pool_allocation.flags);
+		if (fd < 0)
+			return fd;
+
+		data.pool_allocation.fd = fd;
+
+		break;
+	}
+	default:
+		return -ENOTTY;
+	}
+
+	if (dir & _IOC_READ) {
+		if (copy_to_user((void __user *)arg, &data, _IOC_SIZE(cmd)))
+			return -EFAULT;
+	}
+	return ret;
+}
diff --git a/include/uapi/linux/dmabuf-pools.h b/include/uapi/linux/dmabuf-pools.h
new file mode 100644
index 0000000..bad9b11
--- /dev/null
+++ b/include/uapi/linux/dmabuf-pools.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * include/uapi/linux/dmabuf-pools.h
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _UAPI_LINUX_DMABUF_POOL_H
+#define _UAPI_LINUX_DMABUF_POOL_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+/**
+ * allocation flags - the lower 16 bits are used by core dmabuf pools, the
+ * upper 16 bits are reserved for use by the pools themselves.
+ */
+
+/*
+ * mappings of this buffer should be cached, dmabuf pools will do cache
+ * maintenance when the buffer is mapped for dma
+ */
+#define DMABUF_POOL_FLAG_CACHED 1
+
+/**
+ * DOC: DMABUF Pool Userspace API
+ *
+ */
+
+/**
+ * struct dmabuf_pool_allocation_data - metadata passed from userspace for
+ *                                      allocations
+ * @len:		size of the allocation
+ * @flags:		flags passed to pool
+ * @fd:			will be populated with an fd which provides the
+ *			handle to the allocated dma-buf
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dmabuf_pool_allocation_data {
+	__u64 len;
+	__u32 flags;
+	__u32 fd;
+	__u32 reserved0;
+	__u32 reserved1;
+};
+
+#define DMABUF_POOL_IOC_MAGIC		'P'
+
+/**
+ * DOC: DMABUF_POOL_IOC_ALLOC - allocate memory from pool
+ *
+ * Takes a dmabuf_pool_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMABUF_POOL_IOC_ALLOC	_IOWR(DMABUF_POOL_IOC_MAGIC, 0, \
+				      struct dmabuf_pool_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */
-- 
2.7.4


* [EARLY RFC][PATCH 2/4] dma-buf: pools: Add page-pool for dma-buf pools
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 1/4] dma-buf: Add dma-buf pools framework John Stultz
@ 2019-02-21  7:40 ` John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 3/4] dma-buf: pools: Add system/system-contig pools to dmabuf pools John Stultz
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-21  7:40 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F. Davis, dri-devel

This adds the page-pool logic to the dma-buf pools, which allows
a pool to keep pre-allocated/flushed pages around to speed up
allocation performance.
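
As a minimal sketch of the API this patch adds (error handling
mostly elided; the example_use() wrapper is just for
illustration), a pool implementation creates a page pool per
allocation order and draws from it in its allocate op:

  static int example_use(void)
  {
          struct dmabuf_page_pool *pagepool;
          struct page *page;

          /* one pool of order-0 pages */
          pagepool = dmabuf_page_pool_create(GFP_HIGHUSER | __GFP_ZERO, 0);
          if (!pagepool)
                  return -ENOMEM;

          /* reuses a cached page when available, else alloc_pages() */
          page = dmabuf_page_pool_alloc(pagepool);
          if (page)
                  /* return the page to the cache, not __free_pages() */
                  dmabuf_page_pool_free(pagepool, page);

          /* shrinker path: release up to 16 pages back to the system */
          dmabuf_page_pool_shrink(pagepool, GFP_KERNEL, 16);

          dmabuf_page_pool_destroy(pagepool);
          return 0;
  }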

NOTE: The page-pool name is a term preserved from ION, but it
could easily be confused with the dma-buf pools themselves.
Suggestions for alternatives here would be great.

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/dma-buf/pools/Makefile       |   2 +-
 drivers/dma-buf/pools/dmabuf-pools.h |  51 ++++++++++++
 drivers/dma-buf/pools/page_pool.c    | 157 +++++++++++++++++++++++++++++++++++
 3 files changed, 209 insertions(+), 1 deletion(-)
 create mode 100644 drivers/dma-buf/pools/page_pool.c

diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index 6cb1284..a51ec25 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,2 +1,2 @@
 # SPDX-License-Identifier: GPL-2.0
-obj-$(CONFIG_DMABUF_POOLS)		+= dmabuf-pools.o pool-ioctl.o pool-helpers.o
+obj-$(CONFIG_DMABUF_POOLS)		+= dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
diff --git a/drivers/dma-buf/pools/dmabuf-pools.h b/drivers/dma-buf/pools/dmabuf-pools.h
index 12110f2..e3a0aac 100644
--- a/drivers/dma-buf/pools/dmabuf-pools.h
+++ b/drivers/dma-buf/pools/dmabuf-pools.h
@@ -238,6 +238,57 @@ size_t dmabuf_pool_freelist_shrink(struct dmabuf_pool *pool,
  */
 size_t dmabuf_pool_freelist_size(struct dmabuf_pool *pool);
 
+/**
+ * functions for creating and destroying a page pool -- allows you
+ * to keep a page pool of pre-allocated memory to use from your pool.
+ * Keeping a page pool of memory that is ready for dma, i.e. any cached
+ * mappings have been invalidated from the cache, provides a significant
+ * performance benefit on many systems
+ */
+
+/**
+ * struct dmabuf_page_pool - pagepool struct
+ * @high_count:		number of highmem items in the pool
+ * @low_count:		number of lowmem items in the pool
+ * @high_items:		list of highmem items
+ * @low_items:		list of lowmem items
+ * @mutex:		lock protecting this struct, especially the counts
+ *			and item lists
+ * @gfp_mask:		gfp_mask to use for allocations
+ * @order:		order of pages in the pool
+ * @list:		plist node for list of pools
+ *
+ * Allows you to keep a page pool of pre-allocated pages to use from your
+ * pool. Keeping a pool of pages that is ready for dma, i.e. any cached
+ * mappings have been invalidated from the cache, provides a significant
+ * performance benefit on many systems
+ */
+struct dmabuf_page_pool {
+	int high_count;
+	int low_count;
+	struct list_head high_items;
+	struct list_head low_items;
+	struct mutex mutex;
+	gfp_t gfp_mask;
+	unsigned int order;
+	struct plist_node list;
+};
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask,
+						 unsigned int order);
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool);
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool);
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page);
+
+/**
+ * dmabuf_page_pool_shrink - shrink the amount of memory cached in the pool
+ * @pool:		the page pool
+ * @gfp_mask:		the memory type to reclaim
+ * @nr_to_scan:		number of items to shrink in pages
+ *
+ * returns the number of items freed in pages
+ */
+int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask,
+			    int nr_to_scan);
 
 long dmabuf_pool_ioctl(struct file *filp, unsigned int cmd, unsigned long arg);
 
diff --git a/drivers/dma-buf/pools/page_pool.c b/drivers/dma-buf/pools/page_pool.c
new file mode 100644
index 0000000..c1fe994
--- /dev/null
+++ b/drivers/dma-buf/pools/page_pool.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/page_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+
+#include "dmabuf-pools.h"
+
+static inline struct page *dmabuf_page_pool_alloc_pages(
+						struct dmabuf_page_pool *pool)
+{
+	return alloc_pages(pool->gfp_mask, pool->order);
+}
+
+static void dmabuf_page_pool_free_pages(struct dmabuf_page_pool *pool,
+					struct page *page)
+{
+	__free_pages(page, pool->order);
+}
+
+static void dmabuf_page_pool_add(struct dmabuf_page_pool *pool,
+				 struct page *page)
+{
+	mutex_lock(&pool->mutex);
+	if (PageHighMem(page)) {
+		list_add_tail(&page->lru, &pool->high_items);
+		pool->high_count++;
+	} else {
+		list_add_tail(&page->lru, &pool->low_items);
+		pool->low_count++;
+	}
+
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+							1 << pool->order);
+	mutex_unlock(&pool->mutex);
+}
+
+static struct page *dmabuf_page_pool_remove(struct dmabuf_page_pool *pool,
+					    bool high)
+{
+	struct page *page;
+
+	if (high) {
+		WARN_ON(!pool->high_count);
+		page = list_first_entry(&pool->high_items, struct page, lru);
+		pool->high_count--;
+	} else {
+		WARN_ON(!pool->low_count);
+		page = list_first_entry(&pool->low_items, struct page, lru);
+		pool->low_count--;
+	}
+
+	list_del(&page->lru);
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+							-(1 << pool->order));
+	return page;
+}
+
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool)
+{
+	struct page *page = NULL;
+
+	WARN_ON(!pool);
+
+	mutex_lock(&pool->mutex);
+	if (pool->high_count)
+		page = dmabuf_page_pool_remove(pool, true);
+	else if (pool->low_count)
+		page = dmabuf_page_pool_remove(pool, false);
+	mutex_unlock(&pool->mutex);
+
+	if (!page)
+		page = dmabuf_page_pool_alloc_pages(pool);
+
+	return page;
+}
+
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page)
+{
+	WARN_ON(pool->order != compound_order(page));
+
+	dmabuf_page_pool_add(pool, page);
+}
+
+static int dmabuf_page_pool_total(struct dmabuf_page_pool *pool, bool high)
+{
+	int count = pool->low_count;
+
+	if (high)
+		count += pool->high_count;
+
+	return count << pool->order;
+}
+
+int dmabuf_page_pool_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask,
+			    int nr_to_scan)
+{
+	int freed = 0;
+	bool high;
+
+	if (current_is_kswapd())
+		high = true;
+	else
+		high = !!(gfp_mask & __GFP_HIGHMEM);
+
+	if (nr_to_scan == 0)
+		return dmabuf_page_pool_total(pool, high);
+
+	while (freed < nr_to_scan) {
+		struct page *page;
+
+		mutex_lock(&pool->mutex);
+		if (pool->low_count) {
+			page = dmabuf_page_pool_remove(pool, false);
+		} else if (high && pool->high_count) {
+			page = dmabuf_page_pool_remove(pool, true);
+		} else {
+			mutex_unlock(&pool->mutex);
+			break;
+		}
+		mutex_unlock(&pool->mutex);
+		dmabuf_page_pool_free_pages(pool, page);
+		freed += (1 << pool->order);
+	}
+
+	return freed;
+}
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask,
+						 unsigned int order)
+{
+	struct dmabuf_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+
+	if (!pool)
+		return NULL;
+	pool->high_count = 0;
+	pool->low_count = 0;
+	INIT_LIST_HEAD(&pool->low_items);
+	INIT_LIST_HEAD(&pool->high_items);
+	pool->gfp_mask = gfp_mask | __GFP_COMP;
+	pool->order = order;
+	mutex_init(&pool->mutex);
+	plist_node_init(&pool->list, order);
+
+	return pool;
+}
+
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool)
+{
+	kfree(pool);
+}
-- 
2.7.4


* [EARLY RFC][PATCH 3/4] dma-buf: pools: Add system/system-contig pools to dmabuf pools
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 1/4] dma-buf: Add dma-buf pools framework John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 2/4] dma-buf: pools: Add page-pool for dma-buf pools John Stultz
@ 2019-02-21  7:40 ` John Stultz
  2019-02-21  7:40 ` [EARLY RFC][PATCH 4/4] dma-buf: pools: Add CMA pool " John Stultz
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-21  7:40 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F. Davis, dri-devel

This patch adds system and system-contig pools to the dma-buf
pools framework.

This allows applications to get a page-allocator-backed
dma-buf of either non-contiguous or contiguous memory.
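
For instance (device node name assumed for illustration), the fd
returned by the ALLOC ioctl is an ordinary dma-buf, so userspace
can mmap it directly or pass it to any dma-buf aware driver:

  int pool_fd = open("/dev/dmabuf_pools/system_contig", O_RDWR);
  struct dmabuf_pool_allocation_data data = { .len = 2 * 1024 * 1024 };

  ioctl(pool_fd, DMABUF_POOL_IOC_ALLOC, &data);

  /* data.fd is a dma-buf fd; map the buffer into our address space */
  void *buf = mmap(NULL, data.len, PROT_READ | PROT_WRITE,
                   MAP_SHARED, data.fd, 0);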

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/dma-buf/pools/Kconfig       |   7 +
 drivers/dma-buf/pools/Makefile      |   1 +
 drivers/dma-buf/pools/system_pool.c | 374 ++++++++++++++++++++++++++++++++++++
 3 files changed, 382 insertions(+)
 create mode 100644 drivers/dma-buf/pools/system_pool.c

diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
index caa7eb8..787b2a6 100644
--- a/drivers/dma-buf/pools/Kconfig
+++ b/drivers/dma-buf/pools/Kconfig
@@ -8,3 +8,10 @@ menuconfig DMABUF_POOLS
 	  which allow userspace to allocate dma-bufs that can be shared between
 	  drivers.
 	  If you're not using Android its probably safe to say N here.
+
+config DMABUF_POOLS_SYSTEM
+	bool "DMA-BUF System Pool"
+	depends on DMABUF_POOLS
+	help
+	  Choose this option to enable the system dmabuf pool. The system pool
+	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index a51ec25..2ccf2a1 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_POOLS)		+= dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
+obj-$(CONFIG_DMABUF_POOLS_SYSTEM)	+= system_pool.o
diff --git a/drivers/dma-buf/pools/system_pool.c b/drivers/dma-buf/pools/system_pool.c
new file mode 100644
index 0000000..1756990
--- /dev/null
+++ b/drivers/dma-buf/pools/system_pool.c
@@ -0,0 +1,374 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/system_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <asm/page.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "dmabuf-pools.h"
+
+#define NUM_ORDERS ARRAY_SIZE(orders)
+
+static gfp_t high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN |
+				     __GFP_NORETRY) & ~__GFP_RECLAIM;
+static gfp_t low_order_gfp_flags  = GFP_HIGHUSER | __GFP_ZERO;
+static const unsigned int orders[] = {8, 4, 0};
+
+static int order_to_index(unsigned int order)
+{
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++)
+		if (order == orders[i])
+			return i;
+	WARN_ON(1);
+	return -1;
+}
+
+static inline unsigned int order_to_size(int order)
+{
+	return PAGE_SIZE << order;
+}
+
+struct system_pool {
+	struct dmabuf_pool pool;
+	struct dmabuf_page_pool *page_pools[NUM_ORDERS];
+};
+
+static struct page *alloc_buffer_page(struct system_pool *sys_pool,
+				      struct dmabuf_pool_buffer *buffer,
+				      unsigned long order)
+{
+	struct dmabuf_page_pool *pagepool =
+				sys_pool->page_pools[order_to_index(order)];
+
+	return dmabuf_page_pool_alloc(pagepool);
+}
+
+static void free_buffer_page(struct system_pool *sys_pool,
+			     struct dmabuf_pool_buffer *buffer,
+			     struct page *page)
+{
+	struct dmabuf_page_pool *pagepool;
+	unsigned int order = compound_order(page);
+
+	/* freed via the shrinker: return pages straight to the system */
+	if (buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE) {
+		__free_pages(page, order);
+		return;
+	}
+
+	pagepool = sys_pool->page_pools[order_to_index(order)];
+
+	dmabuf_page_pool_free(pagepool, page);
+}
+
+static struct page *alloc_largest_available(struct system_pool *sys_pool,
+					    struct dmabuf_pool_buffer *buffer,
+					    unsigned long size,
+					    unsigned int max_order)
+{
+	struct page *page;
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		if (size < order_to_size(orders[i]))
+			continue;
+		if (max_order < orders[i])
+			continue;
+
+		page = alloc_buffer_page(sys_pool, buffer, orders[i]);
+		if (!page)
+			continue;
+
+		return page;
+	}
+
+	return NULL;
+}
+
+static int system_pool_allocate(struct dmabuf_pool *pool,
+				    struct dmabuf_pool_buffer *buffer,
+				    unsigned long size,
+				    unsigned long flags)
+{
+	struct system_pool *sys_pool = container_of(pool,
+							struct system_pool,
+							pool);
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct list_head pages;
+	struct page *page, *tmp_page;
+	int i = 0;
+	unsigned long size_remaining = PAGE_ALIGN(size);
+	unsigned int max_order = orders[0];
+
+	if (size / PAGE_SIZE > totalram_pages() / 2)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&pages);
+	while (size_remaining > 0) {
+		page = alloc_largest_available(sys_pool, buffer, size_remaining,
+					       max_order);
+		if (!page)
+			goto free_pages;
+		list_add_tail(&page->lru, &pages);
+		size_remaining -= PAGE_SIZE << compound_order(page);
+		max_order = compound_order(page);
+		i++;
+	}
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table)
+		goto free_pages;
+
+	if (sg_alloc_table(table, i, GFP_KERNEL))
+		goto free_table;
+
+	sg = table->sgl;
+	list_for_each_entry_safe(page, tmp_page, &pages, lru) {
+		sg_set_page(sg, page, PAGE_SIZE << compound_order(page), 0);
+		sg = sg_next(sg);
+		list_del(&page->lru);
+	}
+
+	buffer->sg_table = table;
+	return 0;
+
+free_table:
+	kfree(table);
+free_pages:
+	list_for_each_entry_safe(page, tmp_page, &pages, lru)
+		free_buffer_page(sys_pool, buffer, page);
+	return -ENOMEM;
+}
+
+static void system_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+	struct system_pool *sys_pool = container_of(buffer->pool,
+							struct system_pool,
+							pool);
+	struct sg_table *table = buffer->sg_table;
+	struct scatterlist *sg;
+	int i;
+
+	/* zero the buffer before returning it to the page pool */
+	if (!(buffer->private_flags & DMABUF_POOL_PRIV_FLAG_SHRINKER_FREE))
+		dmabuf_pool_buffer_zero(buffer);
+
+	for_each_sg(table->sgl, sg, table->nents, i)
+		free_buffer_page(sys_pool, buffer, sg_page(sg));
+	sg_free_table(table);
+	kfree(table);
+}
+
+static int system_pool_shrink(struct dmabuf_pool *pool, gfp_t gfp_mask,
+				  int nr_to_scan)
+{
+	struct dmabuf_page_pool *page_pool;
+	struct system_pool *sys_pool;
+	int nr_total = 0;
+	int i, nr_freed;
+	int only_scan = 0;
+
+	sys_pool = container_of(pool, struct system_pool, pool);
+
+	if (!nr_to_scan)
+		only_scan = 1;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		page_pool = sys_pool->page_pools[i];
+
+		if (only_scan) {
+			nr_total += dmabuf_page_pool_shrink(page_pool,
+							 gfp_mask,
+							 nr_to_scan);
+
+		} else {
+			nr_freed = dmabuf_page_pool_shrink(page_pool,
+							gfp_mask,
+							nr_to_scan);
+			nr_to_scan -= nr_freed;
+			nr_total += nr_freed;
+			if (nr_to_scan <= 0)
+				break;
+		}
+	}
+	return nr_total;
+}
+
+static struct dmabuf_pool_ops system_pool_ops = {
+	.allocate = system_pool_allocate,
+	.free = system_pool_free,
+	.map_kernel = dmabuf_pool_map_kernel,
+	.unmap_kernel = dmabuf_pool_unmap_kernel,
+	.map_user = dmabuf_pool_map_user,
+	.shrink = system_pool_shrink,
+};
+
+static void system_pool_destroy_pools(struct dmabuf_page_pool **page_pools)
+{
+	int i;
+
+	for (i = 0; i < NUM_ORDERS; i++)
+		if (page_pools[i])
+			dmabuf_page_pool_destroy(page_pools[i]);
+}
+
+static int system_pool_create_pools(struct dmabuf_page_pool **page_pools)
+{
+	int i;
+	gfp_t gfp_flags = low_order_gfp_flags;
+
+	for (i = 0; i < NUM_ORDERS; i++) {
+		struct dmabuf_page_pool *pool;
+
+		gfp_flags = (orders[i] > 4) ? high_order_gfp_flags :
+					      low_order_gfp_flags;
+
+		pool = dmabuf_page_pool_create(gfp_flags, orders[i]);
+		if (!pool)
+			goto err_create_pool;
+		page_pools[i] = pool;
+	}
+	return 0;
+
+err_create_pool:
+	system_pool_destroy_pools(page_pools);
+	return -ENOMEM;
+}
+
+static struct dmabuf_pool *__system_pool_create(void)
+{
+	struct system_pool *sys_pool;
+
+	sys_pool = kzalloc(sizeof(*sys_pool), GFP_KERNEL);
+	if (!sys_pool)
+		return ERR_PTR(-ENOMEM);
+	sys_pool->pool.ops = &system_pool_ops;
+	sys_pool->pool.flags = DMABUF_POOL_FLAG_DEFER_FREE;
+
+	if (system_pool_create_pools(sys_pool->page_pools))
+		goto free_pool;
+
+	return &sys_pool->pool;
+
+free_pool:
+	kfree(sys_pool);
+	return ERR_PTR(-ENOMEM);
+}
+
+static int system_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = __system_pool_create();
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+	pool->name = "system_pool";
+
+	dmabuf_pool_add(pool);
+	return 0;
+}
+device_initcall(system_pool_create);
+
+static int system_contig_pool_allocate(struct dmabuf_pool *pool,
+					   struct dmabuf_pool_buffer *buffer,
+					   unsigned long len,
+					   unsigned long flags)
+{
+	int order = get_order(len);
+	struct page *page;
+	struct sg_table *table;
+	unsigned long i;
+	int ret;
+
+	page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
+	if (!page)
+		return -ENOMEM;
+
+	split_page(page, order);
+
+	len = PAGE_ALIGN(len);
+	for (i = len >> PAGE_SHIFT; i < (1 << order); i++)
+		__free_page(page + i);
+
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table) {
+		ret = -ENOMEM;
+		goto free_pages;
+	}
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto free_table;
+
+	sg_set_page(table->sgl, page, len, 0);
+
+	buffer->sg_table = table;
+
+	return 0;
+
+free_table:
+	kfree(table);
+free_pages:
+	for (i = 0; i < len >> PAGE_SHIFT; i++)
+		__free_page(page + i);
+
+	return ret;
+}
+
+static void system_contig_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+	struct sg_table *table = buffer->sg_table;
+	struct page *page = sg_page(table->sgl);
+	unsigned long pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
+	unsigned long i;
+
+	for (i = 0; i < pages; i++)
+		__free_page(page + i);
+	sg_free_table(table);
+	kfree(table);
+}
+
+static struct dmabuf_pool_ops kmalloc_ops = {
+	.allocate = system_contig_pool_allocate,
+	.free = system_contig_pool_free,
+	.map_kernel = dmabuf_pool_map_kernel,
+	.unmap_kernel = dmabuf_pool_unmap_kernel,
+	.map_user = dmabuf_pool_map_user,
+};
+
+static struct dmabuf_pool *__system_contig_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
+	if (!pool)
+		return ERR_PTR(-ENOMEM);
+	pool->ops = &kmalloc_ops;
+	pool->name = "system_contig_pool";
+	return pool;
+}
+
+static int system_contig_pool_create(void)
+{
+	struct dmabuf_pool *pool;
+
+	pool = __system_contig_pool_create();
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	dmabuf_pool_add(pool);
+	return 0;
+}
+device_initcall(system_contig_pool_create);
+
-- 
2.7.4


* [EARLY RFC][PATCH 4/4] dma-buf: pools: Add CMA pool to dmabuf pools
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
                   ` (2 preceding siblings ...)
  2019-02-21  7:40 ` [EARLY RFC][PATCH 3/4] dma-buf: pools: Add system/system-contig pools to dmabuf pools John Stultz
@ 2019-02-21  7:40 ` John Stultz
  2019-02-22  7:19 ` [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
  2019-02-22 16:55 ` Andrew F. Davis
  5 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-21  7:40 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F . Davis, dri-devel

This adds a CMA pool, which allows userspace to allocate
a dma-buf of contiguous memory out of a CMA region.
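
Each registered CMA area gets its own pool device, named after the CMA
region via cma_get_name() (the default area is typically just called
"reserved"). Reusing the hypothetical ALLOC ABI sketched in the
system-pool patch, with the device path again assumed for illustration:

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/dmabuf-pools.h>	/* names below are assumed */

  int cma_fd = open("/dev/reserved", O_RDWR);	/* default CMA area */
  struct dmabuf_pool_allocation_data data = {
	.len = 1024 * 1024,		/* 1MB, physically contiguous */
	.fd_flags = O_RDWR | O_CLOEXEC,
  };

  if (cma_fd >= 0 && ioctl(cma_fd, DMABUF_POOL_IOC_ALLOC, &data) == 0) {
	/* data.fd is a dma-buf backed by contiguous CMA pages */
  }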

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 drivers/dma-buf/pools/Kconfig    |   8 +++
 drivers/dma-buf/pools/Makefile   |   1 +
 drivers/dma-buf/pools/cma_pool.c | 143 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 152 insertions(+)
 create mode 100644 drivers/dma-buf/pools/cma_pool.c

diff --git a/drivers/dma-buf/pools/Kconfig b/drivers/dma-buf/pools/Kconfig
index 787b2a6..e984304 100644
--- a/drivers/dma-buf/pools/Kconfig
+++ b/drivers/dma-buf/pools/Kconfig
@@ -15,3 +15,11 @@ config DMABUF_POOLS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf pool. The system pool
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_POOLS_CMA
+	bool "DMA-BUF CMA Pool"
+	depends on DMABUF_POOLS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CMA pools. This pool is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/pools/Makefile b/drivers/dma-buf/pools/Makefile
index 2ccf2a1..ac8aa28 100644
--- a/drivers/dma-buf/pools/Makefile
+++ b/drivers/dma-buf/pools/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_POOLS)		+= dmabuf-pools.o pool-ioctl.o pool-helpers.o page_pool.o
 obj-$(CONFIG_DMABUF_POOLS_SYSTEM)	+= system_pool.o
+obj-$(CONFIG_DMABUF_POOLS_CMA)		+= cma_pool.o
diff --git a/drivers/dma-buf/pools/cma_pool.c b/drivers/dma-buf/pools/cma_pool.c
new file mode 100644
index 0000000..0bd783f
--- /dev/null
+++ b/drivers/dma-buf/pools/cma_pool.c
@@ -0,0 +1,143 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * drivers/dma-buf/pools/cma_pool.c
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
+ */
+
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/cma.h>
+#include <linux/scatterlist.h>
+#include <linux/highmem.h>
+
+#include "dmabuf-pools.h"
+
+struct cma_pool {
+	struct dmabuf_pool pool;
+	struct cma *cma;
+};
+
+#define to_cma_pool(x) container_of(x, struct cma_pool, pool)
+
+/* dmabuf pool CMA operations */
+static int cma_pool_allocate(struct dmabuf_pool *pool,
+			     struct dmabuf_pool_buffer *buffer,
+			     unsigned long len,
+			     unsigned long flags)
+{
+	struct cma_pool *cma_pool = to_cma_pool(pool);
+	struct sg_table *table;
+	struct page *pages;
+	unsigned long size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	int ret;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	pages = cma_alloc(cma_pool->cma, nr_pages, align, false);
+	if (!pages)
+		return -ENOMEM;
+
+	if (PageHighMem(pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(pages), 0, size);
+	}
+
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table)
+		goto err;
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto free_mem;
+
+	sg_set_page(table->sgl, pages, size, 0);
+
+	buffer->priv_virt = pages;
+	buffer->sg_table = table;
+	return 0;
+
+free_mem:
+	kfree(table);
+err:
+	cma_release(cma_pool->cma, pages, nr_pages);
+	return -ENOMEM;
+}
+
+static void cma_pool_free(struct dmabuf_pool_buffer *buffer)
+{
+	struct cma_pool *cma_pool = to_cma_pool(buffer->pool);
+	struct page *pages = buffer->priv_virt;
+	unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
+
+	/* release memory */
+	cma_release(cma_pool->cma, pages, nr_pages);
+	/* release sg table */
+	sg_free_table(buffer->sg_table);
+	kfree(buffer->sg_table);
+}
+
+static struct dmabuf_pool_ops cma_pool_ops = {
+	.allocate = cma_pool_allocate,
+	.free = cma_pool_free,
+	.map_user = dmabuf_pool_map_user,
+	.map_kernel = dmabuf_pool_map_kernel,
+	.unmap_kernel = dmabuf_pool_unmap_kernel,
+};
+
+static struct dmabuf_pool *__cma_pool_create(struct cma *cma)
+{
+	struct cma_pool *cma_pool;
+
+	cma_pool = kzalloc(sizeof(*cma_pool), GFP_KERNEL);
+
+	if (!cma_pool)
+		return ERR_PTR(-ENOMEM);
+
+	cma_pool->pool.ops = &cma_pool_ops;
+	/*
+	 * Remember which CMA area backs this pool; allocations in
+	 * cma_pool_allocate() will be served from it.
+	 */
+	cma_pool->cma = cma;
+	return &cma_pool->pool;
+}
+
+static int __add_cma_pools(struct cma *cma, void *data)
+{
+	struct dmabuf_pool *pool;
+
+	pool = __cma_pool_create(cma);
+	if (IS_ERR(pool))
+		return PTR_ERR(pool);
+
+	pool->name = cma_get_name(cma);
+
+	dmabuf_pool_add(pool);
+
+	return 0;
+}
+
+static int add_cma_pools(void)
+{
+	cma_for_each_area(__add_cma_pools, NULL);
+	return 0;
+}
+device_initcall(add_cma_pools);
-- 
2.7.4


* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
                   ` (3 preceding siblings ...)
  2019-02-21  7:40 ` [EARLY RFC][PATCH 4/4] dma-buf: pools: Add CMA pool " John Stultz
@ 2019-02-22  7:19 ` John Stultz
  2019-02-22 16:55 ` Andrew F. Davis
  5 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-22  7:19 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, Andrew F . Davis, dri-devel

On Wed, Feb 20, 2019 at 11:40 PM John Stultz <john.stultz@linaro.org> wrote:
>
> Here is a very early peek at my dmabuf pools patchset, which
> tries to destage a fair chunk of ION functionality.
>
> This build and boots, but I've not gotten to testing the actual
> pool devices yet (need to write some kselftests)! I just wanted
> some early feedback on the overall direction.
>
> The patchset implements per-pool devices (extending my ion
> per-heap devices patchset from last week), which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> pool.
>
> The interface is similar, but simpler then IONs, only providing
> an ALLOC ioctl.
>
> Also, I've only destaged the system/system-contig and cma pools,
> since the ION carveout and chunk heaps depended on out of tree
> board files to initialize those heaps. I'll leave that to folks
> who are actually using those heaps.
>
> Let me know what you think!

I also managed to get this validated under AOSP w/ HiKey960 today.

There were some bugs, so I've got updated patches here (on top of the
HiKey960 kernel changes; they even include the beginnings of a
kselftest):
   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-pools

And the userland changes to HiKey960's gralloc (so it can dynamically
support legacy ION, modern ION, and dmabuf pools) are here:
  https://android-review.googlesource.com/c/device/linaro/hikey/+/909436

I still need to flesh out the kselftest code some more to actually do
some validation and error testing, and do a bunch of cleanups (and no
doubt find a few more bugs), but hopefully this can now be considered
viable.

thanks
-john

* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-21  7:40 [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
                   ` (4 preceding siblings ...)
  2019-02-22  7:19 ` [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION) John Stultz
@ 2019-02-22 16:55 ` Andrew F. Davis
  2019-02-22 17:24   ` John Stultz
  5 siblings, 1 reply; 11+ messages in thread
From: Andrew F. Davis @ 2019-02-22 16:55 UTC (permalink / raw)
  To: John Stultz, Laura Abbott
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, dri-devel

On 2/21/19 1:40 AM, John Stultz wrote:
> Here is a very early peek at my dmabuf pools patchset, which
> tries to destage a fair chunk of ION functionality.
> 
> This build and boots, but I've not gotten to testing the actual
> pool devices yet (need to write some kselftests)! I just wanted
> some early feedback on the overall direction.
> 
> The patchset implements per-pool devices (extending my ion
> per-heap devices patchset from last week), which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> pool.
> 
> The interface is similar, but simpler then IONs, only providing
> an ALLOC ioctl.
> 
> Also, I've only destaged the system/system-contig and cma pools,
> since the ION carveout and chunk heaps depended on out of tree
> board files to initialize those heaps. I'll leave that to folks
> who are actually using those heaps.
> 
> Let me know what you think!
> 

+1

Was this source not pulled from -next? I have some fixes in -next that I
don't see in this code, so I won't review the code itself just yet (it
is an early RFC after all). For the concept itself I have a couple of
small suggestions:

I'm not sure I like the name. "Pool" in the context of DMA-BUF feels
like it means something else, like some new feature of DMA-BUF that
exporters/importers can use for making buffer pools. How about keeping
the "heap" terminology to prevent too much re-wording? Maybe just call
this dma-buf/heaps/ ?

Although the deferred free stuff is nice and should be available, I
don't think it needs to be part of the first set of de-staged features.
It is a bolt-on feature that can be added later, making this first
patchset simpler.

In the same way I would like to see the changes suggested in one of the
other threads implemented. Basically, let the heaps (pools?) provide
their own struct dma_buf_ops. If this is to be an extension of dma-buf
then it shouldn't force the use of its own dma_buf_ops like ION did.
Instead it should just handle the userspace exporting API. We can
always provide helpers for the basic dma_buf_ops for consistency and
code-reuse, but the heaps themselves should have full control over
if/when to use them.

It might be easier to show this all by example; I'll put together my own
rough RFC over the weekend if that is okay with you (not trying to walk
over your work here or anything, just want to show what I'm thinking in
case any of the above doesn't make sense) :)
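
In the meantime, here is a minimal hand-wavy sketch of what I mean (all
the names below are made up for illustration and don't match any real
code):

#include <linux/dma-buf.h>

struct dmabuf_heap {
	const char *name;
	/* the heap allocates *and* exports the buffer itself */
	struct dma_buf *(*allocate)(struct dmabuf_heap *heap,
				    unsigned long len,
				    unsigned long flags);
};

/*
 * A heap supplies its own dma_buf_ops where it needs special behavior
 * (the my_heap_* callbacks are heap-provided), and can fall back to
 * common helpers everywhere else:
 */
static const struct dma_buf_ops my_heap_buf_ops = {
	.map_dma_buf	= my_heap_map_dma_buf,
	.unmap_dma_buf	= my_heap_unmap_dma_buf,
	.release	= my_heap_release,
	.mmap		= dmabuf_heap_helper_mmap,	/* shared helper */
};

The core framework would then only own the chardev registration and the
ALLOC ioctl, calling heap->allocate() and handing the resulting dma-buf
fd back to userspace.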

Thanks,
Andrew

> thanks
> -john
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> 
> John Stultz (4):
>   dma-buf: Add dma-buf pools framework
>   dma-buf: pools: Add page-pool for dma-buf pools
>   dma-buf: pools: Add system/system-contig pools to dmabuf pools
>   dma-buf: pools: Add CMA pool to dmabuf pools
> 
>  MAINTAINERS                          |  13 +
>  drivers/dma-buf/Kconfig              |   2 +
>  drivers/dma-buf/Makefile             |   1 +
>  drivers/dma-buf/pools/Kconfig        |  25 ++
>  drivers/dma-buf/pools/Makefile       |   4 +
>  drivers/dma-buf/pools/cma_pool.c     | 143 ++++++++
>  drivers/dma-buf/pools/dmabuf-pools.c | 670 +++++++++++++++++++++++++++++++++++
>  drivers/dma-buf/pools/dmabuf-pools.h | 295 +++++++++++++++
>  drivers/dma-buf/pools/page_pool.c    | 157 ++++++++
>  drivers/dma-buf/pools/pool-helpers.c | 317 +++++++++++++++++
>  drivers/dma-buf/pools/pool-ioctl.c   |  94 +++++
>  drivers/dma-buf/pools/system_pool.c  | 374 +++++++++++++++++++
>  include/uapi/linux/dmabuf-pools.h    |  59 +++
>  13 files changed, 2154 insertions(+)
>  create mode 100644 drivers/dma-buf/pools/Kconfig
>  create mode 100644 drivers/dma-buf/pools/Makefile
>  create mode 100644 drivers/dma-buf/pools/cma_pool.c
>  create mode 100644 drivers/dma-buf/pools/dmabuf-pools.c
>  create mode 100644 drivers/dma-buf/pools/dmabuf-pools.h
>  create mode 100644 drivers/dma-buf/pools/page_pool.c
>  create mode 100644 drivers/dma-buf/pools/pool-helpers.c
>  create mode 100644 drivers/dma-buf/pools/pool-ioctl.c
>  create mode 100644 drivers/dma-buf/pools/system_pool.c
>  create mode 100644 include/uapi/linux/dmabuf-pools.h
> 

* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-22 16:55 ` Andrew F. Davis
@ 2019-02-22 17:24   ` John Stultz
  2019-02-22 20:45     ` John Stultz
  2019-02-22 22:30     ` Laura Abbott
  0 siblings, 2 replies; 11+ messages in thread
From: John Stultz @ 2019-02-22 17:24 UTC (permalink / raw)
  To: Andrew F. Davis; +Cc: Chenbo Feng, Alistair Strachan, Liam Mark, dri-devel

On Fri, Feb 22, 2019 at 8:55 AM Andrew F. Davis <afd@ti.com> wrote:
> On 2/21/19 1:40 AM, John Stultz wrote:
> > Here is a very early peek at my dmabuf pools patchset, which
> > tries to destage a fair chunk of ION functionality.
> >
> > This build and boots, but I've not gotten to testing the actual
> > pool devices yet (need to write some kselftests)! I just wanted
> > some early feedback on the overall direction.
> >
> > The patchset implements per-pool devices (extending my ion
> > per-heap devices patchset from last week), which can be opened
> > directly and then an ioctl is used to allocate a dmabuf from the
> > pool.
> >
> > The interface is similar, but simpler then IONs, only providing
> > an ALLOC ioctl.
> >
> > Also, I've only destaged the system/system-contig and cma pools,
> > since the ION carveout and chunk heaps depended on out of tree
> > board files to initialize those heaps. I'll leave that to folks
> > who are actually using those heaps.
> >
> > Let me know what you think!
> >
>
> +1
>
> Was this source not pulled from -next? I have some fixes in -next that I
> don't see in this code, so I won't review the code itself just yet (it
> is an early RFC after all). For the concept itself I have a couple of
> small suggestions:

Oh, no, I've missed those. I was working off -rc7. I'll try to
re-integrate them in.

> I'm not sure I like the name. "Pool" in the context of DMA-BUF feels
> like it means something else, like some new feature of DMA-BUF that
> exporters/importers can use for making buffer pools. How about keeping
> the "heap" terminology to prevent too much re-wording? Maybe just call
> this dma-buf/heaps/ ?

The name change was mostly because, as Laura noted, the term heap has
caused confusion historically. I'm not really particular, though I did
worry that the naming overlap between dmabuf-pools and the page-pool
code was problematic. Due to that overlap, renaming things back will
be a small chore, but I've got only myself to blame there :)


> Although the deferred free stuff is nice and should be available, I
> don't think it needs to be part of the first set of de-staged features.
> It is a bolt-on feature that can be added later, making this first
> patchset simpler.

Yea, I tried to split that out, but I included it as I didn't really
want to do the surgery on the system heap to remove it right off.

> In the same way I would like to see the changes suggested in one of the
> other threads implemented. Basically, let the heaps (pools?) provide
> their own struct dma_buf_ops. If this is to be an extension of dma-buf
> then it shouldn't force the use of its own dma_buf_ops like ION did.

Yea. For the most part, the primary goal of my efforts is just to get
a userland ABI established that folks can agree on. Letting the
heaps have their own dma_buf_ops sounds reasonable, but that's also
something that hopefully won't have userspace impact, so it can be done
at any point.

> Instead it should just handle the userspace exporting API. We can
> always provide helpers for the basic dma_buf_ops for consistency and
> code-reuse, but the heaps themselves should have full control over
> if/when to use them.

Sounds nice.

> It might be easier to show this all by example; I'll put together my own
> rough RFC over the weekend if that is okay with you (not trying to walk
> over your work here or anything, just want to show what I'm thinking in
> case any of the above doesn't make sense) :)

Please do! I just had a quiet last few days, and after seeing your
earlier patch figured I should do more than just handwave at it, so we
can make some progress and get it out of staging/changing-ABI limbo.

thanks
-john

* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-22 17:24   ` John Stultz
@ 2019-02-22 20:45     ` John Stultz
  2019-02-23  6:21       ` John Stultz
  2019-02-22 22:30     ` Laura Abbott
  1 sibling, 1 reply; 11+ messages in thread
From: John Stultz @ 2019-02-22 20:45 UTC (permalink / raw)
  To: Andrew F. Davis; +Cc: Chenbo Feng, Alistair Strachan, Liam Mark, dri-devel

On Fri, Feb 22, 2019 at 9:24 AM John Stultz <john.stultz@linaro.org> wrote:
>
> On Fri, Feb 22, 2019 at 8:55 AM Andrew F. Davis <afd@ti.com> wrote:
> > On 2/21/19 1:40 AM, John Stultz wrote:
> > > Here is a very early peek at my dmabuf pools patchset, which
> > > tries to destage a fair chunk of ION functionality.
> > >
> > > This build and boots, but I've not gotten to testing the actual
> > > pool devices yet (need to write some kselftests)! I just wanted
> > > some early feedback on the overall direction.
> > >
> > > The patchset implements per-pool devices (extending my ion
> > > per-heap devices patchset from last week), which can be opened
> > > directly and then an ioctl is used to allocate a dmabuf from the
> > > pool.
> > >
> > > The interface is similar, but simpler then IONs, only providing
> > > an ALLOC ioctl.
> > >
> > > Also, I've only destaged the system/system-contig and cma pools,
> > > since the ION carveout and chunk heaps depended on out of tree
> > > board files to initialize those heaps. I'll leave that to folks
> > > who are actually using those heaps.
> > >
> > > Let me know what you think!
> > >
> >
> > +1
> >
> > Was this source not pulled from -next? I have some fixes in -next that I
> > don't see in this code, so I won't review the code itself just yet (it
> > is an early RFC after all). For the concept itself I have a couple of
> > small suggestions:
>
> Oh, no, I've missed those. I was working off -rc7. I'll try to
> re-integrate them in.
>
> > I'm not sure I like the name. "Pool" in the context of DMA-BUF feels
> > like it means something else, like some new feature of DMA-BUF that
> > exporters/importers can use for making buffer pools. How about keeping
> > the "heap" terminology to prevent too much re-wording? Maybe just call
> > this dma-buf/heaps/ ?
>
> The name change was mostly because, as Laura noted, the term heap has
> caused confusion historically. I'm not really particular, though I did
> worry that the naming overlap between dmabuf-pools and the page-pool
> code was problematic. Due to that overlap, renaming things back will
> be a small chore, but I've got only myself to blame there :)

Ok, I've renamed things back to heaps, and updated the patches here
(sorry, I didn't rename the git branch :):
  kernel: https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-pools
  userland: https://android-review.googlesource.com/c/device/linaro/hikey/+/909436

I'll get to integrating your changes in -next, and see about splitting
the page pool/deferred freeing logic out to the end here soon.

thanks
-john

* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-22 17:24   ` John Stultz
  2019-02-22 20:45     ` John Stultz
@ 2019-02-22 22:30     ` Laura Abbott
  1 sibling, 0 replies; 11+ messages in thread
From: Laura Abbott @ 2019-02-22 22:30 UTC (permalink / raw)
  To: John Stultz, Andrew F. Davis
  Cc: Chenbo Feng, Alistair Strachan, Liam Mark, dri-devel

On 2/22/19 9:24 AM, John Stultz wrote:
> On Fri, Feb 22, 2019 at 8:55 AM Andrew F. Davis <afd@ti.com> wrote:
>> On 2/21/19 1:40 AM, John Stultz wrote:
>>> Here is a very early peek at my dmabuf pools patchset, which
>>> tries to destage a fair chunk of ION functionality.
>>>
>>> This build and boots, but I've not gotten to testing the actual
>>> pool devices yet (need to write some kselftests)! I just wanted
>>> some early feedback on the overall direction.
>>>
>>> The patchset implements per-pool devices (extending my ion
>>> per-heap devices patchset from last week), which can be opened
>>> directly and then an ioctl is used to allocate a dmabuf from the
>>> pool.
>>>
>>> The interface is similar, but simpler then IONs, only providing
>>> an ALLOC ioctl.
>>>
>>> Also, I've only destaged the system/system-contig and cma pools,
>>> since the ION carveout and chunk heaps depended on out of tree
>>> board files to initialize those heaps. I'll leave that to folks
>>> who are actually using those heaps.
>>>
>>> Let me know what you think!
>>>
>>
>> +1
>>
>> Was this source not pulled from -next? I have some fixes in -next that I
>> don't see in this code, so I won't review the code itself just yet (it
>> is an early RFC after all). For the concept itself I have a couple of
>> small suggestions:
> 
> Oh, no, I've missed those. I was working off -rc7. I'll try to
> re-integrate them in.
> 
>> I'm not sure I like the name. "Pool" in the context of DMA-BUF feels
>> like it means something else, like some new feature of DMA-BUF that
>> exporters/importers can use for making buffer pools. How about keeping
>> the "heap" terminology to prevent too much re-wording? Maybe just call
>> this dma-buf/heaps/ ?
> 
> The name change was mostly because, as Laura noted, the term heap has
> caused confusion historically. I'm not really particular, though I did
> worry that the naming overlap between dmabuf-pools and the page-pool
> code was problematic. Due to that overlap, renaming things back will
> be a small chore, but I've got only myself to blame there :)
> 
> 

Yeah, I'm not set on changing the names. If everyone else finds
heap to be descriptive enough, we can keep it.

Thanks,
Laura

* Re: [EARLY RFC][PATCH 0/4] dmabuf pools infrastructure (destaging ION)
  2019-02-22 20:45     ` John Stultz
@ 2019-02-23  6:21       ` John Stultz
  0 siblings, 0 replies; 11+ messages in thread
From: John Stultz @ 2019-02-23  6:21 UTC (permalink / raw)
  To: Andrew F. Davis; +Cc: Chenbo Feng, Alistair Strachan, Liam Mark, dri-devel

On Fri, Feb 22, 2019 at 12:45 PM John Stultz <john.stultz@linaro.org> wrote:
> Ok, I've renamed things back to heaps, and updated the patches here
> (sorry, I didn't rename the git branch :):
>   kernel: https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-pools
>   userland: https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
>
> I'll get to integrating your changes in -next, and see about splitting
> the page pool/deferred freeing logic out to the end here soon.

Ok, I've gone through the -next changes and ported them over.

I've also split out the shrinker/deferred freeing and page-pool logic
into separate patches at the end (so we can skip them or include
them).

Though, while I have done some initial validation, I'd trust the
page-pool version of the system heap code (which is closer to the more
widely tested ION version) over my pared-down version (which roughly
implements the original 3.4-era ION system heap).

I've updated my tree here:
   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-pools

I still need to rework/improve the commit logs, and will probably wait
to see what Andrew is working on before I resend it, but I wanted to
share in case anyone wants to check it out.

thanks
-john
