linux-kernel.vger.kernel.org archive mirror
* [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
@ 2019-03-05 20:54 John Stultz
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
                   ` (7 more replies)
  0 siblings, 8 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Andrew F . Davis,
	Chenbo Feng, Alistair Strachan, dri-devel

Here is an initial RFC of the dma-buf heaps patchset Andrew and I
have been working on, which tries to destage a fair chunk of ION
functionality.

The patchset implements per-heap devices, which can be opened
directly; an ioctl is then used to allocate a dmabuf from the
heap.

The interface is similar to ION's, but much simpler, providing
only an ALLOC ioctl.

Also, I've provided simple system and cma heaps. The system
heap in particular is missing the page-pool optimizations ION
had, but works well enough to validate the interface.
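
To make that flow concrete, here is a minimal userspace sketch (built
against the uapi header from patch 1; the heap name and the lack of
error reporting here are just for illustration):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/dma-heap.h>	/* uapi header added in patch 1 */

  /* allocate one page from the system heap; returns a dma-buf fd or -1 */
  static int alloc_from_system_heap(void)
  {
  	struct dma_heap_allocation_data data = { .len = 4096, .flags = 0 };
  	int heap_fd, dmabuf_fd = -1;

  	/* each heap gets its own device node under /dev/dma_heap/ */
  	heap_fd = open("/dev/dma_heap/system_heap", O_RDWR);
  	if (heap_fd < 0)
  		return -1;

  	/* ALLOC is the only ioctl; on success .fd holds the new dma-buf fd */
  	if (ioctl(heap_fd, DMA_HEAP_IOC_ALLOC, &data) == 0)
  		dmabuf_fd = data.fd;

  	close(heap_fd);
  	return dmabuf_fd;
  }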

I've booted and tested these patches with AOSP on the HiKey960
using the kernel tree here:
  https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap

And the userspace changes here:
  https://android-review.googlesource.com/c/device/linaro/hikey/+/909436


Compared to ION, this patchset is missing the system-contig,
carveout and chunk heaps, as I don't have a device that uses
those, so I'm unable to do much useful validation there.
Additionally, we have no upstream users of chunk or carveout,
and system-contig has been deprecated in the common/android-*
kernels, so this should be OK.

I've also removed the stats accounting for now, since it should
be implemented by the heaps themselves.

Eventual TODOs:
* Reimplement page-pool for system heap (working on this)
* Add stats accounting to system/cma heaps
* Make the kselftest actually useful
* Add other heaps folks see as useful (would love to get
  some help from actual carveout/chunk users)!

That said, the main user interface is shaping up, and I wanted
to get some input on the device model (particularly from Greg KH)
and any other API/ABI-specific input.

thanks
-john

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org

Andrew F. Davis (1):
  dma-buf: Add dma-buf heaps framework

John Stultz (4):
  dma-buf: heaps: Add heap helpers
  dma-buf: heaps: Add system heap to dmabuf heaps
  dma-buf: heaps: Add CMA heap to dmabuf heaps
  kselftests: Add dma-heap test

 MAINTAINERS                                        |  16 +
 drivers/dma-buf/Kconfig                            |  10 +
 drivers/dma-buf/Makefile                           |   2 +
 drivers/dma-buf/dma-heap.c                         | 191 ++++++++++++
 drivers/dma-buf/heaps/Kconfig                      |  14 +
 drivers/dma-buf/heaps/Makefile                     |   4 +
 drivers/dma-buf/heaps/cma_heap.c                   | 164 ++++++++++
 drivers/dma-buf/heaps/heap-helpers.c               | 335 +++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h               |  48 +++
 drivers/dma-buf/heaps/system_heap.c                | 132 ++++++++
 include/linux/dma-heap.h                           |  65 ++++
 include/uapi/linux/dma-heap.h                      |  52 ++++
 tools/testing/selftests/dmabuf-heaps/Makefile      |  11 +
 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c |  96 ++++++
 14 files changed, 1140 insertions(+)
 create mode 100644 drivers/dma-buf/dma-heap.c
 create mode 100644 drivers/dma-buf/heaps/Kconfig
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
 create mode 100644 drivers/dma-buf/heaps/system_heap.c
 create mode 100644 include/linux/dma-heap.h
 create mode 100644 include/uapi/linux/dma-heap.h
 create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
 create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c

-- 
2.7.4



* [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
@ 2019-03-05 20:54 ` John Stultz
  2019-03-06 16:12   ` Benjamin Gaignard
                     ` (5 more replies)
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
                   ` (6 subsequent siblings)
  7 siblings, 6 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: Andrew F. Davis, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
	Alistair Strachan, dri-devel, John Stultz

From: "Andrew F. Davis" <afd@ti.com>

This framework provides a unified userspace interface for dma-buf
exporters, allowing userland to allocate specific types of
memory for use in dma-buf sharing.

Each heap is given its own device node, from which a user can
allocate a dma-buf fd using the DMA_HEAP_IOC_ALLOC ioctl.

This code is an evolution of the Android ION implementation,
and a big thanks is due to its authors/maintainers over time
for their effort:
  Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
  Laura Abbott, and many other contributors!
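
For reference, wiring a new exporter into this framework amounts to
filling in a struct dma_heap and calling dma_heap_add() -- a minimal
sketch against the API added below ("example" is a placeholder name,
and the allocate() body is elided):

  #include <linux/dma-heap.h>
  #include <linux/errno.h>
  #include <linux/init.h>

  static int example_heap_allocate(struct dma_heap *heap,
  				 unsigned long len, unsigned long flags)
  {
  	/* allocate backing memory, wrap it in a dma-buf ... */
  	return -ENOMEM;	/* ... and return the new dma-buf fd on success */
  }

  static struct dma_heap_ops example_heap_ops = {
  	.allocate = example_heap_allocate,
  };

  static struct dma_heap example_heap = {
  	.name	= "example",	/* becomes /dev/dma_heap/example */
  	.ops	= &example_heap_ops,
  };

  static int __init example_heap_init(void)
  {
  	return dma_heap_add(&example_heap);
  }
  device_initcall(example_heap_init);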

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Andrew F. Davis <afd@ti.com>
[jstultz: reworded commit message, and lots of cleanups]
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Folded down fixes I had previously shared in implementing
  heaps
* Make flags a u64 (Suggested by Laura)
* Add PAGE_ALIGN() fix to the core alloc function
* IOCTL fixups suggested by Brian
* Added fixes suggested by Benjamin
* Removed core stats mgmt, as that should be implemented by
  per-heap code
* Changed alloc to return a dma-buf fd, rather than a buffer
  (as it simplifies error handling)
---
 MAINTAINERS                   |  16 ++++
 drivers/dma-buf/Kconfig       |   8 ++
 drivers/dma-buf/Makefile      |   1 +
 drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
 include/linux/dma-heap.h      |  65 ++++++++++++++
 include/uapi/linux/dma-heap.h |  52 ++++++++++++
 6 files changed, 333 insertions(+)
 create mode 100644 drivers/dma-buf/dma-heap.c
 create mode 100644 include/linux/dma-heap.h
 create mode 100644 include/uapi/linux/dma-heap.h

diff --git a/MAINTAINERS b/MAINTAINERS
index ac2e518..a661e19 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4621,6 +4621,22 @@ F:	include/linux/*fence.h
 F:	Documentation/driver-api/dma-buf.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
+DMA-BUF HEAPS FRAMEWORK
+M:	Laura Abbott <labbott@redhat.com>
+R:	Liam Mark <lmark@codeaurora.org>
+R:	Brian Starkey <Brian.Starkey@arm.com>
+R:	"Andrew F. Davis" <afd@ti.com>
+R:	John Stultz <john.stultz@linaro.org>
+S:	Maintained
+L:	linux-media@vger.kernel.org
+L:	dri-devel@lists.freedesktop.org
+L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+F:	include/uapi/linux/dma-heap.h
+F:	include/linux/dma-heap.h
+F:	drivers/dma-buf/dma-heap.c
+F:	drivers/dma-buf/heaps/*
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
 M:	Vinod Koul <vkoul@kernel.org>
 L:	dmaengine@vger.kernel.org
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 2e5a0fa..09c61db 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,12 @@ config UDMABUF
 	  A driver to let userspace turn memfd regions into dma-bufs.
 	  Qemu can use this to create host dmabufs for guest framebuffers.
 
+menuconfig DMABUF_HEAPS
+	bool "DMA-BUF Userland Memory Heaps"
+	select DMA_SHARED_BUFFER
+	help
+	  Choose this option to enable the DMA-BUF userland memory heaps,
+	  this allows userspace to allocate dma-bufs that can be shared between
+	  drivers.
+
 endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 0913a6c..b0332f1 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
new file mode 100644
index 0000000..14b3975
--- /dev/null
+++ b/drivers/dma-buf/dma-heap.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Framework for userspace DMA-BUF allocations
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/idr.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/dma-heap.h>
+#include <uapi/linux/dma-heap.h>
+
+#define DEVNAME "dma_heap"
+
+#define NUM_HEAP_MINORS 128
+static DEFINE_IDR(dma_heap_idr);
+static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
+
+dev_t dma_heap_devt;
+struct class *dma_heap_class;
+struct list_head dma_heap_list;
+struct dentry *dma_heap_debug_root;
+
+static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+				 unsigned int flags)
+{
+	len = PAGE_ALIGN(len);
+	if (!len)
+		return -EINVAL;
+
+	return heap->ops->allocate(heap, len, flags);
+}
+
+static int dma_heap_open(struct inode *inode, struct file *filp)
+{
+	struct dma_heap *heap;
+
+	mutex_lock(&minor_lock);
+	heap = idr_find(&dma_heap_idr, iminor(inode));
+	mutex_unlock(&minor_lock);
+	if (!heap) {
+		pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
+		return -ENODEV;
+	}
+
+	/* instance data as context */
+	filp->private_data = heap;
+	nonseekable_open(inode, filp);
+
+	return 0;
+}
+
+static int dma_heap_release(struct inode *inode, struct file *filp)
+{
+	filp->private_data = NULL;
+
+	return 0;
+}
+
+static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
+			   unsigned long arg)
+{
+	switch (cmd) {
+	case DMA_HEAP_IOC_ALLOC:
+	{
+		struct dma_heap_allocation_data heap_allocation;
+		struct dma_heap *heap = filp->private_data;
+		int fd;
+
+		if (copy_from_user(&heap_allocation, (void __user *)arg,
+				   sizeof(heap_allocation)))
+			return -EFAULT;
+
+		if (heap_allocation.fd ||
+		    heap_allocation.reserved0 ||
+		    heap_allocation.reserved1 ||
+		    heap_allocation.reserved2) {
+			pr_warn_once("dma_heap: ioctl data not valid\n");
+			return -EINVAL;
+		}
+		if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
+			pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
+			return -EINVAL;
+		}
+
+		fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
+					   heap_allocation.flags);
+		if (fd < 0)
+			return fd;
+
+		heap_allocation.fd = fd;
+
+		if (copy_to_user((void __user *)arg, &heap_allocation,
+				 sizeof(heap_allocation)))
+			return -EFAULT;
+
+		break;
+	}
+	default:
+		return -ENOTTY;
+	}
+
+	return 0;
+}
+
+static const struct file_operations dma_heap_fops = {
+	.owner          = THIS_MODULE,
+	.open		= dma_heap_open,
+	.release	= dma_heap_release,
+	.unlocked_ioctl = dma_heap_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= dma_heap_ioctl,
+#endif
+};
+
+int dma_heap_add(struct dma_heap *heap)
+{
+	struct device *dev_ret;
+	int ret;
+
+	if (!heap->name || !strcmp(heap->name, "")) {
+		pr_err("dma_heap: Cannot add heap without a name\n");
+		return -EINVAL;
+	}
+
+	if (!heap->ops || !heap->ops->allocate) {
+		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
+		return -EINVAL;
+	}
+
+	/* Find unused minor number */
+	mutex_lock(&minor_lock);
+	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
+	mutex_unlock(&minor_lock);
+	if (ret < 0) {
+		pr_err("dma_heap: Unable to get minor number for heap\n");
+		return ret;
+	}
+	heap->minor = ret;
+
+	/* Create device */
+	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
+	dev_ret = device_create(dma_heap_class,
+				NULL,
+				heap->heap_devt,
+				NULL,
+				heap->name);
+	if (IS_ERR(dev_ret)) {
+		pr_err("dma_heap: Unable to create char device\n");
+		return PTR_ERR(dev_ret);
+	}
+
+	/* Add device */
+	cdev_init(&heap->heap_cdev, &dma_heap_fops);
+	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
+	if (ret < 0) {
+		device_destroy(dma_heap_class, heap->heap_devt);
+		pr_err("dma_heap: Unable to add char device\n");
+		return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_heap_add);
+
+static int dma_heap_init(void)
+{
+	int ret;
+
+	ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
+	if (ret)
+		return ret;
+
+	dma_heap_class = class_create(THIS_MODULE, DEVNAME);
+	if (IS_ERR(dma_heap_class)) {
+		unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
+		return PTR_ERR(dma_heap_class);
+	}
+
+	return 0;
+}
+subsys_initcall(dma_heap_init);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
new file mode 100644
index 0000000..ed86a8e
--- /dev/null
+++ b/include/linux/dma-heap.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Allocation Infrastructure
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _DMA_HEAPS_H
+#define _DMA_HEAPS_H
+
+#include <linux/cdev.h>
+#include <linux/types.h>
+
+/**
+ * struct dma_heap_buffer - metadata for a particular buffer
+ * @heap:		back pointer to the heap the buffer came from
+ * @dmabuf:		backing dma-buf for this buffer
+ * @size:		size of the buffer
+ * @flags:		buffer specific flags
+ */
+struct dma_heap_buffer {
+	struct dma_heap *heap;
+	struct dma_buf *dmabuf;
+	size_t size;
+	unsigned long flags;
+};
+
+/**
+ * struct dma_heap - represents a dmabuf heap in the system
+ * @name:		used for debugging/device-node name
+ * @ops:		ops struct for this heap
+ * @minor		minor number of this heap device
+ * @heap_devt		heap device node
+ * @heap_cdev		heap char device
+ *
+ * Represents a heap of memory from which buffers can be made.
+ */
+struct dma_heap {
+	const char *name;
+	struct dma_heap_ops *ops;
+	unsigned int minor;
+	dev_t heap_devt;
+	struct cdev heap_cdev;
+};
+
+/**
+ * struct dma_heap_ops - ops to operate on a given heap
+ * @allocate:		allocate dmabuf and return fd
+ *
+ * allocate returns dmabuf fd  on success, -errno on error.
+ */
+struct dma_heap_ops {
+	int (*allocate)(struct dma_heap *heap,
+			unsigned long len,
+			unsigned long flags);
+};
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @heap:		the heap to add
+ */
+int dma_heap_add(struct dma_heap *heap);
+
+#endif /* _DMA_HEAPS_H */
diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
new file mode 100644
index 0000000..75c5d3f
--- /dev/null
+++ b/include/uapi/linux/dma-heap.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Userspace API
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _UAPI_LINUX_DMABUF_POOL_H
+#define _UAPI_LINUX_DMABUF_POOL_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+/**
+ * DOC: DMABUF Heaps Userspace API
+ *
+ */
+
+/* Currently no flags */
+#define DMA_HEAP_VALID_FLAGS (0)
+
+/**
+ * struct dma_heap_allocation_data - metadata passed from userspace for
+ *                                      allocations
+ * @len:		size of the allocation
+ * @flags:		flags passed to pool
+ * @fd:			will be populated with a fd which provides the
+ *			handle to the allocated dma-buf
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dma_heap_allocation_data {
+	__u64 len;
+	__u64 flags;
+	__u32 fd;
+	__u32 reserved0;
+	__u32 reserved1;
+	__u32 reserved2;
+};
+
+#define DMA_HEAP_IOC_MAGIC		'H'
+
+/**
+ * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from pool
+ *
+ * Takes a dma_heap_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMA_HEAP_IOC_ALLOC	_IOWR(DMA_HEAP_IOC_MAGIC, 0, \
+				      struct dma_heap_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */
-- 
2.7.4



* [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
@ 2019-03-05 20:54 ` John Stultz
  2019-03-13 20:18   ` Liam Mark
                     ` (3 more replies)
  2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
                   ` (5 subsequent siblings)
  7 siblings, 4 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Andrew F . Davis,
	Chenbo Feng, Alistair Strachan, dri-devel

Add generic helper dmabuf ops for dma heaps, so we can reduce
the amount of duplicative code for the exported dmabufs.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
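
As a rough sketch of the intended usage (this is the pattern the system
heap in the next patch follows), a heap's allocate() callback wraps its
per-buffer state in a heap_helper_buffer, initializes it with
INIT_HEAP_HELPER_BUFFER(), and exports the dma-buf through the shared
heap_helper_ops. Here heap/len/flags are the allocate() arguments, table
is the sg_table the heap filled in, and my_heap_free() is a placeholder
free callback; error handling is elided:

  	struct heap_helper_buffer *buffer;
  	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
  	struct dma_buf *dmabuf;

  	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
  	INIT_HEAP_HELPER_BUFFER(buffer, my_heap_free);
  	buffer->heap_buffer.heap = heap;
  	buffer->heap_buffer.size = len;
  	buffer->heap_buffer.flags = flags;
  	buffer->sg_table = table;		/* pages the heap allocated */

  	/* export through the generic dma_buf_ops provided by the helpers */
  	exp_info.ops = &heap_helper_ops;
  	exp_info.size = len;
  	exp_info.flags = O_RDWR;
  	exp_info.priv = &buffer->heap_buffer;
  	dmabuf = dma_buf_export(&exp_info);
  	buffer->heap_buffer.dmabuf = dmabuf;

  	return dma_buf_fd(dmabuf, O_CLOEXEC);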

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Removed cache management performance hack that I had
  accidentally folded in.
* Removed stats code that was in helpers
* Lots of checkpatch cleanups
---
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/heaps/Makefile       |   2 +
 drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
 4 files changed, 386 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index b0332f1..09c2f2d 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,4 +1,5 @@
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
 obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
new file mode 100644
index 0000000..de49898
--- /dev/null
+++ b/drivers/dma-buf/heaps/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y					+= heap-helpers.o
diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
new file mode 100644
index 0000000..ae5e9d0
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.c
@@ -0,0 +1,335 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/idr.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <uapi/linux/dma-heap.h>
+
+#include "heap-helpers.h"
+
+
+static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
+{
+	struct scatterlist *sg;
+	int i, j;
+	void *vaddr;
+	pgprot_t pgprot;
+	struct sg_table *table = buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
+	struct page **pages = vmalloc(array_size(npages,
+						 sizeof(struct page *)));
+	struct page **tmp = pages;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	pgprot = PAGE_KERNEL;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
+		struct page *page = sg_page(sg);
+
+		WARN_ON(i >= npages);
+		for (j = 0; j < npages_this_entry; j++)
+			*(tmp++) = page++;
+	}
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int dma_heap_map_user(struct heap_helper_buffer *buffer,
+			 struct vm_area_struct *vma)
+{
+	struct sg_table *table = buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
+	struct scatterlist *sg;
+	int i;
+	int ret;
+
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page = sg_page(sg);
+		unsigned long remainder = vma->vm_end - addr;
+		unsigned long len = sg->length;
+
+		if (offset >= sg->length) {
+			offset -= sg->length;
+			continue;
+		} else if (offset) {
+			page += offset / PAGE_SIZE;
+			len = sg->length - offset;
+			offset = 0;
+		}
+		len = min(len, remainder);
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += len;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+
+	return 0;
+}
+
+
+void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	if (buffer->kmap_cnt > 0) {
+		pr_warn_once("%s: buffer still mapped in the kernel\n",
+			     __func__);
+		vunmap(buffer->vaddr);
+	}
+
+	buffer->free(buffer);
+}
+
+static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+
+	if (buffer->kmap_cnt) {
+		buffer->kmap_cnt++;
+		return buffer->vaddr;
+	}
+	vaddr = dma_heap_map_kernel(buffer);
+	if (WARN_ONCE(!vaddr,
+		      "heap->ops->map_kernel should return ERR_PTR on error"))
+		return ERR_PTR(-EINVAL);
+	if (IS_ERR(vaddr))
+		return vaddr;
+	buffer->vaddr = vaddr;
+	buffer->kmap_cnt++;
+	return vaddr;
+}
+
+static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
+{
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	buffer->kmap_cnt--;
+	if (!buffer->kmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+}
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		memcpy(new_sg, sg, sizeof(*sg));
+		new_sg->dma_address = 0;
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static void free_duped_table(struct sg_table *table)
+{
+	sg_free_table(table);
+	kfree(table);
+}
+
+struct dma_heaps_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+};
+
+static int dma_heap_attach(struct dma_buf *dmabuf,
+			      struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a;
+	struct sg_table *table;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void dma_heap_detach(struct dma_buf *dmabuf,
+				struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+	free_duped_table(a->table);
+
+	kfree(a);
+}
+
+static struct sg_table *dma_heap_map_dma_buf(
+					struct dma_buf_attachment *attachment,
+					enum dma_data_direction direction)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct sg_table *table;
+
+	table = a->table;
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction))
+		table = ERR_PTR(-ENOMEM);
+	return table;
+}
+
+static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+			      struct sg_table *table,
+			      enum dma_data_direction direction)
+{
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+}
+
+static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	/* now map it to userspace */
+	ret = dma_heap_map_user(buffer, vma);
+	mutex_unlock(&buffer->lock);
+
+	if (ret)
+		pr_err("%s: failure mapping buffer to userspace\n",
+		       __func__);
+
+	return ret;
+}
+
+static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct dma_heap_buffer *buffer = dmabuf->priv;
+
+	dma_heap_buffer_destroy(buffer);
+}
+
+static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
+					unsigned long offset)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+	return buffer->vaddr + offset * PAGE_SIZE;
+}
+
+static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
+					unsigned long offset,
+					void *ptr)
+{
+}
+
+static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	void *vaddr;
+	struct dma_heaps_attachment *a;
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
+	if (IS_ERR(vaddr)) {
+		ret = PTR_ERR(vaddr);
+		goto unlock;
+	}
+	mutex_unlock(&buffer->lock);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
+				    direction);
+	}
+
+unlock:
+	mutex_unlock(&buffer->lock);
+	return ret;
+}
+
+static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+				      enum dma_data_direction direction)
+{
+	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+	struct dma_heaps_attachment *a;
+
+	mutex_lock(&buffer->lock);
+	dma_heap_buffer_kmap_put(heap_buffer);
+	mutex_unlock(&buffer->lock);
+
+	mutex_lock(&buffer->lock);
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
+				       direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+const struct dma_buf_ops heap_helper_ops = {
+	.map_dma_buf = dma_heap_map_dma_buf,
+	.unmap_dma_buf = dma_heap_unmap_dma_buf,
+	.mmap = dma_heap_mmap,
+	.release = dma_heap_dma_buf_release,
+	.attach = dma_heap_attach,
+	.detach = dma_heap_detach,
+	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
+	.map = dma_heap_dma_buf_kmap,
+	.unmap = dma_heap_dma_buf_kunmap,
+};
diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
new file mode 100644
index 0000000..0bd8643
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps helper code
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _HEAP_HELPERS_H
+#define _HEAP_HELPERS_H
+
+#include <linux/dma-heap.h>
+#include <linux/list.h>
+
+struct heap_helper_buffer {
+	struct dma_heap_buffer heap_buffer;
+
+	unsigned long private_flags;
+	void *priv_virt;
+	struct mutex lock;
+	int kmap_cnt;
+	void *vaddr;
+	struct sg_table *sg_table;
+	struct list_head attachments;
+
+	void (*free)(struct heap_helper_buffer *buffer);
+
+};
+
+#define to_helper_buffer(x) \
+	container_of(x, struct heap_helper_buffer, heap_buffer)
+
+static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
+				 void (*free)(struct heap_helper_buffer *))
+{
+	buffer->private_flags = 0;
+	buffer->priv_virt = NULL;
+	mutex_init(&buffer->lock);
+	buffer->kmap_cnt = 0;
+	buffer->vaddr = NULL;
+	buffer->sg_table = NULL;
+	INIT_LIST_HEAD(&buffer->attachments);
+	buffer->free = free;
+}
+
+extern const struct dma_buf_ops heap_helper_ops;
+
+#endif /* _HEAP_HELPERS_H */
-- 
2.7.4



* [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
@ 2019-03-05 20:54 ` John Stultz
  2019-03-06 16:01   ` Benjamin Gaignard
                     ` (2 more replies)
  2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps John Stultz
                   ` (4 subsequent siblings)
  7 siblings, 3 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Andrew F . Davis,
	Chenbo Feng, Alistair Strachan, dri-devel

This patch adds a system heap to the dma-buf heaps framework.

This allows applications to get a page-allocator backed dma-buf
for non-contiguous memory.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
* Dropped dead system-contig code
---
 drivers/dma-buf/Kconfig             |   2 +
 drivers/dma-buf/heaps/Kconfig       |   6 ++
 drivers/dma-buf/heaps/Makefile      |   1 +
 drivers/dma-buf/heaps/system_heap.c | 132 ++++++++++++++++++++++++++++++++++++
 4 files changed, 141 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Kconfig
 create mode 100644 drivers/dma-buf/heaps/system_heap.c

diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 09c61db..63c139d 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS
 	  this allows userspace to allocate dma-bufs that can be shared between
 	  drivers.
 
+source "drivers/dma-buf/heaps/Kconfig"
+
 endmenu
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
new file mode 100644
index 0000000..2050527
--- /dev/null
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -0,0 +1,6 @@
+config DMABUF_HEAPS_SYSTEM
+	bool "DMA-BUF System Heap"
+	depends on DMABUF_HEAPS
+	help
+	  Choose this option to enable the system dmabuf heap. The system heap
+	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index de49898..d1808ec 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y					+= heap-helpers.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
new file mode 100644
index 0000000..e001661
--- /dev/null
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF System heap exporter
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <asm/page.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+#include "heap-helpers.h"
+
+
+struct system_heap {
+	struct dma_heap heap;
+};
+
+
+static void system_heap_free(struct heap_helper_buffer *buffer)
+{
+	int i;
+	struct scatterlist *sg;
+	struct sg_table *table = buffer->sg_table;
+
+	for_each_sg(table->sgl, sg, table->nents, i)
+		__free_page(sg_page(sg));
+
+	sg_free_table(table);
+	kfree(table);
+	kfree(buffer);
+}
+
+static int system_heap_allocate(struct dma_heap *heap,
+				unsigned long len,
+				unsigned long flags)
+{
+	struct heap_helper_buffer *helper_buffer;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i, j;
+	int npages = PAGE_ALIGN(len) / PAGE_SIZE;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free);
+	helper_buffer->heap_buffer.flags = flags;
+	helper_buffer->heap_buffer.heap = heap;
+	helper_buffer->heap_buffer.size = len;
+
+	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
+	if (!table)
+		goto err0;
+
+	i = sg_alloc_table(table, npages, GFP_KERNEL);
+	if (i)
+		goto err1;
+	for_each_sg(table->sgl, sg, table->nents, i) {
+		struct page *page;
+
+		page = alloc_page(GFP_KERNEL);
+		if (!page)
+			goto err2;
+		sg_set_page(sg, page, PAGE_SIZE, 0);
+	}
+
+	/* create the dmabuf */
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = len;
+	exp_info.flags = O_RDWR;
+	exp_info.priv = &helper_buffer->heap_buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err2;
+	}
+
+	helper_buffer->heap_buffer.dmabuf = dmabuf;
+	helper_buffer->sg_table = table;
+
+	ret = dma_buf_fd(dmabuf, O_CLOEXEC);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+err2:
+	for_each_sg(table->sgl, sg, i, j)
+		__free_page(sg_page(sg));
+	sg_free_table(table);
+err1:
+	kfree(table);
+err0:
+	kfree(helper_buffer);
+	return -ENOMEM;
+}
+
+
+static struct dma_heap_ops system_heap_ops = {
+	.allocate = system_heap_allocate,
+};
+
+static int system_heap_create(void)
+{
+	struct system_heap *sys_heap;
+
+	sys_heap = kzalloc(sizeof(*sys_heap), GFP_KERNEL);
+	if (!sys_heap)
+		return -ENOMEM;
+	sys_heap->heap.name = "system_heap";
+	sys_heap->heap.ops = &system_heap_ops;
+
+	dma_heap_add(&sys_heap->heap);
+
+	return 0;
+}
+device_initcall(system_heap_create);
-- 
2.7.4



* [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (2 preceding siblings ...)
  2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
@ 2019-03-05 20:54 ` John Stultz
  2019-03-06 16:05   ` Benjamin Gaignard
                     ` (2 more replies)
  2019-03-05 20:54 ` [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test John Stultz
                   ` (3 subsequent siblings)
  7 siblings, 3 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Andrew F . Davis,
	Chenbo Feng, Alistair Strachan, dri-devel

This adds a CMA heap, which allows userspace to allocate
a dma-buf of contiguous memory out of a CMA region.

This code is an evolution of the Android ION implementation, so
thanks to its original author and maintainers:
  Benjamin Gaignard, Laura Abbott, and others!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
---
 drivers/dma-buf/heaps/Kconfig    |   8 ++
 drivers/dma-buf/heaps/Makefile   |   1 +
 drivers/dma-buf/heaps/cma_heap.c | 164 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 173 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 2050527..a5eef06 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+	bool "DMA-BUF CMA Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CMA heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808ec..6e54cde 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y					+= heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 0000000..33c18ec
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
+ */
+
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/cma.h>
+#include <linux/scatterlist.h>
+#include <linux/highmem.h>
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+	struct dma_heap heap;
+	struct cma *cma;
+};
+
+
+#define to_cma_heap(x) container_of(x, struct cma_heap, heap)
+
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+	struct cma_heap *cma_heap = to_cma_heap(buffer->heap_buffer.heap);
+	struct page *pages = buffer->priv_virt;
+	unsigned long nr_pages;
+
+	nr_pages = PAGE_ALIGN(buffer->heap_buffer.size) >> PAGE_SHIFT;
+
+	/* release memory */
+	cma_release(cma_heap->cma, pages, nr_pages);
+	/* release sg table */
+	sg_free_table(buffer->sg_table);
+	kfree(buffer->sg_table);
+	kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+				unsigned long len,
+				unsigned long flags)
+{
+	struct cma_heap *cma_heap = to_cma_heap(heap);
+	struct heap_helper_buffer *helper_buffer;
+	struct sg_table *table;
+	struct page *pages;
+	size_t size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
+	helper_buffer->heap_buffer.flags = flags;
+	helper_buffer->heap_buffer.heap = heap;
+	helper_buffer->heap_buffer.size = len;
+
+
+	pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	if (!pages)
+		goto free_buf;
+
+	if (PageHighMem(pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(pages), 0, size);
+	}
+
+	table = kmalloc(sizeof(*table), GFP_KERNEL);
+	if (!table)
+		goto free_cma;
+
+	ret = sg_alloc_table(table, 1, GFP_KERNEL);
+	if (ret)
+		goto free_table;
+
+	sg_set_page(table->sgl, pages, size, 0);
+
+	/* create the dmabuf */
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = len;
+	exp_info.flags = O_RDWR;
+	exp_info.priv = &helper_buffer->heap_buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_table;
+	}
+
+	helper_buffer->heap_buffer.dmabuf = dmabuf;
+	helper_buffer->priv_virt = pages;
+	helper_buffer->sg_table = table;
+
+	ret = dma_buf_fd(dmabuf, O_CLOEXEC);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+free_table:
+	kfree(table);
+free_cma:
+	cma_release(cma_heap->cma, pages, nr_pages);
+free_buf:
+	kfree(helper_buffer);
+	return ret;
+}
+
+static struct dma_heap_ops cma_heap_ops = {
+	.allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heaps(struct cma *cma, void *data)
+{
+	struct cma_heap *cma_heap;
+
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+
+	if (!cma_heap)
+		return -ENOMEM;
+
+	cma_heap->heap.name = cma_get_name(cma);
+	cma_heap->heap.ops = &cma_heap_ops;
+	cma_heap->cma = cma;
+
+	dma_heap_add(&cma_heap->heap);
+
+	return 0;
+}
+
+static int add_cma_heaps(void)
+{
+	cma_for_each_area(__add_cma_heaps, NULL);
+	return 0;
+}
+device_initcall(add_cma_heaps);
-- 
2.7.4



* [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (3 preceding siblings ...)
  2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps John Stultz
@ 2019-03-05 20:54 ` John Stultz
  2019-03-06 16:14   ` Benjamin Gaignard
  2019-03-13 20:23   ` Liam Mark
  2019-03-13 20:11 ` [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) Liam Mark
                   ` (2 subsequent siblings)
  7 siblings, 2 replies; 68+ messages in thread
From: John Stultz @ 2019-03-05 20:54 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Andrew F . Davis,
	Chenbo Feng, Alistair Strachan, dri-devel

Add a very trivial allocation test for dma-heaps.

TODO: Need to actually do some validation on
the returned dma-buf.
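
(One way that validation might eventually be done: since the heap
helpers implement the dma-buf mmap op, do_test() could map the returned
fd and touch the memory -- roughly the following, with dmabuf_fd and
ONE_MEG as used in the test below:)

  	void *p = mmap(NULL, ONE_MEG, PROT_READ | PROT_WRITE, MAP_SHARED,
  		       dmabuf_fd, 0);

  	if (p != MAP_FAILED) {
  		memset(p, 0xff, ONE_MEG);	/* simple write sanity check */
  		munmap(p, ONE_MEG);
  	}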

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2: Switched to use the reworked dma-heap APIs
---
 tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
 2 files changed, 107 insertions(+)
 create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
 create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c

diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
new file mode 100644
index 0000000..c414ad3
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
+#LDLIBS += -lrt -lpthread -lm
+
+# these are all "safe" tests that don't modify
+# system time or require escalated privileges
+TEST_GEN_PROGS = dmabuf-heap
+
+
+include ../lib.mk
+
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
new file mode 100644
index 0000000..06837a4
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -0,0 +1,96 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include "../../../../include/uapi/linux/dma-heap.h"
+
+#define DEVPATH "/dev/dma_heap"
+
+int dmabuf_heap_open(char *name)
+{
+	int ret, fd;
+	char buf[256];
+
+	ret = sprintf(buf, "%s/%s", DEVPATH, name);
+	if (ret < 0) {
+		printf("sprintf failed!\n");
+		return ret;
+	}
+
+	fd = open(buf, O_RDWR);
+	if (fd < 0)
+		printf("open %s failed!\n", buf);
+	return fd;
+}
+
+int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
+{
+	struct dma_heap_allocation_data data = {
+		.len = len,
+		.flags = flags,
+	};
+	int ret;
+
+	if (dmabuf_fd == NULL)
+		return -EINVAL;
+
+	ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
+	if (ret < 0)
+		return ret;
+	*dmabuf_fd = (int)data.fd;
+	return ret;
+}
+
+#define ONE_MEG (1024*1024)
+
+void do_test(char *heap_name)
+{
+	int heap_fd = -1, dmabuf_fd = -1;
+	int ret;
+
+	printf("Testing heap: %s\n", heap_name);
+
+	heap_fd = dmabuf_heap_open(heap_name);
+	if (heap_fd < 0)
+		return;
+
+	printf("Allocating 1 MEG\n");
+	ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
+	if (ret)
+		goto out;
+
+	/* DO SOMETHING WITH THE DMABUF HERE? */
+
+out:
+	if (dmabuf_fd >= 0)
+		close(dmabuf_fd);
+	if (heap_fd >= 0)
+		close(heap_fd);
+}
+
+
+int main(void)
+{
+	DIR *d;
+	struct dirent *dir;
+
+	d = opendir(DEVPATH);
+	if (!d) {
+		printf("No %s directory?\n", DEVPATH);
+		return -1;
+	}
+
+	while ((dir = readdir(d)) != NULL)
+		do_test(dir->d_name);
+
+
+	return 0;
+}
-- 
2.7.4



* Re: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
@ 2019-03-06 16:01   ` Benjamin Gaignard
  2019-03-11  5:48     ` John Stultz
  2019-03-13 20:20   ` Liam Mark
  2019-03-15  9:06   ` Christoph Hellwig
  2 siblings, 1 reply; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-06 16:01 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Tue, Mar 5, 2019 at 9:54 PM John Stultz <john.stultz@linaro.org> wrote:
>
> This patch adds a system heap to the dma-buf heaps framework.
>
> This allows applications to get a page-allocator backed dma-buf
> for non-contiguous memory.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> * Dropped dead system-contig code

Just a few blank lines to remove.

Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> ---
>  drivers/dma-buf/Kconfig             |   2 +
>  drivers/dma-buf/heaps/Kconfig       |   6 ++
>  drivers/dma-buf/heaps/Makefile      |   1 +
>  drivers/dma-buf/heaps/system_heap.c | 132 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/Kconfig
>  create mode 100644 drivers/dma-buf/heaps/system_heap.c
>
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index 09c61db..63c139d 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS
>           this allows userspace to allocate dma-bufs that can be shared between
>           drivers.
>
> +source "drivers/dma-buf/heaps/Kconfig"
> +
>  endmenu
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> new file mode 100644
> index 0000000..2050527
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -0,0 +1,6 @@
> +config DMABUF_HEAPS_SYSTEM
> +       bool "DMA-BUF System Heap"
> +       depends on DMABUF_HEAPS
> +       help
> +         Choose this option to enable the system dmabuf heap. The system heap
> +         is backed by pages from the buddy allocator. If in doubt, say Y.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index de49898..d1808ec 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,2 +1,3 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-y                                  += heap-helpers.o
> +obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)      += system_heap.o
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> new file mode 100644
> index 0000000..e001661
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -0,0 +1,132 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF System heap exporter
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <asm/page.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dma-heap.h>
> +#include <linux/err.h>
> +#include <linux/highmem.h>
> +#include <linux/mm.h>
> +#include <linux/scatterlist.h>
> +#include <linux/slab.h>
> +
> +#include "heap-helpers.h"
> +
> +
remove blank line
> +struct system_heap {
> +       struct dma_heap heap;
> +};
> +
> +
remove blank line
> +static void system_heap_free(struct heap_helper_buffer *buffer)
> +{
> +       int i;
> +       struct scatterlist *sg;
> +       struct sg_table *table = buffer->sg_table;
> +
> +       for_each_sg(table->sgl, sg, table->nents, i)
> +               __free_page(sg_page(sg));
> +
> +       sg_free_table(table);
> +       kfree(table);
> +       kfree(buffer);
> +}
> +
> +static int system_heap_allocate(struct dma_heap *heap,
> +                               unsigned long len,
> +                               unsigned long flags)
> +{
> +       struct heap_helper_buffer *helper_buffer;
> +       struct sg_table *table;
> +       struct scatterlist *sg;
> +       int i, j;
> +       int npages = PAGE_ALIGN(len) / PAGE_SIZE;
> +       DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +       struct dma_buf *dmabuf;
> +       int ret = -ENOMEM;
> +
> +       helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +       if (!helper_buffer)
> +               return -ENOMEM;
> +
> +       INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free);
> +       helper_buffer->heap_buffer.flags = flags;
> +       helper_buffer->heap_buffer.heap = heap;
> +       helper_buffer->heap_buffer.size = len;
> +
> +       table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +       if (!table)
> +               goto err0;
> +
> +       i = sg_alloc_table(table, npages, GFP_KERNEL);
> +       if (i)
> +               goto err1;
> +       for_each_sg(table->sgl, sg, table->nents, i) {
> +               struct page *page;
> +
> +               page = alloc_page(GFP_KERNEL);
> +               if (!page)
> +                       goto err2;
> +               sg_set_page(sg, page, PAGE_SIZE, 0);
> +       }
> +
> +       /* create the dmabuf */
> +       exp_info.ops = &heap_helper_ops;
> +       exp_info.size = len;
> +       exp_info.flags = O_RDWR;
> +       exp_info.priv = &helper_buffer->heap_buffer;
> +       dmabuf = dma_buf_export(&exp_info);
> +       if (IS_ERR(dmabuf)) {
> +               ret = PTR_ERR(dmabuf);
> +               goto err2;
> +       }
> +
> +       helper_buffer->heap_buffer.dmabuf = dmabuf;
> +       helper_buffer->sg_table = table;
> +
> +       ret = dma_buf_fd(dmabuf, O_CLOEXEC);
> +       if (ret < 0) {
> +               dma_buf_put(dmabuf);
> +               /* just return, as put will call release and that will free */
> +               return ret;
> +       }
> +
> +       return ret;
> +
> +err2:
> +       for_each_sg(table->sgl, sg, i, j)
> +               __free_page(sg_page(sg));
> +       sg_free_table(table);
> +err1:
> +       kfree(table);
> +err0:
> +       kfree(helper_buffer);
> +       return -ENOMEM;
> +}
> +
> +
remove blank line
> +static struct dma_heap_ops system_heap_ops = {
> +       .allocate = system_heap_allocate,
> +};
> +
> +static int system_heap_create(void)
> +{
> +       struct system_heap *sys_heap;
> +
> +       sys_heap = kzalloc(sizeof(*sys_heap), GFP_KERNEL);
> +       if (!sys_heap)
> +               return -ENOMEM;
> +       sys_heap->heap.name = "system_heap";
> +       sys_heap->heap.ops = &system_heap_ops;
> +
> +       dma_heap_add(&sys_heap->heap);
> +
> +       return 0;
> +}
> +device_initcall(system_heap_create);
> --
> 2.7.4
>


-- 
Benjamin Gaignard

Graphic Study Group

Linaro.org │ Open source software for ARM SoCs

Follow Linaro: Facebook | Twitter | Blog


* Re: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps
  2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heaps John Stultz
@ 2019-03-06 16:05   ` Benjamin Gaignard
  2019-03-21 20:15     ` John Stultz
  2019-03-15  9:06   ` Christoph Hellwig
  2019-03-19 14:53   ` Brian Starkey
  2 siblings, 1 reply; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-06 16:05 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Tue, Mar 5, 2019 at 9:54 PM John Stultz <john.stultz@linaro.org> wrote:
>
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original author and maintainers:
>   Benjamin Gaignard, Laura Abbott, and others!
>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> ---
>  drivers/dma-buf/heaps/Kconfig    |   8 ++
>  drivers/dma-buf/heaps/Makefile   |   1 +
>  drivers/dma-buf/heaps/cma_heap.c | 164 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 173 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 2050527..a5eef06 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
>         help
>           Choose this option to enable the system dmabuf heap. The system heap
>           is backed by pages from the buddy allocator. If in doubt, say Y.
> +
> +config DMABUF_HEAPS_CMA
> +       bool "DMA-BUF CMA Heap"
> +       depends on DMABUF_HEAPS && DMA_CMA
> +       help
> +         Choose this option to enable dma-buf CMA heap. This heap is backed
> +         by the Contiguous Memory Allocator (CMA). If your system has these
> +         regions, you should say Y here.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index d1808ec..6e54cde 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-y                                  += heap-helpers.o
>  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)      += system_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_CMA)         += cma_heap.o
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> new file mode 100644
> index 0000000..33c18ec
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -0,0 +1,164 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF CMA heap exporter
> + *
> + * Copyright (C) 2012, 2019 Linaro Ltd.
> + * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/slab.h>
> +#include <linux/errno.h>
> +#include <linux/err.h>
> +#include <linux/cma.h>
> +#include <linux/scatterlist.h>
> +#include <linux/highmem.h>
> +
> +#include "heap-helpers.h"
> +
> +struct cma_heap {
> +       struct dma_heap heap;
> +       struct cma *cma;
> +};
> +
> +
> +#define to_cma_heap(x) container_of(x, struct cma_heap, heap)

Even though I wrote this macro years ago, I would now prefer a static
inline function here, so that the compiler can check the types.
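
Roughly something like this (just an illustrative sketch of the idea,
not a tested patch):

	static inline struct cma_heap *to_cma_heap(struct dma_heap *heap)
	{
		return container_of(heap, struct cma_heap, heap);
	}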

with that:
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>

> +
> +
> +static void cma_heap_free(struct heap_helper_buffer *buffer)
> +{
> +       struct cma_heap *cma_heap = to_cma_heap(buffer->heap_buffer.heap);
> +       struct page *pages = buffer->priv_virt;
> +       unsigned long nr_pages;
> +
> +       nr_pages = PAGE_ALIGN(buffer->heap_buffer.size) >> PAGE_SHIFT;
> +
> +       /* release memory */
> +       cma_release(cma_heap->cma, pages, nr_pages);
> +       /* release sg table */
> +       sg_free_table(buffer->sg_table);
> +       kfree(buffer->sg_table);
> +       kfree(buffer);
> +}
> +
> +/* dmabuf heap CMA operations functions */
> +static int cma_heap_allocate(struct dma_heap *heap,
> +                               unsigned long len,
> +                               unsigned long flags)
> +{
> +       struct cma_heap *cma_heap = to_cma_heap(heap);
> +       struct heap_helper_buffer *helper_buffer;
> +       struct sg_table *table;
> +       struct page *pages;
> +       size_t size = PAGE_ALIGN(len);
> +       unsigned long nr_pages = size >> PAGE_SHIFT;
> +       unsigned long align = get_order(size);
> +       DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +       struct dma_buf *dmabuf;
> +       int ret = -ENOMEM;
> +
> +       if (align > CONFIG_CMA_ALIGNMENT)
> +               align = CONFIG_CMA_ALIGNMENT;
> +
> +       helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +       if (!helper_buffer)
> +               return -ENOMEM;
> +
> +       INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
> +       helper_buffer->heap_buffer.flags = flags;
> +       helper_buffer->heap_buffer.heap = heap;
> +       helper_buffer->heap_buffer.size = len;
> +
> +
> +       pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> +       if (!pages)
> +               goto free_buf;
> +
> +       if (PageHighMem(pages)) {
> +               unsigned long nr_clear_pages = nr_pages;
> +               struct page *page = pages;
> +
> +               while (nr_clear_pages > 0) {
> +                       void *vaddr = kmap_atomic(page);
> +
> +                       memset(vaddr, 0, PAGE_SIZE);
> +                       kunmap_atomic(vaddr);
> +                       page++;
> +                       nr_clear_pages--;
> +               }
> +       } else {
> +               memset(page_address(pages), 0, size);
> +       }
> +
> +       table = kmalloc(sizeof(*table), GFP_KERNEL);
> +       if (!table)
> +               goto free_cma;
> +
> +       ret = sg_alloc_table(table, 1, GFP_KERNEL);
> +       if (ret)
> +               goto free_table;
> +
> +       sg_set_page(table->sgl, pages, size, 0);
> +
> +       /* create the dmabuf */
> +       exp_info.ops = &heap_helper_ops;
> +       exp_info.size = len;
> +       exp_info.flags = O_RDWR;
> +       exp_info.priv = &helper_buffer->heap_buffer;
> +       dmabuf = dma_buf_export(&exp_info);
> +       if (IS_ERR(dmabuf)) {
> +               ret = PTR_ERR(dmabuf);
> +               goto free_table;
> +       }
> +
> +       helper_buffer->heap_buffer.dmabuf = dmabuf;
> +       helper_buffer->priv_virt = pages;
> +       helper_buffer->sg_table = table;
> +
> +       ret = dma_buf_fd(dmabuf, O_CLOEXEC);
> +       if (ret < 0) {
> +               dma_buf_put(dmabuf);
> +               /* just return, as put will call release and that will free */
> +               return ret;
> +       }
> +
> +       return ret;
> +free_table:
> +       kfree(table);
> +free_cma:
> +       cma_release(cma_heap->cma, pages, nr_pages);
> +free_buf:
> +       kfree(helper_buffer);
> +       return ret;
> +}
> +
> +static struct dma_heap_ops cma_heap_ops = {
> +       .allocate = cma_heap_allocate,
> +};
> +
> +static int __add_cma_heaps(struct cma *cma, void *data)
> +{
> +       struct cma_heap *cma_heap;
> +
> +       cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
> +
> +       if (!cma_heap)
> +               return -ENOMEM;
> +
> +       cma_heap->heap.name = cma_get_name(cma);
> +       cma_heap->heap.ops = &cma_heap_ops;
> +       cma_heap->cma = cma;
> +
> +       dma_heap_add(&cma_heap->heap);
> +
> +       return 0;
> +}
> +
> +static int add_cma_heaps(void)
> +{
> +       cma_for_each_area(__add_cma_heaps, NULL);
> +       return 0;
> +}
> +device_initcall(add_cma_heaps);
> --
> 2.7.4
>


-- 
Benjamin Gaignard


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
@ 2019-03-06 16:12   ` Benjamin Gaignard
  2019-03-06 16:57     ` John Stultz
  2019-03-15  8:55     ` Christoph Hellwig
  2019-03-06 16:27   ` Andrew F. Davis
                     ` (4 subsequent siblings)
  5 siblings, 2 replies; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-06 16:12 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Laura Abbott, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>
> From: "Andrew F. Davis" <afd@ti.com>
>
> This framework allows a unified userspace interface for dma-buf
> exporters, allowing userland to allocate specific types of
> memory for use in dma-buf sharing.
>
> Each heap is given its own device node, which a user can
> allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
>
> This code is an evolution of the Android ION implementation,
> and a big thanks is due to its authors/maintainers over time
> for their effort:
>   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
>   Laura Abbott, and many other contributors!
>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Andrew F. Davis <afd@ti.com>
> [jstultz: reworded commit message, and lots of cleanups]
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Folded down fixes I had previously shared in implementing
>   heaps
> * Make flags a u64 (Suggested by Laura)
> * Add PAGE_ALIGN() fix to the core alloc function
> * IOCTL fixups suggested by Brian
> * Added fixes suggested by Benjamin
> * Removed core stats mgmt, as that should be implemented by
>   per-heap code
> * Changed alloc to return a dma-buf fd, rather than a buffer
>   (as it simplifies error handling)
> ---
>  MAINTAINERS                   |  16 ++++
>  drivers/dma-buf/Kconfig       |   8 ++
>  drivers/dma-buf/Makefile      |   1 +
>  drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
>  include/linux/dma-heap.h      |  65 ++++++++++++++
>  include/uapi/linux/dma-heap.h |  52 ++++++++++++
>  6 files changed, 333 insertions(+)
>  create mode 100644 drivers/dma-buf/dma-heap.c
>  create mode 100644 include/linux/dma-heap.h
>  create mode 100644 include/uapi/linux/dma-heap.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ac2e518..a661e19 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -4621,6 +4621,22 @@ F:       include/linux/*fence.h
>  F:     Documentation/driver-api/dma-buf.rst
>  T:     git git://anongit.freedesktop.org/drm/drm-misc
>
> +DMA-BUF HEAPS FRAMEWORK
> +M:     Laura Abbott <labbott@redhat.com>
> +R:     Liam Mark <lmark@codeaurora.org>
> +R:     Brian Starkey <Brian.Starkey@arm.com>
> +R:     "Andrew F. Davis" <afd@ti.com>
> +R:     John Stultz <john.stultz@linaro.org>
> +S:     Maintained
> +L:     linux-media@vger.kernel.org
> +L:     dri-devel@lists.freedesktop.org
> +L:     linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
> +F:     include/uapi/linux/dma-heap.h
> +F:     include/linux/dma-heap.h
> +F:     drivers/dma-buf/dma-heap.c
> +F:     drivers/dma-buf/heaps/*
> +T:     git git://anongit.freedesktop.org/drm/drm-misc
> +
>  DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
>  M:     Vinod Koul <vkoul@kernel.org>
>  L:     dmaengine@vger.kernel.org
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index 2e5a0fa..09c61db 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -39,4 +39,12 @@ config UDMABUF
>           A driver to let userspace turn memfd regions into dma-bufs.
>           Qemu can use this to create host dmabufs for guest framebuffers.
>
> +menuconfig DMABUF_HEAPS
> +       bool "DMA-BUF Userland Memory Heaps"
> +       select DMA_SHARED_BUFFER
> +       help
> +         Choose this option to enable the DMA-BUF userland memory heaps,
> +         this allows userspace to allocate dma-bufs that can be shared between
> +         drivers.
> +
>  endmenu
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index 0913a6c..b0332f1 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,4 +1,5 @@
>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS)     += dma-heap.o
>  obj-$(CONFIG_SYNC_FILE)                += sync_file.o
>  obj-$(CONFIG_SW_SYNC)          += sw_sync.o sync_debug.o
>  obj-$(CONFIG_UDMABUF)          += udmabuf.o
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> new file mode 100644
> index 0000000..14b3975
> --- /dev/null
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -0,0 +1,191 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Framework for userspace DMA-BUF allocations
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <linux/cdev.h>
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/idr.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +
> +#include <linux/dma-heap.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#define DEVNAME "dma_heap"
> +
> +#define NUM_HEAP_MINORS 128
> +static DEFINE_IDR(dma_heap_idr);
> +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
> +
> +dev_t dma_heap_devt;
> +struct class *dma_heap_class;
> +struct list_head dma_heap_list;
> +struct dentry *dma_heap_debug_root;
> +
> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> +                                unsigned int flags)
> +{
> +       len = PAGE_ALIGN(len);
> +       if (!len)
> +               return -EINVAL;
> +
> +       return heap->ops->allocate(heap, len, flags);
> +}
> +
> +static int dma_heap_open(struct inode *inode, struct file *filp)
> +{
> +       struct dma_heap *heap;
> +
> +       mutex_lock(&minor_lock);
> +       heap = idr_find(&dma_heap_idr, iminor(inode));
> +       mutex_unlock(&minor_lock);
> +       if (!heap) {
> +               pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
> +               return -ENODEV;
> +       }
> +
> +       /* instance data as context */
> +       filp->private_data = heap;
> +       nonseekable_open(inode, filp);
> +
> +       return 0;
> +}
> +
> +static int dma_heap_release(struct inode *inode, struct file *filp)
> +{
> +       filp->private_data = NULL;
> +
> +       return 0;
> +}
> +
> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
> +                          unsigned long arg)
> +{
> +       switch (cmd) {
> +       case DMA_HEAP_IOC_ALLOC:
> +       {
> +               struct dma_heap_allocation_data heap_allocation;
> +               struct dma_heap *heap = filp->private_data;
> +               int fd;
> +
> +               if (copy_from_user(&heap_allocation, (void __user *)arg,
> +                                  sizeof(heap_allocation)))
> +                       return -EFAULT;
> +
> +               if (heap_allocation.fd ||
> +                   heap_allocation.reserved0 ||
> +                   heap_allocation.reserved1 ||
> +                   heap_allocation.reserved2) {
> +                       pr_warn_once("dma_heap: ioctl data not valid\n");
> +                       return -EINVAL;
> +               }
> +               if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
> +                       pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
> +                       return -EINVAL;
> +               }
> +
> +               fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
> +                                          heap_allocation.flags);
> +               if (fd < 0)
> +                       return fd;
> +
> +               heap_allocation.fd = fd;
> +
> +               if (copy_to_user((void __user *)arg, &heap_allocation,
> +                                sizeof(heap_allocation)))
> +                       return -EFAULT;
> +
> +               break;
> +       }
> +       default:
> +               return -ENOTTY;
> +       }
> +
> +       return 0;
> +}
> +
> +static const struct file_operations dma_heap_fops = {
> +       .owner          = THIS_MODULE,
> +       .open           = dma_heap_open,
> +       .release        = dma_heap_release,
> +       .unlocked_ioctl = dma_heap_ioctl,
> +#ifdef CONFIG_COMPAT
> +       .compat_ioctl   = dma_heap_ioctl,
> +#endif
> +};
> +
> +int dma_heap_add(struct dma_heap *heap)
> +{
> +       struct device *dev_ret;
> +       int ret;
> +
> +       if (!heap->name || !strcmp(heap->name, "")) {
> +               pr_err("dma_heap: Cannot add heap without a name\n");
> +               return -EINVAL;
> +       }
> +
> +       if (!heap->ops || !heap->ops->allocate) {
> +               pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
> +               return -EINVAL;
> +       }
> +
> +       /* Find unused minor number */
> +       mutex_lock(&minor_lock);
> +       ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
> +       mutex_unlock(&minor_lock);
> +       if (ret < 0) {
> +               pr_err("dma_heap: Unable to get minor number for heap\n");
> +               return ret;
> +       }
> +       heap->minor = ret;
> +
> +       /* Create device */
> +       heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> +       dev_ret = device_create(dma_heap_class,
> +                               NULL,
> +                               heap->heap_devt,
> +                               NULL,
> +                               heap->name);
> +       if (IS_ERR(dev_ret)) {
> +               pr_err("dma_heap: Unable to create char device\n");
> +               return PTR_ERR(dev_ret);
> +       }
> +
> +       /* Add device */
> +       cdev_init(&heap->heap_cdev, &dma_heap_fops);
> +       ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
> +       if (ret < 0) {
> +               device_destroy(dma_heap_class, heap->heap_devt);
> +               pr_err("dma_heap: Unable to add char device\n");
> +               return ret;
> +       }
> +
> +       return 0;
> +}
> +EXPORT_SYMBOL(dma_heap_add);
> +
> +static int dma_heap_init(void)
> +{
> +       int ret;
> +
> +       ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
> +       if (ret)
> +               return ret;
> +
> +       dma_heap_class = class_create(THIS_MODULE, DEVNAME);
> +       if (IS_ERR(dma_heap_class)) {
> +               unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
> +               return PTR_ERR(dma_heap_class);
> +       }
> +
> +       return 0;
> +}
> +subsys_initcall(dma_heap_init);
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> new file mode 100644
> index 0000000..ed86a8e
> --- /dev/null
> +++ b/include/linux/dma-heap.h
> @@ -0,0 +1,65 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps Allocation Infrastructure
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _DMA_HEAPS_H
> +#define _DMA_HEAPS_H
> +
> +#include <linux/cdev.h>
> +#include <linux/types.h>
> +
> +/**
> + * struct dma_heap_buffer - metadata for a particular buffer
> + * @heap:              back pointer to the heap the buffer came from
> + * @dmabuf:            backing dma-buf for this buffer
> + * @size:              size of the buffer
> + * @flags:             buffer specific flags
> + */
> +struct dma_heap_buffer {
> +       struct dma_heap *heap;
> +       struct dma_buf *dmabuf;
> +       size_t size;
> +       unsigned long flags;
> +};
> +
> +/**
> + * struct dma_heap - represents a dmabuf heap in the system
> + * @name:              used for debugging/device-node name
> + * @ops:               ops struct for this heap
> + * @minor              minor number of this heap device
> + * @heap_devt          heap device node
> + * @heap_cdev          heap char device
> + *
> + * Represents a heap of memory from which buffers can be made.
> + */
> +struct dma_heap {
> +       const char *name;
> +       struct dma_heap_ops *ops;
> +       unsigned int minor;
> +       dev_t heap_devt;
> +       struct cdev heap_cdev;
> +};
> +
> +/**
> + * struct dma_heap_ops - ops to operate on a given heap
> + * @allocate:          allocate dmabuf and return fd
> + *
> + * allocate returns dmabuf fd  on success, -errno on error.
> + */
> +struct dma_heap_ops {
> +       int (*allocate)(struct dma_heap *heap,
> +                       unsigned long len,
> +                       unsigned long flags);
> +};
> +
> +/**
> + * dma_heap_add - adds a heap to dmabuf heaps
> + * @heap:              the heap to add
> + */
> +int dma_heap_add(struct dma_heap *heap);
> +
> +#endif /* _DMA_HEAPS_H */
> diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
> new file mode 100644
> index 0000000..75c5d3f
> --- /dev/null
> +++ b/include/uapi/linux/dma-heap.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps Userspace API
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +#ifndef _UAPI_LINUX_DMABUF_POOL_H
> +#define _UAPI_LINUX_DMABUF_POOL_H
> +
> +#include <linux/ioctl.h>
> +#include <linux/types.h>
> +
> +/**
> + * DOC: DMABUF Heaps Userspace API
> + *
> + */
> +
> +/* Currently no flags */
> +#define DMA_HEAP_VALID_FLAGS (0)

I think you need to allow flags like O_RDWR or O_CLOEXEC here, otherwise
mmap will fail.

Benjamin

> +
> +/**
> + * struct dma_heap_allocation_data - metadata passed from userspace for
> + *                                      allocations
> + * @len:               size of the allocation
> + * @flags:             flags passed to pool
> + * @fd:                        will be populated with a fd which provides the
> + *                     handle to the allocated dma-buf
> + *
> + * Provided by userspace as an argument to the ioctl
> + */
> +struct dma_heap_allocation_data {
> +       __u64 len;
> +       __u64 flags;
> +       __u32 fd;
> +       __u32 reserved0;
> +       __u32 reserved1;
> +       __u32 reserved2;
> +};
> +
> +#define DMA_HEAP_IOC_MAGIC             'H'
> +
> +/**
> + * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from pool
> + *
> + * Takes a dma_heap_allocation_data struct and returns it with the fd field
> + * populated with the dmabuf handle of the allocation.
> + */
> +#define DMA_HEAP_IOC_ALLOC     _IOWR(DMA_HEAP_IOC_MAGIC, 0, \
> +                                     struct dma_heap_allocation_data)
> +
> +#endif /* _UAPI_LINUX_DMABUF_POOL_H */
> --
> 2.7.4
>


-- 
Benjamin Gaignard


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-05 20:54 ` [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test John Stultz
@ 2019-03-06 16:14   ` Benjamin Gaignard
  2019-03-06 16:35     ` Andrew F. Davis
  2019-03-06 17:01     ` John Stultz
  2019-03-13 20:23   ` Liam Mark
  1 sibling, 2 replies; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-06 16:14 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>
> Add very trivial allocation test for dma-heaps.
>
> TODO: Need to actually do some validation on
> the returned dma-buf.
>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2: Switched to use reworked dma-heap apis
> ---
>  tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
>  tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
>  2 files changed, 107 insertions(+)
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>
> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
> new file mode 100644
> index 0000000..c414ad3
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
> +#LDLIBS += -lrt -lpthread -lm
> +
> +# these are all "safe" tests that don't modify
> +# system time or require escalated privileges
> +TEST_GEN_PROGS = dmabuf-heap
> +
> +
> +include ../lib.mk
> +
> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> new file mode 100644
> index 0000000..06837a4
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> @@ -0,0 +1,96 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <dirent.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/types.h>
> +
> +#include "../../../../include/uapi/linux/dma-heap.h"
> +
> +#define DEVPATH "/dev/dma_heap"
> +
> +int dmabuf_heap_open(char *name)
> +{
> +       int ret, fd;
> +       char buf[256];
> +
> +       ret = sprintf(buf, "%s/%s", DEVPATH, name);
> +       if (ret < 0) {
> +               printf("sprintf failed!\n");
> +               return ret;
> +       }
> +
> +       fd = open(buf, O_RDWR);
> +       if (fd < 0)
> +               printf("open %s failed!\n", buf);
> +       return fd;
> +}
> +
> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
> +{
> +       struct dma_heap_allocation_data data = {
> +               .len = len,
> +               .flags = flags,
> +       };
> +       int ret;
> +
> +       if (dmabuf_fd == NULL)
> +               return -EINVAL;
> +
> +       ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
> +       if (ret < 0)
> +               return ret;
> +       *dmabuf_fd = (int)data.fd;
> +       return ret;
> +}
> +
> +#define ONE_MEG (1024*1024)
> +
> +void do_test(char *heap_name)
> +{
> +       int heap_fd = -1, dmabuf_fd = -1;
> +       int ret;
> +
> +       printf("Testing heap: %s\n", heap_name);
> +
> +       heap_fd = dmabuf_heap_open(heap_name);
> +       if (heap_fd < 0)
> +               return;
> +
> +       printf("Allocating 1 MEG\n");
> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> +       if (ret)
> +               goto out;
> +
> +       /* DO SOMETHING WITH THE DMABUF HERE? */

You can do a call to mmap and write a pattern in the buffer.
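
For example, something like this (an untested sketch; it assumes the heap's
dma-buf supports mmap and reuses the dmabuf_fd/ONE_MEG names from the test
above):

	char *p;

	p = mmap(NULL, ONE_MEG, PROT_READ | PROT_WRITE, MAP_SHARED,
		 dmabuf_fd, 0);
	if (p == MAP_FAILED) {
		printf("mmap failed!\n");
		goto out;
	}
	/* write a pattern, then spot-check it */
	memset(p, 0xA5, ONE_MEG);
	if ((unsigned char)p[0] != 0xA5 || (unsigned char)p[ONE_MEG - 1] != 0xA5)
		printf("pattern mismatch!\n");
	munmap(p, ONE_MEG);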

Benjamin
> +
> +out:
> +       if (dmabuf_fd >= 0)
> +               close(dmabuf_fd);
> +       if (heap_fd >= 0)
> +               close(heap_fd);
> +}
> +
> +
> +int main(void)
> +{
> +       DIR *d;
> +       struct dirent *dir;
> +
> +       d = opendir(DEVPATH);
> +       if (!d) {
> +               printf("No %s directory?\n", DEVPATH);
> +               return -1;
> +       }
> +
> +       while ((dir = readdir(d)) != NULL)
> +               do_test(dir->d_name);
> +
> +
> +       return 0;
> +}
> --
> 2.7.4
>


-- 
Benjamin Gaignard


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
  2019-03-06 16:12   ` Benjamin Gaignard
@ 2019-03-06 16:27   ` Andrew F. Davis
  2019-03-06 19:03     ` John Stultz
  2019-03-15  8:54   ` Christoph Hellwig
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-06 16:27 UTC (permalink / raw)
  To: John Stultz, lkml
  Cc: Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/5/19 2:54 PM, John Stultz wrote:
> From: "Andrew F. Davis" <afd@ti.com>
> 
> This framework allows a unified userspace interface for dma-buf
> exporters, allowing userland to allocate specific types of
> memory for use in dma-buf sharing.
> 
> Each heap is given its own device node, which a user can
> allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
> 
> This code is an evolution of the Android ION implementation,
> and a big thanks is due to its authors/maintainers over time
> for their effort:
>   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
>   Laura Abbott, and many other contributors!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Andrew F. Davis <afd@ti.com>
> [jstultz: reworded commit message, and lots of cleanups]
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Folded down fixes I had previously shared in implementing
>   heaps
> * Make flags a u64 (Suggested by Laura)
> * Add PAGE_ALIGN() fix to the core alloc function
> * IOCTL fixups suggested by Brian
> * Added fixes suggested by Benjamin
> * Removed core stats mgmt, as that should be implemented by
>   per-heap code
> * Changed alloc to return a dma-buf fd, rather than a buffer
>   (as it simplifies error handling)
> ---
>  MAINTAINERS                   |  16 ++++
>  drivers/dma-buf/Kconfig       |   8 ++
>  drivers/dma-buf/Makefile      |   1 +
>  drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
>  include/linux/dma-heap.h      |  65 ++++++++++++++
>  include/uapi/linux/dma-heap.h |  52 ++++++++++++
>  6 files changed, 333 insertions(+)
>  create mode 100644 drivers/dma-buf/dma-heap.c
>  create mode 100644 include/linux/dma-heap.h
>  create mode 100644 include/uapi/linux/dma-heap.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index ac2e518..a661e19 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -4621,6 +4621,22 @@ F:	include/linux/*fence.h
>  F:	Documentation/driver-api/dma-buf.rst
>  T:	git git://anongit.freedesktop.org/drm/drm-misc
>  
> +DMA-BUF HEAPS FRAMEWORK
> +M:	Laura Abbott <labbott@redhat.com>
> +R:	Liam Mark <lmark@codeaurora.org>
> +R:	Brian Starkey <Brian.Starkey@arm.com>
> +R:	"Andrew F. Davis" <afd@ti.com>

Quotes not needed in maintainers file.

> +R:	John Stultz <john.stultz@linaro.org>
> +S:	Maintained
> +L:	linux-media@vger.kernel.org
> +L:	dri-devel@lists.freedesktop.org
> +L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
> +F:	include/uapi/linux/dma-heap.h
> +F:	include/linux/dma-heap.h
> +F:	drivers/dma-buf/dma-heap.c
> +F:	drivers/dma-buf/heaps/*
> +T:	git git://anongit.freedesktop.org/drm/drm-misc
> +
>  DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
>  M:	Vinod Koul <vkoul@kernel.org>
>  L:	dmaengine@vger.kernel.org
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index 2e5a0fa..09c61db 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -39,4 +39,12 @@ config UDMABUF
>  	  A driver to let userspace turn memfd regions into dma-bufs.
>  	  Qemu can use this to create host dmabufs for guest framebuffers.
>  
> +menuconfig DMABUF_HEAPS
> +	bool "DMA-BUF Userland Memory Heaps"
> +	select DMA_SHARED_BUFFER
> +	help
> +	  Choose this option to enable the DMA-BUF userland memory heaps,
> +	  this allows userspace to allocate dma-bufs that can be shared between
> +	  drivers.
> +
>  endmenu
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index 0913a6c..b0332f1 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,4 +1,5 @@
>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
>  obj-$(CONFIG_UDMABUF)		+= udmabuf.o
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> new file mode 100644
> index 0000000..14b3975
> --- /dev/null
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -0,0 +1,191 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Framework for userspace DMA-BUF allocations
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <linux/cdev.h>
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/idr.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +
> +#include <linux/dma-heap.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#define DEVNAME "dma_heap"
> +
> +#define NUM_HEAP_MINORS 128
> +static DEFINE_IDR(dma_heap_idr);
> +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
> +
> +dev_t dma_heap_devt;
> +struct class *dma_heap_class;
> +struct list_head dma_heap_list;
> +struct dentry *dma_heap_debug_root;
> +
> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> +				 unsigned int flags)
> +{
> +	len = PAGE_ALIGN(len);
> +	if (!len)
> +		return -EINVAL;
> +
> +	return heap->ops->allocate(heap, len, flags);
> +}
> +
> +static int dma_heap_open(struct inode *inode, struct file *filp)
> +{
> +	struct dma_heap *heap;
> +
> +	mutex_lock(&minor_lock);
> +	heap = idr_find(&dma_heap_idr, iminor(inode));
> +	mutex_unlock(&minor_lock);
> +	if (!heap) {
> +		pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
> +		return -ENODEV;
> +	}
> +
> +	/* instance data as context */
> +	filp->private_data = heap;
> +	nonseekable_open(inode, filp);
> +
> +	return 0;
> +}
> +
> +static int dma_heap_release(struct inode *inode, struct file *filp)
> +{
> +	filp->private_data = NULL;
> +
> +	return 0;
> +}
> +
> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
> +			   unsigned long arg)
> +{
> +	switch (cmd) {
> +	case DMA_HEAP_IOC_ALLOC:
> +	{
> +		struct dma_heap_allocation_data heap_allocation;
> +		struct dma_heap *heap = filp->private_data;
> +		int fd;
> +
> +		if (copy_from_user(&heap_allocation, (void __user *)arg,
> +				   sizeof(heap_allocation)))
> +			return -EFAULT;
> +
> +		if (heap_allocation.fd ||
> +		    heap_allocation.reserved0 ||
> +		    heap_allocation.reserved1 ||
> +		    heap_allocation.reserved2) {

This seems like too many reserved fields. I can understand one, but if we
ever needed all of these we would be better off just adding another alloc
ioctl.

> +			pr_warn_once("dma_heap: ioctl data not valid\n");
> +			return -EINVAL;
> +		}
> +		if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
> +			pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
> +			return -EINVAL;
> +		}
> +
> +		fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
> +					   heap_allocation.flags);
> +		if (fd < 0)
> +			return fd;
> +
> +		heap_allocation.fd = fd;
> +
> +		if (copy_to_user((void __user *)arg, &heap_allocation,
> +				 sizeof(heap_allocation)))
> +			return -EFAULT;
> +
> +		break;
> +	}
> +	default:
> +		return -ENOTTY;
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct file_operations dma_heap_fops = {
> +	.owner          = THIS_MODULE,
> +	.open		= dma_heap_open,
> +	.release	= dma_heap_release,
> +	.unlocked_ioctl = dma_heap_ioctl,
> +#ifdef CONFIG_COMPAT
> +	.compat_ioctl	= dma_heap_ioctl,
> +#endif
> +};
> +
> +int dma_heap_add(struct dma_heap *heap)
> +{
> +	struct device *dev_ret;
> +	int ret;
> +
> +	if (!heap->name || !strcmp(heap->name, "")) {
> +		pr_err("dma_heap: Cannot add heap without a name\n");

As these names end up as the device node name in the filesystem, we may
want to check for invalid names; there is probably a helper for that
somewhere.
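
Something along these lines, perhaps (purely illustrative, not pointing at
a specific existing helper):

	/* reject names that can't become a sane /dev/dma_heap/<name> node */
	if (!heap->name || !strlen(heap->name) ||
	    strlen(heap->name) > NAME_MAX || strchr(heap->name, '/')) {
		pr_err("dma_heap: invalid heap name\n");
		return -EINVAL;
	}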

> +		return -EINVAL;
> +	}
> +
> +	if (!heap->ops || !heap->ops->allocate) {
> +		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Find unused minor number */
> +	mutex_lock(&minor_lock);
> +	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
> +	mutex_unlock(&minor_lock);
> +	if (ret < 0) {
> +		pr_err("dma_heap: Unable to get minor number for heap\n");
> +		return ret;
> +	}
> +	heap->minor = ret;
> +
> +	/* Create device */
> +	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> +	dev_ret = device_create(dma_heap_class,
> +				NULL,
> +				heap->heap_devt,
> +				NULL,
> +				heap->name);
> +	if (IS_ERR(dev_ret)) {
> +		pr_err("dma_heap: Unable to create char device\n");
> +		return PTR_ERR(dev_ret);
> +	}
> +
> +	/* Add device */
> +	cdev_init(&heap->heap_cdev, &dma_heap_fops);
> +	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
> +	if (ret < 0) {
> +		device_destroy(dma_heap_class, heap->heap_devt);
> +		pr_err("dma_heap: Unable to add char device\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(dma_heap_add);
> +
> +static int dma_heap_init(void)
> +{
> +	int ret;
> +
> +	ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
> +	if (ret)
> +		return ret;
> +
> +	dma_heap_class = class_create(THIS_MODULE, DEVNAME);
> +	if (IS_ERR(dma_heap_class)) {
> +		unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
> +		return PTR_ERR(dma_heap_class);
> +	}
> +
> +	return 0;
> +}
> +subsys_initcall(dma_heap_init);
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> new file mode 100644
> index 0000000..ed86a8e
> --- /dev/null
> +++ b/include/linux/dma-heap.h
> @@ -0,0 +1,65 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps Allocation Infrastructure
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _DMA_HEAPS_H
> +#define _DMA_HEAPS_H
> +
> +#include <linux/cdev.h>
> +#include <linux/types.h>
> +
> +/**
> + * struct dma_heap_buffer - metadata for a particular buffer
> + * @heap:		back pointer to the heap the buffer came from
> + * @dmabuf:		backing dma-buf for this buffer
> + * @size:		size of the buffer
> + * @flags:		buffer specific flags
> + */
> +struct dma_heap_buffer {
> +	struct dma_heap *heap;
> +	struct dma_buf *dmabuf;
> +	size_t size;
> +	unsigned long flags;
> +};
> +
> +/**
> + * struct dma_heap - represents a dmabuf heap in the system
> + * @name:		used for debugging/device-node name
> + * @ops:		ops struct for this heap
> + * @minor		minor number of this heap device
> + * @heap_devt		heap device node
> + * @heap_cdev		heap char device
> + *
> + * Represents a heap of memory from which buffers can be made.
> + */
> +struct dma_heap {
> +	const char *name;
> +	struct dma_heap_ops *ops;
> +	unsigned int minor;
> +	dev_t heap_devt;
> +	struct cdev heap_cdev;
> +};

Still not sure about this: all of the members in this struct are used
strictly internally by the framework. The users of this framework should
not have access to them and only need to deal with an opaque pointer for
their own tracking (they can store it in a private struct of their own and
then use container_of to get back out their struct).

Anyway, not a big deal, and if it really bugs me enough I can always go
fix it later; it's all kernel internal, so not a blocker here. :)
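
Roughly the kind of split I mean (note: dma_heap_get_drvdata() and a
registration call that takes a driver-data pointer are hypothetical here,
they are not part of this patchset):

	struct my_heap_data {
		struct cma *cma;
		struct dma_heap *heap;	/* opaque handle owned by the framework */
	};

	static int my_heap_allocate(struct dma_heap *heap, unsigned long len,
				    unsigned long flags)
	{
		struct my_heap_data *data = dma_heap_get_drvdata(heap);

		/* allocate from data->cma here ... */
		return -ENOMEM;
	}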

Andrew

> +
> +/**
> + * struct dma_heap_ops - ops to operate on a given heap
> + * @allocate:		allocate dmabuf and return fd
> + *
> + * allocate returns dmabuf fd  on success, -errno on error.
> + */
> +struct dma_heap_ops {
> +	int (*allocate)(struct dma_heap *heap,
> +			unsigned long len,
> +			unsigned long flags);
> +};
> +
> +/**
> + * dma_heap_add - adds a heap to dmabuf heaps
> + * @heap:		the heap to add
> + */
> +int dma_heap_add(struct dma_heap *heap);
> +
> +#endif /* _DMA_HEAPS_H */
> diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
> new file mode 100644
> index 0000000..75c5d3f
> --- /dev/null
> +++ b/include/uapi/linux/dma-heap.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps Userspace API
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +#ifndef _UAPI_LINUX_DMABUF_POOL_H
> +#define _UAPI_LINUX_DMABUF_POOL_H
> +
> +#include <linux/ioctl.h>
> +#include <linux/types.h>
> +
> +/**
> + * DOC: DMABUF Heaps Userspace API
> + *
> + */
> +
> +/* Currently no flags */
> +#define DMA_HEAP_VALID_FLAGS (0)
> +
> +/**
> + * struct dma_heap_allocation_data - metadata passed from userspace for
> + *                                      allocations
> + * @len:		size of the allocation
> + * @flags:		flags passed to pool
> + * @fd:			will be populated with a fd which provides the
> + *			handle to the allocated dma-buf
> + *
> + * Provided by userspace as an argument to the ioctl
> + */
> +struct dma_heap_allocation_data {
> +	__u64 len;
> +	__u64 flags;
> +	__u32 fd;
> +	__u32 reserved0;
> +	__u32 reserved1;
> +	__u32 reserved2;
> +};
> +
> +#define DMA_HEAP_IOC_MAGIC		'H'
> +
> +/**
> + * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from pool
> + *
> + * Takes a dma_heap_allocation_data struct and returns it with the fd field
> + * populated with the dmabuf handle of the allocation.
> + */
> +#define DMA_HEAP_IOC_ALLOC	_IOWR(DMA_HEAP_IOC_MAGIC, 0, \
> +				      struct dma_heap_allocation_data)
> +
> +#endif /* _UAPI_LINUX_DMABUF_POOL_H */
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-06 16:14   ` Benjamin Gaignard
@ 2019-03-06 16:35     ` Andrew F. Davis
  2019-03-06 18:19       ` John Stultz
  2019-03-06 17:01     ` John Stultz
  1 sibling, 1 reply; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-06 16:35 UTC (permalink / raw)
  To: Benjamin Gaignard, John Stultz
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Chenbo Feng, Alistair Strachan, ML dri-devel

On 3/6/19 10:14 AM, Benjamin Gaignard wrote:
> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>>
>> Add very trivial allocation test for dma-heaps.
>>
>> TODO: Need to actually do some validation on
>> the returned dma-buf.
>>
>> Cc: Laura Abbott <labbott@redhat.com>
>> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
>> Cc: Greg KH <gregkh@linuxfoundation.org>
>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>> Cc: Liam Mark <lmark@codeaurora.org>
>> Cc: Brian Starkey <Brian.Starkey@arm.com>
>> Cc: Andrew F. Davis <afd@ti.com>
>> Cc: Chenbo Feng <fengc@google.com>
>> Cc: Alistair Strachan <astrachan@google.com>
>> Cc: dri-devel@lists.freedesktop.org
>> Signed-off-by: John Stultz <john.stultz@linaro.org>
>> ---
>> v2: Switched to use reworked dma-heap apis
>> ---
>>  tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
>>  tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
>>  2 files changed, 107 insertions(+)
>>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>>
>> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
>> new file mode 100644
>> index 0000000..c414ad3
>> --- /dev/null
>> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
>> @@ -0,0 +1,11 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
>> +#LDLIBS += -lrt -lpthread -lm
>> +
>> +# these are all "safe" tests that don't modify
>> +# system time or require escalated privileges
>> +TEST_GEN_PROGS = dmabuf-heap
>> +
>> +
>> +include ../lib.mk
>> +
>> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>> new file mode 100644
>> index 0000000..06837a4
>> --- /dev/null
>> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>> @@ -0,0 +1,96 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include <dirent.h>
>> +#include <errno.h>
>> +#include <fcntl.h>
>> +#include <stdio.h>
>> +#include <string.h>
>> +#include <unistd.h>
>> +#include <sys/ioctl.h>
>> +#include <sys/mman.h>
>> +#include <sys/types.h>
>> +
>> +#include "../../../../include/uapi/linux/dma-heap.h"
>> +
>> +#define DEVPATH "/dev/dma_heap"
>> +
>> +int dmabuf_heap_open(char *name)
>> +{
>> +       int ret, fd;
>> +       char buf[256];
>> +
>> +       ret = sprintf(buf, "%s/%s", DEVPATH, name);
>> +       if (ret < 0) {
>> +               printf("sprintf failed!\n");
>> +               return ret;
>> +       }
>> +
>> +       fd = open(buf, O_RDWR);
>> +       if (fd < 0)
>> +               printf("open %s failed!\n", buf);
>> +       return fd;
>> +}
>> +
>> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
>> +{
>> +       struct dma_heap_allocation_data data = {
>> +               .len = len,
>> +               .flags = flags,
>> +       };
>> +       int ret;
>> +
>> +       if (dmabuf_fd == NULL)
>> +               return -EINVAL;
>> +
>> +       ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
>> +       if (ret < 0)
>> +               return ret;
>> +       *dmabuf_fd = (int)data.fd;
>> +       return ret;
>> +}
>> +
>> +#define ONE_MEG (1024*1024)
>> +
>> +void do_test(char *heap_name)
>> +{
>> +       int heap_fd = -1, dmabuf_fd = -1;
>> +       int ret;
>> +
>> +       printf("Testing heap: %s\n", heap_name);
>> +
>> +       heap_fd = dmabuf_heap_open(heap_name);
>> +       if (heap_fd < 0)
>> +               return;
>> +
>> +       printf("Allocating 1 MEG\n");
>> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
>> +       if (ret)
>> +               goto out;
>> +
>> +       /* DO SOMETHING WITH THE DMABUF HERE? */
> 
> You can do a call to mmap and write a pattern in the buffer.
> 

mmap is optional for DMA-BUFs, only attach/map are required. To test
those we would need a dummy device, so a test kernel module may be
needed to really exercise this.

I have one I use for ION buffer testing, it consumes a DMA-BUF passed
from userspace, attach/maps it to a dummy device then returns the
physical address of the first page of the buffer for validation. Might
be a good test, but dummy devices don't always have the proper dma
attributes set like a real device does, so it may also fail for some
otherwise valid buffers.
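
The core of it looks roughly like this (a trimmed-down sketch, not the
actual module; "dev" is assumed to be the dummy device the module
registered, and error reporting is abbreviated):

	static int import_and_map(int fd, struct device *dev)
	{
		struct dma_buf *dmabuf;
		struct dma_buf_attachment *attach;
		struct sg_table *sgt;

		dmabuf = dma_buf_get(fd);
		if (IS_ERR(dmabuf))
			return PTR_ERR(dmabuf);

		attach = dma_buf_attach(dmabuf, dev);
		if (IS_ERR(attach)) {
			dma_buf_put(dmabuf);
			return PTR_ERR(attach);
		}

		sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
		if (IS_ERR(sgt)) {
			dma_buf_detach(dmabuf, attach);
			dma_buf_put(dmabuf);
			return PTR_ERR(sgt);
		}

		/* e.g. report sg_dma_address(sgt->sgl) back for validation */

		dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return 0;
	}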

Andrew

> Benjamin
>> +
>> +out:
>> +       if (dmabuf_fd >= 0)
>> +               close(dmabuf_fd);
>> +       if (heap_fd >= 0)
>> +               close(heap_fd);
>> +}
>> +
>> +
>> +int main(void)
>> +{
>> +       DIR *d;
>> +       struct dirent *dir;
>> +
>> +       d = opendir(DEVPATH);
>> +       if (!d) {
>> +               printf("No %s directory?\n", DEVPATH);
>> +               return -1;
>> +       }
>> +
>> +       while ((dir = readdir(d)) != NULL)
>> +               do_test(dir->d_name);
>> +
>> +
>> +       return 0;
>> +}
>> --
>> 2.7.4
>>
> 
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-06 16:12   ` Benjamin Gaignard
@ 2019-03-06 16:57     ` John Stultz
  2019-03-15  8:55     ` Christoph Hellwig
  1 sibling, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-06 16:57 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: lkml, Andrew F. Davis, Laura Abbott, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Wed, Mar 6, 2019 at 8:12 AM Benjamin Gaignard
<benjamin.gaignard@linaro.org> wrote:
> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
> > +/**
> > + * DOC: DMABUF Heaps Userspace API
> > + *
> > + */
> > +
> > +/* Currently no flags */
> > +#define DMA_HEAP_VALID_FLAGS (0)
>
> I think you need to allow flags like O_RDWR or O_CLOEXEC here, otherwise
> mmap will fail.
>

Hm. So I meant for this to be just the bitmask of valid flags for the
allocate ioctl (used to make sure no one is passing junk flags
accidentally), rather than valid flags for the heap open call. But that
probably suggests I should call it something like
DMA_HEAP_ALLOC_VALID_FLAGS instead?
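
Just to illustrate, nothing more than the rename (the mask is still empty
for now):

	/* bitmask of flags the allocate ioctl will accept; currently none */
	#define DMA_HEAP_ALLOC_VALID_FLAGS (0)

	if (heap_allocation.flags & ~DMA_HEAP_ALLOC_VALID_FLAGS)
		return -EINVAL;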

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-06 16:14   ` Benjamin Gaignard
  2019-03-06 16:35     ` Andrew F. Davis
@ 2019-03-06 17:01     ` John Stultz
  2019-03-15 20:07       ` Laura Abbott
  1 sibling, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-06 17:01 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Wed, Mar 6, 2019 at 8:14 AM Benjamin Gaignard
<benjamin.gaignard@linaro.org> wrote:
> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
> > +
> > +       printf("Allocating 1 MEG\n");
> > +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> > +       if (ret)
> > +               goto out;
> > +
> > +       /* DO SOMETHING WITH THE DMABUF HERE? */
>
> You can do a call to mmap and write a pattern in the buffer.

Yea. I can also do some invalid allocations to make sure things fail properly.
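
For example, something like this (sketch only, reusing the helpers from the
test above; ioctl() returns -1 with errno set on failure):

	/* these should all fail cleanly */
	ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0xDEADBEEF, &dmabuf_fd);
	if (ret >= 0)
		printf("expected invalid-flags allocation to fail!\n");

	ret = dmabuf_heap_alloc(heap_fd, 0, 0, &dmabuf_fd);
	if (ret >= 0)
		printf("expected zero-size allocation to fail!\n");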

But I was talking a bit with Sumit about the lack of any general dmabuf
tests, and I'm curious whether we need an importer device driver that
can validate it's a real dmabuf and exercise more of the dmabuf ops.

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-06 16:35     ` Andrew F. Davis
@ 2019-03-06 18:19       ` John Stultz
  2019-03-06 18:32         ` Andrew F. Davis
  0 siblings, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-06 18:19 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Benjamin Gaignard, lkml, Laura Abbott, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Wed, Mar 6, 2019 at 10:15 AM Andrew F. Davis <afd@ti.com> wrote:
>
> On 3/6/19 10:14 AM, Benjamin Gaignard wrote:
> > On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
> >>
> >> Add very trivial allocation test for dma-heaps.
> >>
> >> TODO: Need to actually do some validation on
> >> the returned dma-buf.
> >>
> >> Cc: Laura Abbott <labbott@redhat.com>
> >> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> >> Cc: Greg KH <gregkh@linuxfoundation.org>
> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> >> Cc: Liam Mark <lmark@codeaurora.org>
> >> Cc: Brian Starkey <Brian.Starkey@arm.com>
> >> Cc: Andrew F. Davis <afd@ti.com>
> >> Cc: Chenbo Feng <fengc@google.com>
> >> Cc: Alistair Strachan <astrachan@google.com>
> >> Cc: dri-devel@lists.freedesktop.org
> >> Signed-off-by: John Stultz <john.stultz@linaro.org>
> >> ---
> >> v2: Switched to use reworked dma-heap apis
> >> ---
> >>  tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
> >>  tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
> >>  2 files changed, 107 insertions(+)
> >>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
> >>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> >>
> >> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
> >> new file mode 100644
> >> index 0000000..c414ad3
> >> --- /dev/null
> >> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
> >> @@ -0,0 +1,11 @@
> >> +# SPDX-License-Identifier: GPL-2.0
> >> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
> >> +#LDLIBS += -lrt -lpthread -lm
> >> +
> >> +# these are all "safe" tests that don't modify
> >> +# system time or require escalated privileges
> >> +TEST_GEN_PROGS = dmabuf-heap
> >> +
> >> +
> >> +include ../lib.mk
> >> +
> >> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> >> new file mode 100644
> >> index 0000000..06837a4
> >> --- /dev/null
> >> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> >> @@ -0,0 +1,96 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +
> >> +#include <dirent.h>
> >> +#include <errno.h>
> >> +#include <fcntl.h>
> >> +#include <stdio.h>
> >> +#include <string.h>
> >> +#include <unistd.h>
> >> +#include <sys/ioctl.h>
> >> +#include <sys/mman.h>
> >> +#include <sys/types.h>
> >> +
> >> +#include "../../../../include/uapi/linux/dma-heap.h"
> >> +
> >> +#define DEVPATH "/dev/dma_heap"
> >> +
> >> +int dmabuf_heap_open(char *name)
> >> +{
> >> +       int ret, fd;
> >> +       char buf[256];
> >> +
> >> +       ret = sprintf(buf, "%s/%s", DEVPATH, name);
> >> +       if (ret < 0) {
> >> +               printf("sprintf failed!\n");
> >> +               return ret;
> >> +       }
> >> +
> >> +       fd = open(buf, O_RDWR);
> >> +       if (fd < 0)
> >> +               printf("open %s failed!\n", buf);
> >> +       return fd;
> >> +}
> >> +
> >> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
> >> +{
> >> +       struct dma_heap_allocation_data data = {
> >> +               .len = len,
> >> +               .flags = flags,
> >> +       };
> >> +       int ret;
> >> +
> >> +       if (dmabuf_fd == NULL)
> >> +               return -EINVAL;
> >> +
> >> +       ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
> >> +       if (ret < 0)
> >> +               return ret;
> >> +       *dmabuf_fd = (int)data.fd;
> >> +       return ret;
> >> +}
> >> +
> >> +#define ONE_MEG (1024*1024)
> >> +
> >> +void do_test(char *heap_name)
> >> +{
> >> +       int heap_fd = -1, dmabuf_fd = -1;
> >> +       int ret;
> >> +
> >> +       printf("Testing heap: %s\n", heap_name);
> >> +
> >> +       heap_fd = dmabuf_heap_open(heap_name);
> >> +       if (heap_fd < 0)
> >> +               return;
> >> +
> >> +       printf("Allocating 1 MEG\n");
> >> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> >> +       if (ret)
> >> +               goto out;
> >> +
> >> +       /* DO SOMETHING WITH THE DMABUF HERE? */
> >
> > You can do a call to mmap and write a pattern in the buffer.
> >
>
> mmap is optional for DMA-BUFs, only attach/map are required. To test
> those we would need a dummy device, so a test kernel module may be
> needed to really exercise this.
>
> I have one I use for ION buffer testing, it consumes a DMA-BUF passed
> from userspace, attach/maps it to a dummy device then returns the
> physical address of the first page of the buffer for validation. Might
> be a good test, but dummy devices don't always have the proper dma
> attributes set like a real device does, so it may also fail for some
> otherwise valid buffers.

Cool! Do you mind sharing that? I might try to rework and integrate it
into this patchset?

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-06 18:19       ` John Stultz
@ 2019-03-06 18:32         ` Andrew F. Davis
  0 siblings, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-06 18:32 UTC (permalink / raw)
  To: John Stultz
  Cc: Benjamin Gaignard, lkml, Laura Abbott, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On 3/6/19 12:19 PM, John Stultz wrote:
> On Wed, Mar 6, 2019 at 10:15 AM Andrew F. Davis <afd@ti.com> wrote:
>>
>> On 3/6/19 10:14 AM, Benjamin Gaignard wrote:
>>> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>>>>
>>>> Add very trivial allocation test for dma-heaps.
>>>>
>>>> TODO: Need to actually do some validation on
>>>> the returned dma-buf.
>>>>
>>>> Cc: Laura Abbott <labbott@redhat.com>
>>>> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
>>>> Cc: Greg KH <gregkh@linuxfoundation.org>
>>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>>> Cc: Liam Mark <lmark@codeaurora.org>
>>>> Cc: Brian Starkey <Brian.Starkey@arm.com>
>>>> Cc: Andrew F. Davis <afd@ti.com>
>>>> Cc: Chenbo Feng <fengc@google.com>
>>>> Cc: Alistair Strachan <astrachan@google.com>
>>>> Cc: dri-devel@lists.freedesktop.org
>>>> Signed-off-by: John Stultz <john.stultz@linaro.org>
>>>> ---
>>>> v2: Switched to use reworked dma-heap apis
>>>> ---
>>>>  tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
>>>>  tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
>>>>  2 files changed, 107 insertions(+)
>>>>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>>>>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>>>>
>>>> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
>>>> new file mode 100644
>>>> index 0000000..c414ad3
>>>> --- /dev/null
>>>> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
>>>> @@ -0,0 +1,11 @@
>>>> +# SPDX-License-Identifier: GPL-2.0
>>>> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
>>>> +#LDLIBS += -lrt -lpthread -lm
>>>> +
>>>> +# these are all "safe" tests that don't modify
>>>> +# system time or require escalated privileges
>>>> +TEST_GEN_PROGS = dmabuf-heap
>>>> +
>>>> +
>>>> +include ../lib.mk
>>>> +
>>>> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>>>> new file mode 100644
>>>> index 0000000..06837a4
>>>> --- /dev/null
>>>> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>>>> @@ -0,0 +1,96 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +
>>>> +#include <dirent.h>
>>>> +#include <errno.h>
>>>> +#include <fcntl.h>
>>>> +#include <stdio.h>
>>>> +#include <string.h>
>>>> +#include <unistd.h>
>>>> +#include <sys/ioctl.h>
>>>> +#include <sys/mman.h>
>>>> +#include <sys/types.h>
>>>> +
>>>> +#include "../../../../include/uapi/linux/dma-heap.h"
>>>> +
>>>> +#define DEVPATH "/dev/dma_heap"
>>>> +
>>>> +int dmabuf_heap_open(char *name)
>>>> +{
>>>> +       int ret, fd;
>>>> +       char buf[256];
>>>> +
>>>> +       ret = sprintf(buf, "%s/%s", DEVPATH, name);
>>>> +       if (ret < 0) {
>>>> +               printf("sprintf failed!\n");
>>>> +               return ret;
>>>> +       }
>>>> +
>>>> +       fd = open(buf, O_RDWR);
>>>> +       if (fd < 0)
>>>> +               printf("open %s failed!\n", buf);
>>>> +       return fd;
>>>> +}
>>>> +
>>>> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
>>>> +{
>>>> +       struct dma_heap_allocation_data data = {
>>>> +               .len = len,
>>>> +               .flags = flags,
>>>> +       };
>>>> +       int ret;
>>>> +
>>>> +       if (dmabuf_fd == NULL)
>>>> +               return -EINVAL;
>>>> +
>>>> +       ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
>>>> +       if (ret < 0)
>>>> +               return ret;
>>>> +       *dmabuf_fd = (int)data.fd;
>>>> +       return ret;
>>>> +}
>>>> +
>>>> +#define ONE_MEG (1024*1024)
>>>> +
>>>> +void do_test(char *heap_name)
>>>> +{
>>>> +       int heap_fd = -1, dmabuf_fd = -1;
>>>> +       int ret;
>>>> +
>>>> +       printf("Testing heap: %s\n", heap_name);
>>>> +
>>>> +       heap_fd = dmabuf_heap_open(heap_name);
>>>> +       if (heap_fd < 0)
>>>> +               return;
>>>> +
>>>> +       printf("Allocating 1 MEG\n");
>>>> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
>>>> +       if (ret)
>>>> +               goto out;
>>>> +
>>>> +       /* DO SOMETHING WITH THE DMABUF HERE? */
>>>
>>> You can do a call to mmap and write a pattern in the buffer.
>>>
>>
>> mmap is optional for DMA-BUFs, only attach/map are required. To test
>> those we would need a dummy device, so a test kernel module may be
>> needed to really exercise this.
>>
>> I have one I use for ION buffer testing, it consumes a DMA-BUF passed
>> from userspace, attach/maps it to a dummy device then return the
>> physical address of the first page of the buffer for validation. Might
>> be a good test, but dummy devices don't always have the proper dma
>> attributes set like a real device does, so it may also fail for some
>> otherwise valid buffers.
> 
> Cool! Do you mind sharing that? I might try to rework and integrate it
> into this patchset?
> 

Sure, top two patches here:

> https://git.ti.com/ti-analog-linux-kernel/afd-analog/commits/dma-buf-to-phys
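
The gist of it (a hedged sketch of the approach only, not the actual patches
in that tree) is a module that takes a dmabuf fd from userspace, attaches and
maps it against the module's dummy device, and reports the DMA address of the
first page (the physical address when no IOMMU is involved); error handling
is trimmed here:

#include <linux/dma-buf.h>
#include <linux/scatterlist.h>

static long test_attach_map(struct device *dev, int fd)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	dma_addr_t addr;

	dmabuf = dma_buf_get(fd);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* attach/map against the dummy device, as a real consumer would */
	attach = dma_buf_attach(dmabuf, dev);
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	addr = sg_dma_address(sgt->sgl);

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);

	return (long)addr;
}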

Andrew

> thanks
> -john
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-06 16:27   ` Andrew F. Davis
@ 2019-03-06 19:03     ` John Stultz
  2019-03-06 21:45       ` Andrew F. Davis
  0 siblings, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-06 19:03 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On Wed, Mar 6, 2019 at 10:18 AM Andrew F. Davis <afd@ti.com> wrote:
>
> On 3/5/19 2:54 PM, John Stultz wrote:
> > From: "Andrew F. Davis" <afd@ti.com>
> >
> > This framework allows a unified userspace interface for dma-buf
> > exporters, allowing userland to allocate specific types of
> > memory for use in dma-buf sharing.
> >
> > Each heap is given its own device node, which a user can
> > allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
> >
> > This code is an evolution of the Android ION implementation,
> > and a big thanks is due to its authors/maintainers over time
> > for their effort:
> >   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
> >   Laura Abbott, and many other contributors!
> >
> > Cc: Laura Abbott <labbott@redhat.com>
> > Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> > Cc: Greg KH <gregkh@linuxfoundation.org>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: Liam Mark <lmark@codeaurora.org>
> > Cc: Brian Starkey <Brian.Starkey@arm.com>
> > Cc: Andrew F. Davis <afd@ti.com>
> > Cc: Chenbo Feng <fengc@google.com>
> > Cc: Alistair Strachan <astrachan@google.com>
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Andrew F. Davis <afd@ti.com>
> > [jstultz: reworded commit message, and lots of cleanups]
> > Signed-off-by: John Stultz <john.stultz@linaro.org>
> > ---
> > v2:
> > * Folded down fixes I had previously shared in implementing
> >   heaps
> > * Make flags a u64 (Suggested by Laura)
> > * Add PAGE_ALIGN() fix to the core alloc function
> > * IOCTL fixups suggested by Brian
> > * Added fixes suggested by Benjamin
> > * Removed core stats mgmt, as that should be implemented by
> >   per-heap code
> > * Changed alloc to return a dma-buf fd, rather than a buffer
> >   (as it simplifies error handling)
> > ---
> >  MAINTAINERS                   |  16 ++++
> >  drivers/dma-buf/Kconfig       |   8 ++
> >  drivers/dma-buf/Makefile      |   1 +
> >  drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
> >  include/linux/dma-heap.h      |  65 ++++++++++++++
> >  include/uapi/linux/dma-heap.h |  52 ++++++++++++
> >  6 files changed, 333 insertions(+)
> >  create mode 100644 drivers/dma-buf/dma-heap.c
> >  create mode 100644 include/linux/dma-heap.h
> >  create mode 100644 include/uapi/linux/dma-heap.h
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index ac2e518..a661e19 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -4621,6 +4621,22 @@ F:     include/linux/*fence.h
> >  F:   Documentation/driver-api/dma-buf.rst
> >  T:   git git://anongit.freedesktop.org/drm/drm-misc
> >
> > +DMA-BUF HEAPS FRAMEWORK
> > +M:   Laura Abbott <labbott@redhat.com>
> > +R:   Liam Mark <lmark@codeaurora.org>
> > +R:   Brian Starkey <Brian.Starkey@arm.com>
> > +R:   "Andrew F. Davis" <afd@ti.com>
>
> Quotes not needed in maintainers file.

Whatever you say, "Andrew F. Davis", or whomever you really are! ;)


> > +
> > +             if (heap_allocation.fd ||
> > +                 heap_allocation.reserved0 ||
> > +                 heap_allocation.reserved1 ||
> > +                 heap_allocation.reserved2) {
>
> Seems like too many reserved fields; I can understand one, but if we ever needed all
> of these we would be better off just adding another alloc ioctl.

Well, we have to have one u32 for padding. And I figured if we needed
anything more than a u32, then we're in for 2 more.

And given the potential need for alignment and heap-private flags, I
worry we might want to have something, but I guess we could just add
a new ioctl and keep the support for the old one if folks prefer.
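
(For context, the ioctl argument being discussed is roughly of the following
shape; this is reconstructed from the checks quoted above rather than copied
from the patch. The caller fills in len and flags, fd is written back by the
kernel, and the reserved words must be zero on input.)

struct dma_heap_allocation_data {
	__u64 len;		/* in: requested size */
	__u64 flags;		/* in: heap flags */
	__u32 fd;		/* out: dmabuf fd */
	__u32 reserved0;	/* must be zero */
	__u32 reserved1;
	__u32 reserved2;
};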

> > +int dma_heap_add(struct dma_heap *heap)
> > +{
> > +     struct device *dev_ret;
> > +     int ret;
> > +
> > +     if (!heap->name || !strcmp(heap->name, "")) {
> > +             pr_err("dma_heap: Cannot add heap without a name\n");
>
> As these names end up as the dev name in the file system we may want to
> check for invalid names, there is probably a helper for that somewhere.

Hrm. I'll have to look.
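
A minimal sketch of the kind of check meant here (not claiming this is the
helper being hinted at) would at least reject empty names and path separators
before the device node is created:

static bool dma_heap_name_valid(const char *name)
{
	/* reject NULL/empty names and anything containing a '/' */
	return name && *name && !strchr(name, '/');
}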

> > +struct dma_heap {
> > +     const char *name;
> > +     struct dma_heap_ops *ops;
> > +     unsigned int minor;
> > +     dev_t heap_devt;
> > +     struct cdev heap_cdev;
> > +};
>
> Still not sure about this, all of the members in this struct are
> strictly internally used by the framework. The users of this framework
> should not have access to them and only need to deal with an opaque
> pointer for tracking themselves (can store it in a private struct of
> their own then container_of to get back out their struct).
>
> Anyway, not a big deal, and if it really bugs me enough I can always go
> fix it later, it's all kernel internal so not a blocker here. :)

I guess I'd just move the include/linux/dma-heap.h to
drivers/dma-buf/heaps/ and keep it localized there.
But whichever. Feel free to also send a patch and I can fold it down.

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-06 19:03     ` John Stultz
@ 2019-03-06 21:45       ` Andrew F. Davis
  0 siblings, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-06 21:45 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/6/19 1:03 PM, John Stultz wrote:
> On Wed, Mar 6, 2019 at 10:18 AM Andrew F. Davis <afd@ti.com> wrote:
>>
>> On 3/5/19 2:54 PM, John Stultz wrote:
>>> From: "Andrew F. Davis" <afd@ti.com>
>>>
>>> This framework allows a unified userspace interface for dma-buf
>>> exporters, allowing userland to allocate specific types of
>>> memory for use in dma-buf sharing.
>>>
>>> Each heap is given its own device node, which a user can
>>> allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
>>>
>>> This code is an evolution of the Android ION implementation,
>>> and a big thanks is due to its authors/maintainers over time
>>> for their effort:
>>>   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
>>>   Laura Abbott, and many other contributors!
>>>
>>> Cc: Laura Abbott <labbott@redhat.com>
>>> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
>>> Cc: Greg KH <gregkh@linuxfoundation.org>
>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>> Cc: Liam Mark <lmark@codeaurora.org>
>>> Cc: Brian Starkey <Brian.Starkey@arm.com>
>>> Cc: Andrew F. Davis <afd@ti.com>
>>> Cc: Chenbo Feng <fengc@google.com>
>>> Cc: Alistair Strachan <astrachan@google.com>
>>> Cc: dri-devel@lists.freedesktop.org
>>> Signed-off-by: Andrew F. Davis <afd@ti.com>
>>> [jstultz: reworded commit message, and lots of cleanups]
>>> Signed-off-by: John Stultz <john.stultz@linaro.org>
>>> ---
>>> v2:
>>> * Folded down fixes I had previously shared in implementing
>>>   heaps
>>> * Make flags a u64 (Suggested by Laura)
>>> * Add PAGE_ALIGN() fix to the core alloc function
>>> * IOCTL fixups suggested by Brian
>>> * Added fixes suggested by Benjamin
>>> * Removed core stats mgmt, as that should be implemented by
>>>   per-heap code
>>> * Changed alloc to return a dma-buf fd, rather than a buffer
>>>   (as it simplifies error handling)
>>> ---
>>>  MAINTAINERS                   |  16 ++++
>>>  drivers/dma-buf/Kconfig       |   8 ++
>>>  drivers/dma-buf/Makefile      |   1 +
>>>  drivers/dma-buf/dma-heap.c    | 191 ++++++++++++++++++++++++++++++++++++++++++
>>>  include/linux/dma-heap.h      |  65 ++++++++++++++
>>>  include/uapi/linux/dma-heap.h |  52 ++++++++++++
>>>  6 files changed, 333 insertions(+)
>>>  create mode 100644 drivers/dma-buf/dma-heap.c
>>>  create mode 100644 include/linux/dma-heap.h
>>>  create mode 100644 include/uapi/linux/dma-heap.h
>>>
>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index ac2e518..a661e19 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -4621,6 +4621,22 @@ F:     include/linux/*fence.h
>>>  F:   Documentation/driver-api/dma-buf.rst
>>>  T:   git git://anongit.freedesktop.org/drm/drm-misc
>>>
>>> +DMA-BUF HEAPS FRAMEWORK
>>> +M:   Laura Abbott <labbott@redhat.com>
>>> +R:   Liam Mark <lmark@codeaurora.org>
>>> +R:   Brian Starkey <Brian.Starkey@arm.com>
>>> +R:   "Andrew F. Davis" <afd@ti.com>
>>
>> Quotes not needed in maintainers file.
> 
> Whatever you say, "Andrew F. Davis", or whomever you really are! ;)
> 

 <_<
 >_>
> 
>>> +
>>> +             if (heap_allocation.fd ||
>>> +                 heap_allocation.reserved0 ||
>>> +                 heap_allocation.reserved1 ||
>>> +                 heap_allocation.reserved2) {
>>
>> Seems like too many reserved fields; I can understand one, but if we ever needed all
>> of these we would be better off just adding another alloc ioctl.
> 
> Well, we have to have one u32 for padding. And I figured if we needed
> anything more than a u32, then we're in for 2 more.
> 
> And given the potential need for alignment and heap-private flags, I
> worry we might want to have something, but I guess we could just add
> a new ioctl and keep the support for the old one if folks prefer.
> 
>>> +int dma_heap_add(struct dma_heap *heap)
>>> +{
>>> +     struct device *dev_ret;
>>> +     int ret;
>>> +
>>> +     if (!heap->name || !strcmp(heap->name, "")) {
>>> +             pr_err("dma_heap: Cannot add heap without a name\n");
>>
>> As these names end up as the dev name in the file system we may want to
>> check for invalid names, there is probably a helper for that somewhere.
> 
> Hrm. I'll have to look.
> 
>>> +struct dma_heap {
>>> +     const char *name;
>>> +     struct dma_heap_ops *ops;
>>> +     unsigned int minor;
>>> +     dev_t heap_devt;
>>> +     struct cdev heap_cdev;
>>> +};
>>
>> Still not sure about this, all of the members in this struct are
>> strictly internally used by the framework. The users of this framework
>> should not have access to them and only need to deal with an opaque
>> pointer for tracking themselves (can store it in a private struct of
>> their own then container_of to get back out their struct).
>>
>> Anyway, not a big deal, and if it really bugs me enough I can always go
>> fix it later, it's all kernel internal so not a blocker here. :)
> 
> I guess I'd just move the include/linux/dma-heap.h to
> drivers/dma-buf/heaps/ and keep it localized there.
> But whichever. Feel free to also send a patch and I can fold it down.
> 

The dma-heap.h needs to stay where it is; I was thinking of just moving
struct dma_heap to inside drivers/dma-buf/dma-heap.c. I wouldn't worry
about changing anything right now though, I'll post a patch you can
squash in later once we confirm this whole dma-heap thing gets deemed
acceptable in the first place.
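
As a sketch of that direction (not what this v2 implements), the
framework-facing API could hide struct dma_heap entirely and let exporters
track their own state through a driver-data pointer, roughly:

/* struct dma_heap defined only inside drivers/dma-buf/dma-heap.c */
struct dma_heap;

struct dma_heap_export_info {
	const char *name;
	const struct dma_heap_ops *ops;
	void *priv;		/* exporter's private data */
};

struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
void *dma_heap_get_drvdata(struct dma_heap *heap);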

Thanks,
Andrew

> thanks
> -john
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-06 16:01   ` Benjamin Gaignard
@ 2019-03-11  5:48     ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-11  5:48 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Wed, Mar 6, 2019 at 8:01 AM Benjamin Gaignard
<benjamin.gaignard@linaro.org> wrote:
>
> Le mar. 5 mars 2019 à 21:54, John Stultz <john.stultz@linaro.org> a écrit :
> >
> > This patch adds system heap to the dma-buf heaps framework.
> >
> > This allows applications to get a page-allocator backed dma-buf
> > for non-contiguous memory.
> >
> > This code is an evolution of the Android ION implementation, so
> > thanks to its original authors and maintainers:
> >   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> >
> > Cc: Laura Abbott <labbott@redhat.com>
> > Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> > Cc: Greg KH <gregkh@linuxfoundation.org>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: Liam Mark <lmark@codeaurora.org>
> > Cc: Brian Starkey <Brian.Starkey@arm.com>
> > Cc: Andrew F. Davis <afd@ti.com>
> > Cc: Chenbo Feng <fengc@google.com>
> > Cc: Alistair Strachan <astrachan@google.com>
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: John Stultz <john.stultz@linaro.org>
> > ---
> > v2:
> > * Switch allocate to return dmabuf fd
> > * Simplify init code
> > * Checkpatch fixups
> > * Droped dead system-contig code
>
> just a few blank lines to remove.
> 
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>

Done! Thanks so much for the review!
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (4 preceding siblings ...)
  2019-03-05 20:54 ` [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test John Stultz
@ 2019-03-13 20:11 ` Liam Mark
  2019-03-13 22:30   ` John Stultz
  2019-03-15 20:34 ` Laura Abbott
  2019-03-15 23:15 ` Jerome Glisse
  7 siblings, 1 reply; 68+ messages in thread
From: Liam Mark @ 2019-03-13 20:11 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel

On Tue, 5 Mar 2019, John Stultz wrote:

> Here is a initial RFC of the dma-buf heaps patchset Andrew and I
> have been working on which tries to destage a fair chunk of ION
> functionality.
> 
> The patchset implements per-heap devices which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> heap.
> 
> The interface is similar, but much simpler then IONs, only
> providing an ALLOC ioctl.
> 
> Also, I've provided simple system and cma heaps. The system
> heap in particular is missing the page-pool optimizations ION
> had, but works well enough to validate the interface.
> 
> I've booted and tested these patches with AOSP on the HiKey960
> using the kernel tree here:
>   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> 
> And the userspace changes here:
>   https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
> 
> 
> Compared to ION, this patchset is missing the system-contig,
> carveout and chunk heaps, as I don't have a device that uses
> those, so I'm unable to do much useful validation there.
> Additionally we have no upstream users of chunk or carveout,
> and the system-contig has been deprecated in the common/andoid-*
> kernels, so this should be ok.
> 
> I've also removed the stats accounting for now, since it should
> be implemented by the heaps themselves.
> 
> Eventual TODOS:
> * Reimplement page-pool for system heap (working on this)
> * Add stats accounting to system/cma heaps
> * Make the kselftest actually useful
> * Add other heaps folks see as useful (would love to get
>   some help from actual carveout/chunk users)!

We use a modified carveout heap for certain secure use cases.
Although there would probably be some benefit in discussing how the dma-buf
heap framework may want to support
secure heaps in the future, it is a large topic which I assume you don't
want to tackle now.

We don't have any non-secure carveout heap use cases, but the client use
cases I have seen usually revolve around
wanting large allocations to succeed very quickly.
For example, I have seen camera use cases which do very large allocations
on camera bootup; these allocations would come from
the carveout heap and fall back to the system heap when the carveout heap
was full.
Actual non-secure carveout heap users can perhaps provide more detail.
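
As a rough illustration of that fallback policy (the heap names here are
assumptions, and the helpers are the ones from the kselftest in patch 5;
this is a sketch, not something from the series):

static int alloc_with_fallback(size_t len, int *dmabuf_fd)
{
	/* try the carveout heap first, fall back to the system heap */
	static const char * const heaps[] = { "carveout", "system_heap" };
	unsigned int i;
	int heap_fd, ret = -1;

	for (i = 0; i < sizeof(heaps) / sizeof(heaps[0]); i++) {
		heap_fd = dmabuf_heap_open((char *)heaps[i]);
		if (heap_fd < 0)
			continue;
		ret = dmabuf_heap_alloc(heap_fd, len, 0, dmabuf_fd);
		close(heap_fd);
		if (!ret)
			break;
	}
	return ret;
}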

> 
> That said, the main user-interface is shaping up and I wanted
> to get some input on the device model (particularly from GreKH)
> and any other API/ABI specific input. 
> 
> thanks
> -john

Thanks John and Andrew, it's great to see this effort to get this 
functionality out of staging.

Since we are making some fundamental changes to how ION worked, and since
Android is also likely to be the largest user of the dma-buf heaps framework,
I think it would be good
to have a path to resolve the issues which are currently preventing
commercial Android releases from moving to the upstream version of ION.

I can understand if you don't necessarily want to put all/any of these
changes into the dma-buf heaps framework as part of this series, but my
hope is we can get
the upstream community and the Android framework team to agree on what
upstreamable changes to the dma-buf heaps framework, and/or the Android
framework, would be required in order for Android to move to the upstream
dma-buf heaps framework for commercial devices.

I don't mean to make this specific to Android, but my assumption is that 
many of the ION/dma-buf heaps issues which affect Android would likely 
affect other new large users of the dma-buf heaps framework, so if we 
resolve it for Android we would be helping these future users as well.
And I do understand that some of the issues facing Android may need to be
resolved by making changes to the Android framework.

I think it would be helpful to try and get as much of this agreed upon as 
possible before the dma-buf heaps framework moves out of staging.

As part of my review I will highlight some of the issues which would 
affect Android. 
In my comments I will apply them to the system heap since that is what 
Android currently uses for a lot of its use cases. 
I realize that this new framework provides more flexibility to heaps, so 
perhaps some of these issues can be solved by creating a new type of 
system heap which Android can use, but even if the solution involves 
creating a new system heap I would like to make sure that this "new" 
system heap is upstreamable.

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
@ 2019-03-13 20:18   ` Liam Mark
  2019-03-13 21:48     ` Andrew F. Davis
  2019-03-15  9:06   ` Christoph Hellwig
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 68+ messages in thread
From: Liam Mark @ 2019-03-13 20:18 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel

On Tue, 5 Mar 2019, John Stultz wrote:

> Add generic helper dmabuf ops for dma heaps, so we can reduce
> the amount of duplicative code for the exported dmabufs.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Removed cache management performance hack that I had
>   accidentally folded in.
> * Removed stats code that was in helpers
> * Lots of checkpatch cleanups
> ---
>  drivers/dma-buf/Makefile             |   1 +
>  drivers/dma-buf/heaps/Makefile       |   2 +
>  drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
>  drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
>  4 files changed, 386 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/Makefile
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
> 
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index b0332f1..09c2f2d 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,4 +1,5 @@
>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
>  obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> new file mode 100644
> index 0000000..de49898
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -0,0 +1,2 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-y					+= heap-helpers.o
> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
> new file mode 100644
> index 0000000..ae5e9d0
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.c
> @@ -0,0 +1,335 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/idr.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#include "heap-helpers.h"
> +
> +
> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> +	struct scatterlist *sg;
> +	int i, j;
> +	void *vaddr;
> +	pgprot_t pgprot;
> +	struct sg_table *table = buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
> +	struct page **pages = vmalloc(array_size(npages,
> +						 sizeof(struct page *)));
> +	struct page **tmp = pages;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	pgprot = PAGE_KERNEL;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
> +		struct page *page = sg_page(sg);
> +
> +		WARN_ON(i >= npages);
> +		for (j = 0; j < npages_this_entry; j++)
> +			*(tmp++) = page++;
> +	}
> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
> +	vfree(pages);
> +
> +	if (!vaddr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return vaddr;
> +}
> +
> +static int dma_heap_map_user(struct heap_helper_buffer *buffer,
> +			 struct vm_area_struct *vma)
> +{
> +	struct sg_table *table = buffer->sg_table;
> +	unsigned long addr = vma->vm_start;
> +	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
> +	struct scatterlist *sg;
> +	int i;
> +	int ret;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		struct page *page = sg_page(sg);
> +		unsigned long remainder = vma->vm_end - addr;
> +		unsigned long len = sg->length;
> +
> +		if (offset >= sg->length) {
> +			offset -= sg->length;
> +			continue;
> +		} else if (offset) {
> +			page += offset / PAGE_SIZE;
> +			len = sg->length - offset;
> +			offset = 0;
> +		}
> +		len = min(len, remainder);
> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
> +				      vma->vm_page_prot);
> +		if (ret)
> +			return ret;
> +		addr += len;
> +		if (addr >= vma->vm_end)
> +			return 0;
> +	}
> +
> +	return 0;
> +}
> +
> +
> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	if (buffer->kmap_cnt > 0) {
> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
> +			     __func__);
> +		vunmap(buffer->vaddr);
> +	}
> +
> +	buffer->free(buffer);
> +}
> +
> +static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	void *vaddr;
> +
> +	if (buffer->kmap_cnt) {
> +		buffer->kmap_cnt++;
> +		return buffer->vaddr;
> +	}
> +	vaddr = dma_heap_map_kernel(buffer);
> +	if (WARN_ONCE(!vaddr,
> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ERR(vaddr))
> +		return vaddr;
> +	buffer->vaddr = vaddr;
> +	buffer->kmap_cnt++;
> +	return vaddr;
> +}
> +
> +static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	buffer->kmap_cnt--;
> +	if (!buffer->kmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}
> +}
> +
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> +	struct sg_table *new_table;
> +	int ret, i;
> +	struct scatterlist *sg, *new_sg;
> +
> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> +	if (!new_table)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(new_table);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	new_sg = new_table->sgl;
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		memcpy(new_sg, sg, sizeof(*sg));
> +		new_sg->dma_address = 0;
> +		new_sg = sg_next(new_sg);
> +	}
> +
> +	return new_table;
> +}
> +
> +static void free_duped_table(struct sg_table *table)
> +{
> +	sg_free_table(table);
> +	kfree(table);
> +}
> +
> +struct dma_heaps_attachment {
> +	struct device *dev;
> +	struct sg_table *table;
> +	struct list_head list;
> +};
> +
> +static int dma_heap_attach(struct dma_buf *dmabuf,
> +			      struct dma_buf_attachment *attachment)
> +{
> +	struct dma_heaps_attachment *a;
> +	struct sg_table *table;
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	table = dup_sg_table(buffer->sg_table);
> +	if (IS_ERR(table)) {
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	a->table = table;
> +	a->dev = attachment->dev;
> +	INIT_LIST_HEAD(&a->list);
> +
> +	attachment->priv = a;
> +
> +	mutex_lock(&buffer->lock);
> +	list_add(&a->list, &buffer->attachments);
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static void dma_heap_detatch(struct dma_buf *dmabuf,
> +				struct dma_buf_attachment *attachment)
> +{
> +	struct dma_heaps_attachment *a = attachment->priv;
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	mutex_lock(&buffer->lock);
> +	list_del(&a->list);
> +	mutex_unlock(&buffer->lock);
> +	free_duped_table(a->table);
> +
> +	kfree(a);
> +}
> +
> +static struct sg_table *dma_heap_map_dma_buf(
> +					struct dma_buf_attachment *attachment,
> +					enum dma_data_direction direction)
> +{
> +	struct dma_heaps_attachment *a = attachment->priv;
> +	struct sg_table *table;
> +
> +	table = a->table;
> +
> +	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> +			direction))

Since this code is used for the system heap and as the reference:
In multimedia use cases very large buffers can be allocated from the system
heap, and since system heap allocations have a cached kernel mapping it
has been difficult to support uncached allocations, so clients will likely
allocate them as cached allocations.
Most access to these buffers will occur from non IO-coherent devices;
however, in frameworks such as Android these buffers will be dma mapped and
dma unmapped frequently, for every frame and for each device in the
"buffer pipeline", which leads to a lot of unnecessary cache maintenance
in the dma map and dma unmap calls.
From previous discussions it doesn't seem like this could be optimized by
only making changes to the dma-buf heaps framework.

So I think it would be helpful to try and agree on what types of changes
would be required to the Android framework and possibly the dma-buf heaps
framework to resolve this. For example:

- Have the Android framework keep buffers dma mapped for the whole use case

- Or perhaps have Android keep the required devices attached to the buffer as
they are "pipelined" so that cache maintenance can be skipped in dma map
and dma unmap but reliably applied in begin/end_cpu_access.
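
To make the second option concrete, one possible direction (a sketch only,
not part of this series, and only safe if begin/end_cpu_access reliably
perform the maintenance) would be for the helper to skip the CPU sync at
map/unmap time:

static struct sg_table *dma_heap_map_dma_buf_nosync(
					struct dma_buf_attachment *attachment,
					enum dma_data_direction direction)
{
	struct dma_heaps_attachment *a = attachment->priv;
	struct sg_table *table = a->table;

	/* defer all cache maintenance to begin/end_cpu_access */
	if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
			      direction, DMA_ATTR_SKIP_CPU_SYNC))
		return ERR_PTR(-ENOMEM);
	return table;
}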

> +		table = ERR_PTR(-ENOMEM);
> +	return table;
> +}
> +
> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> +			      struct sg_table *table,
> +			      enum dma_data_direction direction)
> +{
> +	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +}
> +
> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	/* now map it to userspace */
> +	ret = dma_heap_map_user(buffer, vma);

Since this code is used for the system heap and as the reference:
Currently in Android when a system heap buffer is moving down the
"pipeline", CPU access may occur while there is no device attached.
In the past people have raised concerns that this should perhaps not be
supported, and that at least one device should be attached when CPU access
occurs.

Can we decide, from a dma-buf contract perspective, whether CPU access
without a device attached should be allowed, and make it clear in the dma-buf
documentation either way?

If it should not be allowed we can try to work with Android to see how 
they can change their framework to align with the dma-buf spec.

> +	mutex_unlock(&buffer->lock);
> +
> +	if (ret)
> +		pr_err("%s: failure mapping buffer to userspace\n",
> +		       __func__);
> +
> +	return ret;
> +}
> +
> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
> +{
> +	struct dma_heap_buffer *buffer = dmabuf->priv;
> +
> +	dma_heap_buffer_destroy(buffer);
> +}
> +
> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
> +					unsigned long offset)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	return buffer->vaddr + offset * PAGE_SIZE;
> +}
> +
> +static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
> +					unsigned long offset,
> +					void *ptr)
> +{
> +}
> +
> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +					enum dma_data_direction direction)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	void *vaddr;
> +	struct dma_heaps_attachment *a;
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	vaddr = dma_heap_buffer_kmap_get(heap_buffer);

Since this code is used for the system heap and as the reference:
As has been discussed in the past, there are several disadvantages to
creating a kernel mapping on each call to begin_cpu_access.

The resulting call to alloc_vmap_area is expensive and can hurt client
KPIs.

Allowing userspace clients to create and destroy kernel mappings can
provide opportunities to crash the kernel.

Can we look at removing the creation of a kernel mapping in
begin_cpu_access, and either introduce support for dma_buf_vmap and have
clients use that instead, or perhaps change
the contract for dma_buf_kmap so that it doesn't always need to succeed?
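
For what it's worth, a dma_buf_vmap-based variant could reuse the existing
kmap_get/put helpers almost directly; a minimal sketch (assuming the
vmap/vunmap dma_buf_ops of this kernel version) could be:

static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
{
	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
	void *vaddr;

	mutex_lock(&buffer->lock);
	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
	mutex_unlock(&buffer->lock);

	return IS_ERR(vaddr) ? NULL : vaddr;
}

static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);

	mutex_lock(&buffer->lock);
	dma_heap_buffer_kmap_put(heap_buffer);
	mutex_unlock(&buffer->lock);
}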

> +	if (IS_ERR(vaddr)) {
> +		ret = PTR_ERR(vaddr);
> +		goto unlock;
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	mutex_lock(&buffer->lock);
> +	list_for_each_entry(a, &buffer->attachments, list) {
> +		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> +				    direction);

Since this code is used for the system heap and as the reference:
Not a major issue for newer kernels, but I still don't think it makes sense
to apply cache maintenance when the buffer is not dma mapped; it doesn't
make sense to me from either a logical perspective or a performance
perspective.


> +	}
> +
> +unlock:
> +	mutex_unlock(&buffer->lock);
> +	return ret;
> +}
> +
> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> +				      enum dma_data_direction direction)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	struct dma_heaps_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +	dma_heap_buffer_kmap_put(heap_buffer);
> +	mutex_unlock(&buffer->lock);
> +
> +	mutex_lock(&buffer->lock);
> +	list_for_each_entry(a, &buffer->attachments, list) {
> +		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> +				       direction);

There are use cases in Android in which only small parts of a large
buffer are written to during CPU access.

Applying cache maintenance to the complete buffer results in a lot of
unnecessary cache maintenance that can affect KPIs.

I believe the Android team is wondering if there could be a way to support
partial maintenance, where userspace could describe the buffer changes
they have made.

I think it would be useful to make sure that there is at least a path
forward with the current dma-buf heaps framework to solve this for system
heap allocations.

I can get more details on the specific use cases if required.
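
To make that concrete, the kind of interface being asked about might look
something like the following; this is purely hypothetical and nothing of the
sort exists in this series or in today's DMA_BUF_IOCTL_SYNC:

struct dma_buf_sync_partial {
	__u64 flags;	/* read/write, start/end, as with struct dma_buf_sync */
	__u64 offset;	/* byte offset of the region the CPU touched */
	__u64 len;	/* length of that region */
};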

> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +const struct dma_buf_ops heap_helper_ops = {
> +	.map_dma_buf = dma_heap_map_dma_buf,
> +	.unmap_dma_buf = dma_heap_unmap_dma_buf,
> +	.mmap = dma_heap_mmap,
> +	.release = dma_heap_dma_buf_release,
> +	.attach = dma_heap_attach,
> +	.detach = dma_heap_detatch,
> +	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
> +	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
> +	.map = dma_heap_dma_buf_kmap,
> +	.unmap = dma_heap_dma_buf_kunmap,
> +};
> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
> new file mode 100644
> index 0000000..0bd8643
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps helper code
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _HEAP_HELPERS_H
> +#define _HEAP_HELPERS_H
> +
> +#include <linux/dma-heap.h>
> +#include <linux/list.h>
> +
> +struct heap_helper_buffer {
> +	struct dma_heap_buffer heap_buffer;
> +
> +	unsigned long private_flags;
> +	void *priv_virt;
> +	struct mutex lock;
> +	int kmap_cnt;
> +	void *vaddr;
> +	struct sg_table *sg_table;
> +	struct list_head attachments;
> +
> +	void (*free)(struct heap_helper_buffer *buffer);
> +
> +};
> +
> +#define to_helper_buffer(x) \
> +	container_of(x, struct heap_helper_buffer, heap_buffer)
> +
> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> +				 void (*free)(struct heap_helper_buffer *))
> +{
> +	buffer->private_flags = 0;
> +	buffer->priv_virt = NULL;
> +	mutex_init(&buffer->lock);
> +	buffer->kmap_cnt = 0;
> +	buffer->vaddr = NULL;
> +	buffer->sg_table = NULL;
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	buffer->free = free;
> +}
> +
> +extern const struct dma_buf_ops heap_helper_ops;
> +
> +#endif /* _HEAP_HELPERS_H */
> -- 
> 2.7.4
> 
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
  2019-03-06 16:01   ` Benjamin Gaignard
@ 2019-03-13 20:20   ` Liam Mark
  2019-03-13 22:49     ` John Stultz
  2019-03-15  9:06   ` Christoph Hellwig
  2 siblings, 1 reply; 68+ messages in thread
From: Liam Mark @ 2019-03-13 20:20 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel

On Tue, 5 Mar 2019, John Stultz wrote:

> This patch adds system heap to the dma-buf heaps framework.
> 
> This allows applications to get a page-allocator backed dma-buf
> for non-contiguous memory.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> * Droped dead system-contig code
> ---
>  drivers/dma-buf/Kconfig             |   2 +
>  drivers/dma-buf/heaps/Kconfig       |   6 ++
>  drivers/dma-buf/heaps/Makefile      |   1 +
>  drivers/dma-buf/heaps/system_heap.c | 132 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 141 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/Kconfig
>  create mode 100644 drivers/dma-buf/heaps/system_heap.c
> 
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index 09c61db..63c139d 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS
>  	  this allows userspace to allocate dma-bufs that can be shared between
>  	  drivers.
>  
> +source "drivers/dma-buf/heaps/Kconfig"
> +
>  endmenu
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> new file mode 100644
> index 0000000..2050527
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -0,0 +1,6 @@
> +config DMABUF_HEAPS_SYSTEM
> +	bool "DMA-BUF System Heap"
> +	depends on DMABUF_HEAPS
> +	help
> +	  Choose this option to enable the system dmabuf heap. The system heap
> +	  is backed by pages from the buddy allocator. If in doubt, say Y.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index de49898..d1808ec 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,2 +1,3 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-y					+= heap-helpers.o
> +obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> new file mode 100644
> index 0000000..e001661
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -0,0 +1,132 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF System heap exporter
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <asm/page.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dma-heap.h>
> +#include <linux/err.h>
> +#include <linux/highmem.h>
> +#include <linux/mm.h>
> +#include <linux/scatterlist.h>
> +#include <linux/slab.h>
> +
> +#include "heap-helpers.h"
> +
> +
> +struct system_heap {
> +	struct dma_heap heap;
> +};
> +
> +
> +static void system_heap_free(struct heap_helper_buffer *buffer)
> +{
> +	int i;
> +	struct scatterlist *sg;
> +	struct sg_table *table = buffer->sg_table;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i)
> +		__free_page(sg_page(sg));
> +
> +	sg_free_table(table);
> +	kfree(table);
> +	kfree(buffer);
> +}
> +
> +static int system_heap_allocate(struct dma_heap *heap,
> +				unsigned long len,
> +				unsigned long flags)
> +{
> +	struct heap_helper_buffer *helper_buffer;
> +	struct sg_table *table;
> +	struct scatterlist *sg;
> +	int i, j;
> +	int npages = PAGE_ALIGN(len) / PAGE_SIZE;
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +	struct dma_buf *dmabuf;
> +	int ret = -ENOMEM;
> +
> +	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +	if (!helper_buffer)
> +		return -ENOMEM;
> +
> +	INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free);
> +	helper_buffer->heap_buffer.flags = flags;
> +	helper_buffer->heap_buffer.heap = heap;
> +	helper_buffer->heap_buffer.size = len;
> +
> +	table = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
> +	if (!table)
> +		goto err0;
> +
> +	i = sg_alloc_table(table, npages, GFP_KERNEL);
> +	if (i)
> +		goto err1;
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		struct page *page;
> +
> +		page = alloc_page(GFP_KERNEL);

Need to zero the allocation (add __GFP_ZERO)

> +		if (!page)
> +			goto err2;
> +		sg_set_page(sg, page, PAGE_SIZE, 0);
> +	}
> +

This can always be done later, but it may be helpful to also move this common
code from here (and from the CMA heap) into the heap helpers file: it
reduces duplication and will also make it easier to introduce future debug
features, such as making the dma-buf names unique to help track down the
source of memory leaks. (A rough sketch of such a helper follows below.)
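
For illustration, such a shared helper might look roughly like this (the
function name is an assumption; this is a sketch, not part of the series):

static int dma_heap_export_fd(struct heap_helper_buffer *buffer, size_t len)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
	struct dma_buf *dmabuf;
	int fd;

	exp_info.ops = &heap_helper_ops;
	exp_info.size = len;
	exp_info.flags = O_RDWR;
	exp_info.priv = &buffer->heap_buffer;

	dmabuf = dma_buf_export(&exp_info);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	buffer->heap_buffer.dmabuf = dmabuf;
	fd = dma_buf_fd(dmabuf, O_CLOEXEC);
	if (fd < 0)
		dma_buf_put(dmabuf);	/* release callback frees the buffer */
	return fd;
}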

> +	/* create the dmabuf */
> +	exp_info.ops = &heap_helper_ops;
> +	exp_info.size = len;
> +	exp_info.flags = O_RDWR;
> +	exp_info.priv = &helper_buffer->heap_buffer;
> +	dmabuf = dma_buf_export(&exp_info);
> +	if (IS_ERR(dmabuf)) {
> +		ret = PTR_ERR(dmabuf);
> +		goto err2;
> +	}
> +
> +	helper_buffer->heap_buffer.dmabuf = dmabuf;
> +	helper_buffer->sg_table = table;
> +
> +	ret = dma_buf_fd(dmabuf, O_CLOEXEC);
> +	if (ret < 0) {
> +		dma_buf_put(dmabuf);
> +		/* just return, as put will call release and that will free */
> +		return ret;
> +	}
> +
> +	return ret;
> +
> +err2:
> +	for_each_sg(table->sgl, sg, i, j)
> +		__free_page(sg_page(sg));
> +	sg_free_table(table);
> +err1:
> +	kfree(table);
> +err0:
> +	kfree(helper_buffer);
> +	return -ENOMEM;
> +}
> +
> +
> +static struct dma_heap_ops system_heap_ops = {
> +	.allocate = system_heap_allocate,
> +};
> +
> +static int system_heap_create(void)
> +{
> +	struct system_heap *sys_heap;
> +
> +	sys_heap = kzalloc(sizeof(*sys_heap), GFP_KERNEL);
> +	if (!sys_heap)
> +		return -ENOMEM;
> +	sys_heap->heap.name = "system_heap";
> +	sys_heap->heap.ops = &system_heap_ops;
> +
> +	dma_heap_add(&sys_heap->heap);
> +
> +	return 0;
> +}
> +device_initcall(system_heap_create);
> -- 
> 2.7.4
> 
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-05 20:54 ` [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test John Stultz
  2019-03-06 16:14   ` Benjamin Gaignard
@ 2019-03-13 20:23   ` Liam Mark
  1 sibling, 0 replies; 68+ messages in thread
From: Liam Mark @ 2019-03-13 20:23 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel

On Tue, 5 Mar 2019, John Stultz wrote:

> Add very trivial allocation test for dma-heaps.
> 
> TODO: Need to actually do some validation on
> the returned dma-buf.
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2: Switched to use reworked dma-heap apis
> ---
>  tools/testing/selftests/dmabuf-heaps/Makefile      | 11 +++
>  tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c | 96 ++++++++++++++++++++++
>  2 files changed, 107 insertions(+)
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> 
> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
> new file mode 100644
> index 0000000..c414ad3
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
> +#LDLIBS += -lrt -lpthread -lm
> +
> +# these are all "safe" tests that don't modify
> +# system time or require escalated privileges
> +TEST_GEN_PROGS = dmabuf-heap
> +
> +
> +include ../lib.mk
> +
> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> new file mode 100644
> index 0000000..06837a4
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> @@ -0,0 +1,96 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <dirent.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/types.h>
> +
> +#include "../../../../include/uapi/linux/dma-heap.h"
> +
> +#define DEVPATH "/dev/dma_heap"
> +
> +int dmabuf_heap_open(char *name)
> +{
> +	int ret, fd;
> +	char buf[256];
> +
> +	ret = sprintf(buf, "%s/%s", DEVPATH, name);
> +	if (ret < 0) {
> +		printf("sprintf failed!\n");
> +		return ret;
> +	}
> +
> +	fd = open(buf, O_RDWR);
> +	if (fd < 0)
> +		printf("open %s failed!\n", buf);
> +	return fd;
> +}
> +
> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
> +{
> +	struct dma_heap_allocation_data data = {
> +		.len = len,
> +		.flags = flags,
> +	};
> +	int ret;
> +
> +	if (dmabuf_fd == NULL)
> +		return -EINVAL;
> +
> +	ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
> +	if (ret < 0)
> +		return ret;
> +	*dmabuf_fd = (int)data.fd;
> +	return ret;
> +}
> +
> +#define ONE_MEG (1024*1024)
> +
> +void do_test(char *heap_name)
> +{
> +	int heap_fd = -1, dmabuf_fd = -1;
> +	int ret;
> +
> +	printf("Testing heap: %s\n", heap_name);
> +
> +	heap_fd = dmabuf_heap_open(heap_name);
> +	if (heap_fd < 0)
> +		return;
> +
> +	printf("Allocating 1 MEG\n");
> +	ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);

Just be aware that some CMA heaps may already have all their memory
allocated by a client, so you may see intermittent failures depending on
when you run the test, and failures may
be more common when run on certain platforms.

> +	if (ret)
> +		goto out;
> +
> +	/* DO SOMETHING WITH THE DMABUF HERE? */
> +
> +out:
> +	if (dmabuf_fd >= 0)
> +		close(dmabuf_fd);
> +	if (heap_fd >= 0)
> +		close(heap_fd);
> +}
> +
> +
> +int main(void)
> +{
> +	DIR *d;
> +	struct dirent *dir;
> +
> +	d = opendir(DEVPATH);
> +	if (!d) {
> +		printf("No %s directory?\n", DEVPATH);
> +		return -1;
> +	}
> +
> +	while ((dir = readdir(d)) != NULL)
> +		do_test(dir->d_name);
> +
> +
> +	return 0;
> +}
> -- 
> 2.7.4
> 
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-13 20:18   ` Liam Mark
@ 2019-03-13 21:48     ` Andrew F. Davis
  2019-03-13 22:57       ` Liam Mark
  0 siblings, 1 reply; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-13 21:48 UTC (permalink / raw)
  To: Liam Mark, John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Chenbo Feng, Alistair Strachan, dri-devel

On 3/13/19 3:18 PM, Liam Mark wrote:
> On Tue, 5 Mar 2019, John Stultz wrote:
> 
>> Add generic helper dmabuf ops for dma heaps, so we can reduce
>> the amount of duplicative code for the exported dmabufs.
>>
>> This code is an evolution of the Android ION implementation, so
>> thanks to its original authors and maintainers:
>>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
>>
>> Cc: Laura Abbott <labbott@redhat.com>
>> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
>> Cc: Greg KH <gregkh@linuxfoundation.org>
>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>> Cc: Liam Mark <lmark@codeaurora.org>
>> Cc: Brian Starkey <Brian.Starkey@arm.com>
>> Cc: Andrew F. Davis <afd@ti.com>
>> Cc: Chenbo Feng <fengc@google.com>
>> Cc: Alistair Strachan <astrachan@google.com>
>> Cc: dri-devel@lists.freedesktop.org
>> Signed-off-by: John Stultz <john.stultz@linaro.org>
>> ---
>> v2:
>> * Removed cache management performance hack that I had
>>   accidentally folded in.
>> * Removed stats code that was in helpers
>> * Lots of checkpatch cleanups
>> ---
>>  drivers/dma-buf/Makefile             |   1 +
>>  drivers/dma-buf/heaps/Makefile       |   2 +
>>  drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
>>  drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
>>  4 files changed, 386 insertions(+)
>>  create mode 100644 drivers/dma-buf/heaps/Makefile
>>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>>
>> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
>> index b0332f1..09c2f2d 100644
>> --- a/drivers/dma-buf/Makefile
>> +++ b/drivers/dma-buf/Makefile
>> @@ -1,4 +1,5 @@
>>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
>> +obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
>>  obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
>>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
>> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
>> new file mode 100644
>> index 0000000..de49898
>> --- /dev/null
>> +++ b/drivers/dma-buf/heaps/Makefile
>> @@ -0,0 +1,2 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +obj-y					+= heap-helpers.o
>> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
>> new file mode 100644
>> index 0000000..ae5e9d0
>> --- /dev/null
>> +++ b/drivers/dma-buf/heaps/heap-helpers.c
>> @@ -0,0 +1,335 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +#include <linux/device.h>
>> +#include <linux/dma-buf.h>
>> +#include <linux/err.h>
>> +#include <linux/idr.h>
>> +#include <linux/list.h>
>> +#include <linux/slab.h>
>> +#include <linux/uaccess.h>
>> +#include <uapi/linux/dma-heap.h>
>> +
>> +#include "heap-helpers.h"
>> +
>> +
>> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
>> +{
>> +	struct scatterlist *sg;
>> +	int i, j;
>> +	void *vaddr;
>> +	pgprot_t pgprot;
>> +	struct sg_table *table = buffer->sg_table;
>> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
>> +	struct page **pages = vmalloc(array_size(npages,
>> +						 sizeof(struct page *)));
>> +	struct page **tmp = pages;
>> +
>> +	if (!pages)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	pgprot = PAGE_KERNEL;
>> +
>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
>> +		struct page *page = sg_page(sg);
>> +
>> +		WARN_ON(i >= npages);
>> +		for (j = 0; j < npages_this_entry; j++)
>> +			*(tmp++) = page++;
>> +	}
>> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
>> +	vfree(pages);
>> +
>> +	if (!vaddr)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	return vaddr;
>> +}
>> +
>> +static int dma_heap_map_user(struct heap_helper_buffer *buffer,
>> +			 struct vm_area_struct *vma)
>> +{
>> +	struct sg_table *table = buffer->sg_table;
>> +	unsigned long addr = vma->vm_start;
>> +	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
>> +	struct scatterlist *sg;
>> +	int i;
>> +	int ret;
>> +
>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>> +		struct page *page = sg_page(sg);
>> +		unsigned long remainder = vma->vm_end - addr;
>> +		unsigned long len = sg->length;
>> +
>> +		if (offset >= sg->length) {
>> +			offset -= sg->length;
>> +			continue;
>> +		} else if (offset) {
>> +			page += offset / PAGE_SIZE;
>> +			len = sg->length - offset;
>> +			offset = 0;
>> +		}
>> +		len = min(len, remainder);
>> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
>> +				      vma->vm_page_prot);
>> +		if (ret)
>> +			return ret;
>> +		addr += len;
>> +		if (addr >= vma->vm_end)
>> +			return 0;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +
>> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
>> +{
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	if (buffer->kmap_cnt > 0) {
>> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
>> +			     __func__);
>> +		vunmap(buffer->vaddr);
>> +	}
>> +
>> +	buffer->free(buffer);
>> +}
>> +
>> +static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
>> +{
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +	void *vaddr;
>> +
>> +	if (buffer->kmap_cnt) {
>> +		buffer->kmap_cnt++;
>> +		return buffer->vaddr;
>> +	}
>> +	vaddr = dma_heap_map_kernel(buffer);
>> +	if (WARN_ONCE(!vaddr,
>> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
>> +		return ERR_PTR(-EINVAL);
>> +	if (IS_ERR(vaddr))
>> +		return vaddr;
>> +	buffer->vaddr = vaddr;
>> +	buffer->kmap_cnt++;
>> +	return vaddr;
>> +}
>> +
>> +static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
>> +{
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	buffer->kmap_cnt--;
>> +	if (!buffer->kmap_cnt) {
>> +		vunmap(buffer->vaddr);
>> +		buffer->vaddr = NULL;
>> +	}
>> +}
>> +
>> +static struct sg_table *dup_sg_table(struct sg_table *table)
>> +{
>> +	struct sg_table *new_table;
>> +	int ret, i;
>> +	struct scatterlist *sg, *new_sg;
>> +
>> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
>> +	if (!new_table)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
>> +	if (ret) {
>> +		kfree(new_table);
>> +		return ERR_PTR(-ENOMEM);
>> +	}
>> +
>> +	new_sg = new_table->sgl;
>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>> +		memcpy(new_sg, sg, sizeof(*sg));
>> +		new_sg->dma_address = 0;
>> +		new_sg = sg_next(new_sg);
>> +	}
>> +
>> +	return new_table;
>> +}
>> +
>> +static void free_duped_table(struct sg_table *table)
>> +{
>> +	sg_free_table(table);
>> +	kfree(table);
>> +}
>> +
>> +struct dma_heaps_attachment {
>> +	struct device *dev;
>> +	struct sg_table *table;
>> +	struct list_head list;
>> +};
>> +
>> +static int dma_heap_attach(struct dma_buf *dmabuf,
>> +			      struct dma_buf_attachment *attachment)
>> +{
>> +	struct dma_heaps_attachment *a;
>> +	struct sg_table *table;
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
>> +	if (!a)
>> +		return -ENOMEM;
>> +
>> +	table = dup_sg_table(buffer->sg_table);
>> +	if (IS_ERR(table)) {
>> +		kfree(a);
>> +		return -ENOMEM;
>> +	}
>> +
>> +	a->table = table;
>> +	a->dev = attachment->dev;
>> +	INIT_LIST_HEAD(&a->list);
>> +
>> +	attachment->priv = a;
>> +
>> +	mutex_lock(&buffer->lock);
>> +	list_add(&a->list, &buffer->attachments);
>> +	mutex_unlock(&buffer->lock);
>> +
>> +	return 0;
>> +}
>> +
>> +static void dma_heap_detatch(struct dma_buf *dmabuf,
>> +				struct dma_buf_attachment *attachment)
>> +{
>> +	struct dma_heaps_attachment *a = attachment->priv;
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	mutex_lock(&buffer->lock);
>> +	list_del(&a->list);
>> +	mutex_unlock(&buffer->lock);
>> +	free_duped_table(a->table);
>> +
>> +	kfree(a);
>> +}
>> +
>> +static struct sg_table *dma_heap_map_dma_buf(
>> +					struct dma_buf_attachment *attachment,
>> +					enum dma_data_direction direction)
>> +{
>> +	struct dma_heaps_attachment *a = attachment->priv;
>> +	struct sg_table *table;
>> +
>> +	table = a->table;
>> +
>> +	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
>> +			direction))
> 
> Since this code is used for the system heap and as the reference:
> In multimedia use cases very large buffers can be allocated from the
> system heap, and since system heap allocations have a cached kernel
> mapping it has been difficult to support uncached allocations, so clients
> will likely allocate them as cached allocations.
> Most access to these buffers will occur from non-IO-coherent devices,
> however in frameworks such as Android these buffers will be dma mapped
> and dma unmapped frequently, for every frame and for each device in the
> "buffer pipeline", which leads to a lot of unnecessary cache maintenance
> in the dma map and dma unmap calls.
> From previous discussions it doesn't seem like this could be optimized
> by only making changes to the dma-buf heaps framework.
> 
> So I think it would be helpful to try and agree on what types of changes 
> would be required to the Android framework and possibly the dma-buf heaps 
> framework to resolve this.
> Example
> 
> - Have Android framework keep buffers dma mapped for the whole use case
> 
> - Or perhaps have Android keep required devices attached to the buffer as 
> they are "pipelined" so that cache maintenance can be skipped in dma map 
> and dma unmap but reliably applied in begin/end_cpu_access.
> 

I don't have a strong opinion on the solution here for Android, but from
the kernel/hardware side I can see we don't have a good ownership model
for dma-buf, and that looks to be causing your unneeded cache ops.

Let me give my vision for the model you request below.


>> +		table = ERR_PTR(-ENOMEM);
>> +	return table;
>> +}
>> +
>> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
>> +			      struct sg_table *table,
>> +			      enum dma_data_direction direction)
>> +{
>> +	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
>> +}
>> +
>> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
>> +{
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +	int ret = 0;
>> +
>> +	mutex_lock(&buffer->lock);
>> +	/* now map it to userspace */
>> +	ret = dma_heap_map_user(buffer, vma);
> 
> Since this code is used for the system heap and as the reference:
> Currently in Android when a system heap buffer is moving down the 
> "pipeline" CPU access may occur and there may be no device attached.
> In the past people have raised concerns that this should perhaps not be 
> supported and that at least one device should be attached when CPU access 
> occurs.
> 
> Can we decide, from a dma-buf contract perspective, on whether CPU access 
> without a device attached should be allowed and make it clear in the dma-buf 
> documentation either way?
> 
> If it should not be allowed we can try to work with Android to see how 
> they can change their framework to align with the dma-buf spec.
> 

As you say, right now DMA-BUF does not really provide guidelines for
"well behaved" heaps, and as such we all end up making different
assumptions. Although I don't think the model below should be enforced
for all DMA-BUFs, it should be used for our first set of reference heaps.

So from the start, let's look at DMA-BUF buffer lifetimes. A buffer
begins by being allocated; at this point the backing resources (usually
some pages in physical CPU-addressable RAM) have *not* been allocated.
This is a core feature of DMA-BUF, allowing smart allocation at map
time. At this point we should consider the buffer to be in the state
"non-owned, non-backed".

Next we have three valid actions:
 * CPU use
 * device attachments
 * free

CPU use starts with a call to begin_cpu_access(); the buffer then
becomes CPU owned and its backing storage is pinned. This last part can
cause begin_cpu_access() to fail for buffer types that only pick their
backing resource based on the set of attached devices, of which at this
point we have none. For heaps like the "system" heap this should not be
a problem, as the backing storage is already selected at allocation time.

Device attachments and mapping should behave as they do now. After
mapping, the buffer is owned by the attached devices and backed. The
ownership can be transferred to the CPU and back as usual with
{begin,end}_cpu_access().

Free right after allocation without anyone taking ownership is trivial.

Now here is where things may get interesting; we can now be in one of
the following states:

CPU owned, backed
Device owned, backed

What should we do when CPU owned and end_cpu_access() is called, or when
device owned and all devices detach? What state do we go into? My
opinion would be to return to "non-owned, non-backed", which means the
backing resource can be freed. The other option is to leave it backed. I
believe this is what Android expects right now, as it returns buffers to
a "BufferQueue" to later be dequeued and used again in the same way
without time-consuming reallocations.

For "system" heaps the result is the same outside of the cache ops, if
we maintain a buffer state like the above we can always know when a
cache op or similar is needed (only when transitioning owners, not on
every map/unmap). For more complex heaps we can do something similar
when transitioning from backed to non-backed (and vice-versa) such as
migrating the backing data to a more appropriate resource given the new
set of attaching/mapping devices.

A more complete state transition map will probably be needed to fill out
what should be allowed and what to do when, and I agree it will be good
to have that for this first set of reference heaps. But all of this can
be done now with DMA-heaps, so I don't want to tie up the core
de-staging too much on getting every possible heap to behave just right
either.
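
Purely as an illustration of the state model above (this is not part of
the patchset), buffer ownership could be tracked roughly along these
lines; the state field and the two sync helpers are hypothetical names:

	enum heap_buffer_state {
		HEAP_BUF_UNBACKED,	/* allocated, backing not pinned */
		HEAP_BUF_CPU_OWNED,	/* CPU owns it, backing pinned */
		HEAP_BUF_DEV_OWNED,	/* devices own it, backing pinned */
	};

	/* Cache maintenance only on ownership transitions, not per map/unmap */
	static void heap_buffer_set_owner(struct heap_helper_buffer *buffer,
					  enum heap_buffer_state new_state)
	{
		if (buffer->state == new_state)
			return;

		if (buffer->state == HEAP_BUF_DEV_OWNED &&
		    new_state == HEAP_BUF_CPU_OWNED)
			heap_buffer_sync_for_cpu(buffer);	/* hypothetical */
		else if (buffer->state == HEAP_BUF_CPU_OWNED &&
			 new_state == HEAP_BUF_DEV_OWNED)
			heap_buffer_sync_for_device(buffer);	/* hypothetical */

		buffer->state = new_state;
	}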

>> +	mutex_unlock(&buffer->lock);
>> +
>> +	if (ret)
>> +		pr_err("%s: failure mapping buffer to userspace\n",
>> +		       __func__);
>> +
>> +	return ret;
>> +}
>> +
>> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
>> +{
>> +	struct dma_heap_buffer *buffer = dmabuf->priv;
>> +
>> +	dma_heap_buffer_destroy(buffer);
>> +}
>> +
>> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
>> +					unsigned long offset)
>> +{
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	return buffer->vaddr + offset * PAGE_SIZE;
>> +}
>> +
>> +static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
>> +					unsigned long offset,
>> +					void *ptr)
>> +{
>> +}
>> +
>> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>> +					enum dma_data_direction direction)
>> +{
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +	void *vaddr;
>> +	struct dma_heaps_attachment *a;
>> +	int ret = 0;
>> +
>> +	mutex_lock(&buffer->lock);
>> +	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
> 
> Since this code is used for the system heap and as the reference:
> As has been discussed in the past there are several disadvantages to 
> creating a kernel mapping on each call to begin_cpu_access.
> 
> The resulting call to alloc_vmap_area is expensive and can hurt client 
> KPIs
> 
> Allowing userspace clients to create and destroy kernel mappings can 
> provide opportunities to crash the kernel.
> 
> Can we look at removing the creation of a kernel mapping in 
> begin_cpu_access and either introduce support for dma_buf_vmap and have 
> clients use that instead or perhaps change 
> the contract for dma_buf_kmap so that it doesn't always need to succeed?
> 

Agree, for this we should just fail (or succeed but perform no action?)
if there are no active mappings (neither in kernel (kmap, vmap) nor in
userspace (mmap)); forcing a new mapping just to keep
dma_sync_sg_for_cpu() happy is not the correct thing to do here.
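
As a rough sketch of the "succeed but perform no action" option
(illustrative only; mmap_cnt is a hypothetical count of live userspace
mappings that the helpers don't currently track):

	static int sketch_begin_cpu_access(struct dma_buf *dmabuf,
					   enum dma_data_direction dir)
	{
		struct dma_heap_buffer *heap_buffer = dmabuf->priv;
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
		struct dma_heaps_attachment *a;

		mutex_lock(&buffer->lock);
		/* No vmap is forced here; sync only if the CPU can see the data */
		if (buffer->kmap_cnt || buffer->mmap_cnt) {
			list_for_each_entry(a, &buffer->attachments, list)
				dma_sync_sg_for_cpu(a->dev, a->table->sgl,
						    a->table->nents, dir);
		}
		mutex_unlock(&buffer->lock);
		return 0;
	}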

>> +	if (IS_ERR(vaddr)) {
>> +		ret = PTR_ERR(vaddr);
>> +		goto unlock;
>> +	}
>> +	mutex_unlock(&buffer->lock);
>> +
>> +	mutex_lock(&buffer->lock);
>> +	list_for_each_entry(a, &buffer->attachments, list) {
>> +		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
>> +				    direction);
> 
> Since this code is used for the system heap and as the reference:
> Not a major issue for newer kernels, but I still don't think it makes 
> sense to apply cache maintenance when the buffer is not dma mapped; it 
> doesn't make sense to me from either a logical perspective or a 
> performance perspective.
> 

Agree again, this is only needed when we are transitioning out of the
"device owned" state, which we will not be in if we have no active
attached/mapped devices.

> 
>> +	}
>> +
>> +unlock:
>> +	mutex_unlock(&buffer->lock);
>> +	return ret;
>> +}
>> +
>> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>> +				      enum dma_data_direction direction)
>> +{
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +	struct dma_heaps_attachment *a;
>> +
>> +	mutex_lock(&buffer->lock);
>> +	dma_heap_buffer_kmap_put(heap_buffer);
>> +	mutex_unlock(&buffer->lock);
>> +
>> +	mutex_lock(&buffer->lock);
>> +	list_for_each_entry(a, &buffer->attachments, list) {
>> +		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
>> +				       direction);
> 
> There are use cases in Android which result in only small parts of a large 
> buffer being written to during CPU access.
> 

We have the same requirements for some OpenCL use-cases
(clCreateSubBuffer), which allows chunks of a larger buffer to have
different properties and therefore cache operations different from the
larger parent buffer.

> Applying cache maintenance to the complete buffer results in a lot of 
> unnecessary cache maintenance that can affect KPIs.
> 
> I believe the Android team is wondering if there could be a way to support 
> partial maintenance where userspace could describe the buffer changes 
> they have made.
> 

The issue I have with that is it would require userspace to know the
cache line size of all involved devices. Otherwise you would have users
writing to the first 64B and then invalidating to read the second 64B
written by a DMA device outside the cache; in a system with 128B
cache-lines you will always lose some data here.
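
To make that hazard concrete, here is a toy sketch (assuming a 128B line
size; the constant and helper name are made up) of the rounding any
"partial" sync request would be forced to do:

	#define EXAMPLE_CACHE_LINE	128	/* assumed for illustration */

	/*
	 * A request to sync bytes [64, 128) still operates on the whole
	 * 128B line, dragging the CPU's write to bytes [0, 64) with it.
	 */
	static void partial_sync_bounds(unsigned long start, unsigned long len,
					unsigned long *out_start,
					unsigned long *out_len)
	{
		*out_start = round_down(start, EXAMPLE_CACHE_LINE);
		*out_len = round_up(start + len, EXAMPLE_CACHE_LINE) - *out_start;
	}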

> I think it would be useful to make sure that there is at least a path 
> forward with the current dma-buf heaps framework to solve this for system 
> heap allocations.
> 

DMA-BUF sync operations always act on the full buffer; changing this
would require changes to the core DMA-BUF framework (so outside the
scope of this set) :)

Andrew

> I can get more details on the specific use cases if required.
> 
>> +	}
>> +	mutex_unlock(&buffer->lock);
>> +
>> +	return 0;
>> +}
>> +
>> +const struct dma_buf_ops heap_helper_ops = {
>> +	.map_dma_buf = dma_heap_map_dma_buf,
>> +	.unmap_dma_buf = dma_heap_unmap_dma_buf,
>> +	.mmap = dma_heap_mmap,
>> +	.release = dma_heap_dma_buf_release,
>> +	.attach = dma_heap_attach,
>> +	.detach = dma_heap_detatch,
>> +	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
>> +	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
>> +	.map = dma_heap_dma_buf_kmap,
>> +	.unmap = dma_heap_dma_buf_kunmap,
>> +};
>> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
>> new file mode 100644
>> index 0000000..0bd8643
>> --- /dev/null
>> +++ b/drivers/dma-buf/heaps/heap-helpers.h
>> @@ -0,0 +1,48 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * DMABUF Heaps helper code
>> + *
>> + * Copyright (C) 2011 Google, Inc.
>> + * Copyright (C) 2019 Linaro Ltd.
>> + */
>> +
>> +#ifndef _HEAP_HELPERS_H
>> +#define _HEAP_HELPERS_H
>> +
>> +#include <linux/dma-heap.h>
>> +#include <linux/list.h>
>> +
>> +struct heap_helper_buffer {
>> +	struct dma_heap_buffer heap_buffer;
>> +
>> +	unsigned long private_flags;
>> +	void *priv_virt;
>> +	struct mutex lock;
>> +	int kmap_cnt;
>> +	void *vaddr;
>> +	struct sg_table *sg_table;
>> +	struct list_head attachments;
>> +
>> +	void (*free)(struct heap_helper_buffer *buffer);
>> +
>> +};
>> +
>> +#define to_helper_buffer(x) \
>> +	container_of(x, struct heap_helper_buffer, heap_buffer)
>> +
>> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
>> +				 void (*free)(struct heap_helper_buffer *))
>> +{
>> +	buffer->private_flags = 0;
>> +	buffer->priv_virt = NULL;
>> +	mutex_init(&buffer->lock);
>> +	buffer->kmap_cnt = 0;
>> +	buffer->vaddr = NULL;
>> +	buffer->sg_table = NULL;
>> +	INIT_LIST_HEAD(&buffer->attachments);
>> +	buffer->free = free;
>> +}
>> +
>> +extern const struct dma_buf_ops heap_helper_ops;
>> +
>> +#endif /* _HEAP_HELPERS_H */
>> -- 
>> 2.7.4
>>
>>
> 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-13 20:11 ` [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) Liam Mark
@ 2019-03-13 22:30   ` John Stultz
  2019-03-13 23:29     ` Liam Mark
  2019-03-19 16:54     ` Benjamin Gaignard
  0 siblings, 2 replies; 68+ messages in thread
From: John Stultz @ 2019-03-13 22:30 UTC (permalink / raw)
  To: Liam Mark
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, Vincent Donnefort, Marissa Wall

On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> On Tue, 5 Mar 2019, John Stultz wrote:
> >
> > Eventual TODOS:
> > * Reimplement page-pool for system heap (working on this)
> > * Add stats accounting to system/cma heaps
> > * Make the kselftest actually useful
> > * Add other heaps folks see as useful (would love to get
> >   some help from actual carveout/chunk users)!
>
> We use a modified carveout heap for certain secure use cases.

Cool! It would be great to see if you have any concerns about adding
such a secure-carveout heap to this framework. I suspect it would be
fairly similar to how it's integrated into ION, but particularly I'd be
interested in issues around the lack of private flags and other
allocation arguments like alignment.

> Although there would probably be some benefit in discussing how the dma-buf
> heap framework may want to support secure heaps in the future, it is a
> large topic which I assume you don't want to tackle now.

So I suspect others (Benjamin?) would have a more informed opinion on
the details, but the intent is to allow secure heap implementations.
I'm not sure what areas of concern you have for this allocation
framework in particular?

> We don't have any non-secure carveout heap use cases, but the client use
> cases I have seen usually revolve around wanting large allocations to
> succeed very quickly.
> For example, I have seen camera use cases which do very large allocations
> on camera bootup; these allocations would come from the carveout heap and
> fall back to the system heap when the carveout heap was full.
> Actual non-secure carveout heap users can perhaps provide more detail.

Yea, I'm aware that folks still see carveout as preferable to CMA due
to more consistent/predictable allocation latency.  I think we still
have the issue that we don't have bindings to establish/configure
carveout regions w/ dts, and I'm not really wanting to hold up the
allocation API on that issue.


> Since we are making some fundamental changes to how ION worked, and since
> Android is also likely to be the largest user of the dma-buf heaps
> framework, I think it would be good to have a path to resolve the issues
> which are currently preventing commercial Android releases from moving to
> the upstream version of ION.

Yea, I do see solving the cache management efficiency issues as
critical for the dmabuf heaps to be actually usable (my previous
version of this patchset accidentally had my hacks to improve
performance rolled in!).  And there are discussions going on in
various channels to try to figure out how to either change Android to
use dma-bufs more in line with how upstream expects, or what more
generic dma-buf changes we may need to allow Android to use dmabufs
with the expected performance they need.

> I can understand if you don't necessarily want to put all/any of these
> changes into the dma-buf heaps framework as part of this series, but my
> hope is we can get
> the upstream community and the Android framework team to agree on what
> upstreamable changes to dma-buf heaps framework, and/or the Android
> framework, would be required in order for Android to move to the upstream
> dma-buf heaps framework for commercial devices.

Yes. Though I also don't want to get the bigger dma-buf usage
discussion (which really affects all dmabuf exporters) too tied up
with this patch set's attempt to provide a usable allocation interface.
Part of the problem that I think we've seen with ION is that there is
a nest of related issues, and the entire thing is just too big to
address at once, which I think is part of why ION has sat in staging
for so long. This patchset just tries to provide a dmabuf allocation
interface, and a few example exporter heap types.

> I don't mean to make this specific to Android, but my assumption is that
> many of the ION/dma-buf heaps issues which affect Android would likely
> affect other new large users of the dma-buf heaps framework, so if we
> resolve it for Android we would be helping these future users as well.
> And I do understand that some of the issues facing Android may need to be
> resolved by making changes to the Android framework.

While true, I also think some of the assumptions in how the dma-bufs
are used (pre-attachment of all devices, etc) are maybe not so
realistic given how Android is using them.  I do want to explore if
Android can change how they use dma-bufs, but I also worry that we
need to think about how we could loosen the expectations for dma-bufs,
as well as trying to figure out how to support things folks have
brought up like partial cache maintenance.

> I think it would be helpful to try and get as much of this agreed upon as
> possible before the dma-buf heaps framework moves out of staging.
>
> As part of my review I will highlight some of the issues which would
> affect Android.
> In my comments I will apply them to the system heap since that is what
> Android currently uses for a lot of its use cases.
> I realize that this new framework provides more flexibility to heaps, so
> perhaps some of these issues can be solved by creating a new type of
> system heap which Android can use, but even if the solution involves
> creating a new system heap I would like to make sure that this "new"
> system heap is upstreamable.

So yea, I do realize I'm dodging the hard problem here, but I think
the cache-management/usage issue is far more generic.

You're right that this implementation gives a lot of flexibility to the
exporter heaps in how they implement the dmabuf ops (just like how
other device drivers that are dmabuf exporters have the same
flexibility), but I very much agree we don't want to add a system and
then later a "system-android" heap. So yea, a reasonable amount of
caution is warranted here.

Thanks so much for the review and feedback! I'll try to address things
as I can as I'm traveling this week (so I may be a bit spotty).

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-13 20:20   ` Liam Mark
@ 2019-03-13 22:49     ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-13 22:49 UTC (permalink / raw)
  To: Liam Mark
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, Vincent Donnefort, Marissa Wall

On Wed, Mar 13, 2019 at 1:20 PM Liam Mark <lmark@codeaurora.org> wrote:
> On Tue, 5 Mar 2019, John Stultz wrote:
> > +
> > +             page = alloc_page(GFP_KERNEL);
>
> Need to zero the allocation (add __GFP_ZERO)

Ah! Thanks! Fixed now.
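
For anyone following the thread, the fix presumably just adds the zeroing
flag to the allocation, roughly:

		page = alloc_page(GFP_KERNEL | __GFP_ZERO);

so pages handed out by the heap don't expose stale kernel data to userspace.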

> > +             if (!page)
> > +                     goto err2;
> > +             sg_set_page(sg, page, PAGE_SIZE, 0);
> > +     }
> > +
>
> Can always be done later, but it may be helpful to also move this common
> code from here (and from the cma heap) to the heap helpers file as it
> reduces code but will also make it easier to introduce future debug
> features such as making the dma buf names unique
> to help make it easier to track down the source of memory leaks.

I think this is a good suggestion, but I do want to be careful to try
to make sure we add debugging tools to the larger dmabuf
infrastructure, rather than just the heap code (though having some
heap-specific usage info would still be good). I think there's a
separate patchset to dmabufs, originally from Greg Hackmann, that
provides names and is supposed to help with what you're suggesting.
  https://lists.freedesktop.org/archives/dri-devel/2019-February/208759.html

thanks!
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-13 21:48     ` Andrew F. Davis
@ 2019-03-13 22:57       ` Liam Mark
  2019-03-13 23:42         ` Andrew F. Davis
  0 siblings, 1 reply; 68+ messages in thread
From: Liam Mark @ 2019-03-13 22:57 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: John Stultz, lkml, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On Wed, 13 Mar 2019, Andrew F. Davis wrote:

> On 3/13/19 3:18 PM, Liam Mark wrote:
> > On Tue, 5 Mar 2019, John Stultz wrote:
> > 
> >> Add generic helper dmabuf ops for dma heaps, so we can reduce
> >> the amount of duplicative code for the exported dmabufs.
> >>
> >> This code is an evolution of the Android ION implementation, so
> >> thanks to its original authors and maintainters:
> >>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> >>
> >> Cc: Laura Abbott <labbott@redhat.com>
> >> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> >> Cc: Greg KH <gregkh@linuxfoundation.org>
> >> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> >> Cc: Liam Mark <lmark@codeaurora.org>
> >> Cc: Brian Starkey <Brian.Starkey@arm.com>
> >> Cc: Andrew F. Davis <afd@ti.com>
> >> Cc: Chenbo Feng <fengc@google.com>
> >> Cc: Alistair Strachan <astrachan@google.com>
> >> Cc: dri-devel@lists.freedesktop.org
> >> Signed-off-by: John Stultz <john.stultz@linaro.org>
> >> ---
> >> v2:
> >> * Removed cache management performance hack that I had
> >>   accidentally folded in.
> >> * Removed stats code that was in helpers
> >> * Lots of checkpatch cleanups
> >> ---
> >>  drivers/dma-buf/Makefile             |   1 +
> >>  drivers/dma-buf/heaps/Makefile       |   2 +
> >>  drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
> >>  drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
> >>  4 files changed, 386 insertions(+)
> >>  create mode 100644 drivers/dma-buf/heaps/Makefile
> >>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
> >>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
> >>
> >> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> >> index b0332f1..09c2f2d 100644
> >> --- a/drivers/dma-buf/Makefile
> >> +++ b/drivers/dma-buf/Makefile
> >> @@ -1,4 +1,5 @@
> >>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
> >> +obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
> >>  obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
> >>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
> >>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
> >> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> >> new file mode 100644
> >> index 0000000..de49898
> >> --- /dev/null
> >> +++ b/drivers/dma-buf/heaps/Makefile
> >> @@ -0,0 +1,2 @@
> >> +# SPDX-License-Identifier: GPL-2.0
> >> +obj-y					+= heap-helpers.o
> >> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
> >> new file mode 100644
> >> index 0000000..ae5e9d0
> >> --- /dev/null
> >> +++ b/drivers/dma-buf/heaps/heap-helpers.c
> >> @@ -0,0 +1,335 @@
> >> +// SPDX-License-Identifier: GPL-2.0
> >> +#include <linux/device.h>
> >> +#include <linux/dma-buf.h>
> >> +#include <linux/err.h>
> >> +#include <linux/idr.h>
> >> +#include <linux/list.h>
> >> +#include <linux/slab.h>
> >> +#include <linux/uaccess.h>
> >> +#include <uapi/linux/dma-heap.h>
> >> +
> >> +#include "heap-helpers.h"
> >> +
> >> +
> >> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> >> +{
> >> +	struct scatterlist *sg;
> >> +	int i, j;
> >> +	void *vaddr;
> >> +	pgprot_t pgprot;
> >> +	struct sg_table *table = buffer->sg_table;
> >> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
> >> +	struct page **pages = vmalloc(array_size(npages,
> >> +						 sizeof(struct page *)));
> >> +	struct page **tmp = pages;
> >> +
> >> +	if (!pages)
> >> +		return ERR_PTR(-ENOMEM);
> >> +
> >> +	pgprot = PAGE_KERNEL;
> >> +
> >> +	for_each_sg(table->sgl, sg, table->nents, i) {
> >> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
> >> +		struct page *page = sg_page(sg);
> >> +
> >> +		WARN_ON(i >= npages);
> >> +		for (j = 0; j < npages_this_entry; j++)
> >> +			*(tmp++) = page++;
> >> +	}
> >> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
> >> +	vfree(pages);
> >> +
> >> +	if (!vaddr)
> >> +		return ERR_PTR(-ENOMEM);
> >> +
> >> +	return vaddr;
> >> +}
> >> +
> >> +static int dma_heap_map_user(struct heap_helper_buffer *buffer,
> >> +			 struct vm_area_struct *vma)
> >> +{
> >> +	struct sg_table *table = buffer->sg_table;
> >> +	unsigned long addr = vma->vm_start;
> >> +	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
> >> +	struct scatterlist *sg;
> >> +	int i;
> >> +	int ret;
> >> +
> >> +	for_each_sg(table->sgl, sg, table->nents, i) {
> >> +		struct page *page = sg_page(sg);
> >> +		unsigned long remainder = vma->vm_end - addr;
> >> +		unsigned long len = sg->length;
> >> +
> >> +		if (offset >= sg->length) {
> >> +			offset -= sg->length;
> >> +			continue;
> >> +		} else if (offset) {
> >> +			page += offset / PAGE_SIZE;
> >> +			len = sg->length - offset;
> >> +			offset = 0;
> >> +		}
> >> +		len = min(len, remainder);
> >> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
> >> +				      vma->vm_page_prot);
> >> +		if (ret)
> >> +			return ret;
> >> +		addr += len;
> >> +		if (addr >= vma->vm_end)
> >> +			return 0;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +
> >> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
> >> +{
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +
> >> +	if (buffer->kmap_cnt > 0) {
> >> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
> >> +			     __func__);
> >> +		vunmap(buffer->vaddr);
> >> +	}
> >> +
> >> +	buffer->free(buffer);
> >> +}
> >> +
> >> +static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
> >> +{
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +	void *vaddr;
> >> +
> >> +	if (buffer->kmap_cnt) {
> >> +		buffer->kmap_cnt++;
> >> +		return buffer->vaddr;
> >> +	}
> >> +	vaddr = dma_heap_map_kernel(buffer);
> >> +	if (WARN_ONCE(!vaddr,
> >> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
> >> +		return ERR_PTR(-EINVAL);
> >> +	if (IS_ERR(vaddr))
> >> +		return vaddr;
> >> +	buffer->vaddr = vaddr;
> >> +	buffer->kmap_cnt++;
> >> +	return vaddr;
> >> +}
> >> +
> >> +static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
> >> +{
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +
> >> +	buffer->kmap_cnt--;
> >> +	if (!buffer->kmap_cnt) {
> >> +		vunmap(buffer->vaddr);
> >> +		buffer->vaddr = NULL;
> >> +	}
> >> +}
> >> +
> >> +static struct sg_table *dup_sg_table(struct sg_table *table)
> >> +{
> >> +	struct sg_table *new_table;
> >> +	int ret, i;
> >> +	struct scatterlist *sg, *new_sg;
> >> +
> >> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> >> +	if (!new_table)
> >> +		return ERR_PTR(-ENOMEM);
> >> +
> >> +	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
> >> +	if (ret) {
> >> +		kfree(new_table);
> >> +		return ERR_PTR(-ENOMEM);
> >> +	}
> >> +
> >> +	new_sg = new_table->sgl;
> >> +	for_each_sg(table->sgl, sg, table->nents, i) {
> >> +		memcpy(new_sg, sg, sizeof(*sg));
> >> +		new_sg->dma_address = 0;
> >> +		new_sg = sg_next(new_sg);
> >> +	}
> >> +
> >> +	return new_table;
> >> +}
> >> +
> >> +static void free_duped_table(struct sg_table *table)
> >> +{
> >> +	sg_free_table(table);
> >> +	kfree(table);
> >> +}
> >> +
> >> +struct dma_heaps_attachment {
> >> +	struct device *dev;
> >> +	struct sg_table *table;
> >> +	struct list_head list;
> >> +};
> >> +
> >> +static int dma_heap_attach(struct dma_buf *dmabuf,
> >> +			      struct dma_buf_attachment *attachment)
> >> +{
> >> +	struct dma_heaps_attachment *a;
> >> +	struct sg_table *table;
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +
> >> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> >> +	if (!a)
> >> +		return -ENOMEM;
> >> +
> >> +	table = dup_sg_table(buffer->sg_table);
> >> +	if (IS_ERR(table)) {
> >> +		kfree(a);
> >> +		return -ENOMEM;
> >> +	}
> >> +
> >> +	a->table = table;
> >> +	a->dev = attachment->dev;
> >> +	INIT_LIST_HEAD(&a->list);
> >> +
> >> +	attachment->priv = a;
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	list_add(&a->list, &buffer->attachments);
> >> +	mutex_unlock(&buffer->lock);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +static void dma_heap_detatch(struct dma_buf *dmabuf,
> >> +				struct dma_buf_attachment *attachment)
> >> +{
> >> +	struct dma_heaps_attachment *a = attachment->priv;
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	list_del(&a->list);
> >> +	mutex_unlock(&buffer->lock);
> >> +	free_duped_table(a->table);
> >> +
> >> +	kfree(a);
> >> +}
> >> +
> >> +static struct sg_table *dma_heap_map_dma_buf(
> >> +					struct dma_buf_attachment *attachment,
> >> +					enum dma_data_direction direction)
> >> +{
> >> +	struct dma_heaps_attachment *a = attachment->priv;
> >> +	struct sg_table *table;
> >> +
> >> +	table = a->table;
> >> +
> >> +	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> >> +			direction))
> > 
> > Since this code is used for system heap and and as the reference.
> > In multimedia uses cases very large buffers can be allocated from system 
> > heap, and since system heap allocations have a cached kernel mapping it 
> > has been difficult to support uncached allocations so clients will likely 
> > allocate them as cached allocations.
> > Most access to these buffers will occur from non IO-coherent devices, 
> > however in frameworks such as Android these buffers will be dma mapped and 
> > dma unmapped frequently, for every frame and for each device in the 
> > "buffer pipeline", which leads to a lot of unnecessary cache maintenance 
> > in the dma map and dma unmap calls.
> > From previous discussions it doesn't seem like this could be optimized by 
> > only making changes to dma-buf heaps framework.
> > 
> > So I think it would be helpful to try and agree on what types of changes 
> > would be required to the Android framework and possibly the dma-buf heaps 
> > framework to resolve this.
> > Example
> > 
> > - Have Android framework keep buffers dma mapped for the whole use case
> > 
> > - Or perhaps have Android keep required devices attached to the buffer as 
> > they are "pipelined" so that cache maintenance can be skipped in dma map 
> > and dma umnap but reliably applied in begin/end_cpu_access.
> > 
> 
> I don't have a strong opinion on the solution here for Android, but from
> kernel/hardware side I can see we don't have a good ownership model for
> dma-buf and that looks to be causing your unneeded cache ops.
> 
> Let my give my vision for the model you request below.
> 
> 
> >> +	 table = ERR_PTR(-ENOMEM); 
> >> +	 return table; 
> >> +} 
> >> +
> >> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> >> +			      struct sg_table *table,
> >> +			      enum dma_data_direction direction)
> >> +{
> >> +	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> >> +}
> >> +
> >> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> >> +{
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +	int ret = 0;
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	/* now map it to userspace */
> >> +	ret = dma_heap_map_user(buffer, vma);
> > 
> > Since this code is used for system heap and and as the reference.
> > Currently in Android when a system heap buffer is moving down the 
> > "pipeline" CPU access may occur and there may be no device attached.
> > In the past people have raised concerns that this should perhaps not be 
> > supported and that at least one device should be attached when CPU access 
> > occurs.
> > 
> > Can we decide, from a dma-buf contract perspective, on whether CPU access 
> > without a device attached should allowed and make it clear in the dma-buf 
> > documentation either way?
> > 
> > If it should not be allowed we can try to work with Android to see how 
> > they can change their framework to align with the dma-buf spec.
> > 
> 
> As you say, right now DMA-BUF does not really provide guidelines for
> "well behaved" heaps, and as such we all end up making different
> assumptions. Although I don't think they below should be enforced for
> all DMA-BUFs it should be used for our first set of reference heaps.
> 
> So from the start, lets looks at DMA-BUF buffer lifetimes. A buffer
> begins by being allocated, at this point the backing resources (usually
> some pages in physical CPU addressable RAM) have *not* been allocated
> this is a core feature of DMA-BUF allowing smart allocations at map
> time. At this point we should consider the buffer in a state "non-owned
> non-backed".
> 
> Next we have three valid actions:
>  * CPU use
>  * device attachments
>  * free
> 
> CPU use starts with a call to begin_cpu_access(), the buffer will now
> both become CPU owned and will pin it's backing buffer. This last part
> can cause begin_cpu_access() to fail for some types of buffers that only
> pick their backing resource based on the set of attached devices, which
> at this point we have none. For heaps like "system" heap this should not
> be a problem as the backing storage is already selected at allocation time.
> 
> Device attachments and mapping should behave as they do now. After
> mapping, the buffer is now owned by the attached devices and backed. The
> ownership can be transfered to the CPU and back as usual with
> {begin,end}_cpu_access().
> 
> Free right after allocation without anyone taking ownership is trivial.
> 
> Now here is where things may get interesting, we can now be in the
> following states:
> 
> CPU owned backed
> Device owned backed
> 
> What should we do when CPU owned and end_cpu_access() is called or when
> Device owned and all devices detach. What state do we go into? My
> opinion would be to return to "non-owned non-backed" which means the
> backing resource can be freed. The other option is to leave it backed. I
> beleave this is what Android expects right now as it returns buffers to
> a "BufferQueue" to later be dequeued and used again in the same way
> without time consuming reallocations.
> 

I think the issue with Android may be more serious than just improving 
the performance of buffer re-allocation.

In cases such as video playback:
#1 video device reads and transforms the video data 
#2 optionally there may be some software video post processing 
#3 surface flinger 
#4 HAL 
#5 display device eventually gets the buffer

My understanding is the last device attached to the buffer (the video 
device) can be detached before Android sends the buffer down the 
"pipeline", where some software module may do CPU access and where 
eventually a new device is attached when the buffer is going to be accessed 
by a device. 

So Android is counting on the contents of the buffer being retained while 
it is "non-owned".

> For "system" heaps the result is the same outside of the cache ops, if
> we maintain a buffer state like the above we can always know when a
> cache op or similar is needed (only when transitioning owners, not on
> every map/unmap). 

Assuming I understood your comments correctly, I agree that we will know 
"when" to optimally apply the cache maintenance but the problem with 
allowing CPU access to buffers that are now "non-owned" is that we can't 
apply the cache maintenance when it is required (because there is no 
longer a device available).


> For more complex heaps we can do something similar
> when transitioning from backed to non-backed (and vice-versa) such as
> migrating the backing data to a more appropriate resource given the new
> set of attaching/mapping devices.
> 
> A more complete state transition map will probably be needed to fill out
> what should be allowed and what to do when, and I agree it will be good
> to have that for this first set of reference heaps. But all of this can
> be done now with DMA-heaps, so I don't want to tie up the core
> de-staging too much on getting every possible heap to behave just right
> either..
> 
> >> +	mutex_unlock(&buffer->lock);
> >> +
> >> +	if (ret)
> >> +		pr_err("%s: failure mapping buffer to userspace\n",
> >> +		       __func__);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
> >> +{
> >> +	struct dma_heap_buffer *buffer = dmabuf->priv;
> >> +
> >> +	dma_heap_buffer_destroy(buffer);
> >> +}
> >> +
> >> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
> >> +					unsigned long offset)
> >> +{
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +
> >> +	return buffer->vaddr + offset * PAGE_SIZE;
> >> +}
> >> +
> >> +static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
> >> +					unsigned long offset,
> >> +					void *ptr)
> >> +{
> >> +}
> >> +
> >> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> >> +					enum dma_data_direction direction)
> >> +{
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +	void *vaddr;
> >> +	struct dma_heaps_attachment *a;
> >> +	int ret = 0;
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
> > 
> > Since this code is used for system heap and and as the reference.
> > As has been discussed in the past there are several disadvantages to 
> > creating a kernel mapping on each call to begin_cpu_access.
> > 
> > The resulting call to alloc_vmap_area is expensive and can hurt client 
> > KPIs
> > 
> > Allowing userspace clients to create and destroy kernel mappings can 
> > provide opportunities to crash the kernel.
> > 
> > Can we look at removing the creation of a kernel mapping in 
> > begin_cpu_access and either introduce support for dma_buf_vmap and have 
> > clients use that instead or perahps change 
> > the contract for dma_buf_kmap so that it doesn't always need to succeed?
> > 
> 
> Agree, for this we should just fail (or succeed but perform no action?)
> if there are no active mappings (neither in kernel (kmap, vmap) nor in
> userspace (mmap)), forcing a new mapping just to keep
> dma_sync_sg_for_cpu() happy is not the correct thing to do here.
> 
> >> +	if (IS_ERR(vaddr)) {
> >> +		ret = PTR_ERR(vaddr);
> >> +		goto unlock;
> >> +	}
> >> +	mutex_unlock(&buffer->lock);
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	list_for_each_entry(a, &buffer->attachments, list) {
> >> +		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> >> +				    direction);
> > 
> > Since this code is used for system heap and and as the reference.
> > Not a major issue for newer kernels but I still don't think it makes sense 
> > to apply cache maintenance when the buffer is not dma mapped, it doesn't 
> > makes sense to me from a logical perspective and from a performance 
> > perspective.
> > 
> 
> Agree again, this is only needed when we are transitioning out of state
> "device owned" which we will not be in if we have no active
> attached/mapped devices.
> 
> > 
> >> +	}
> >> +
> >> +unlock:
> >> +	mutex_unlock(&buffer->lock);
> >> +	return ret;
> >> +}
> >> +
> >> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> >> +				      enum dma_data_direction direction)
> >> +{
> >> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> >> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> >> +	struct dma_heaps_attachment *a;
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	dma_heap_buffer_kmap_put(heap_buffer);
> >> +	mutex_unlock(&buffer->lock);
> >> +
> >> +	mutex_lock(&buffer->lock);
> >> +	list_for_each_entry(a, &buffer->attachments, list) {
> >> +		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> >> +				       direction);
> > 
> > There are use cases in Android which result in only small parts of a large 
> > buffer are written do during of CPU access.
> > 
> 
> We have the same requirements for some OpenCL use-cases
> (clCreateSubBuffer), which allows for chunks of a larger buffer to have
> different properties and therefor cache operations different from the
> larger parrent buffer.
> 
> > Applying cache maintenance to the complete buffer results in a lot of 
> > unnecessary cache maintenance that can affect KPIs.
> > 
> > I believe the Android team is wondering if there could be a way to support 
> > partial maintenance in  where userspace could describe the buffer changes 
> > they have made.
> > 
> 
> The issue I have with that is it would require userspace to know about
> the cache line size of all involved devices, otherwise you would have
> users writing to the first 64B and invalidate to read the second 64B
> written from a DMA device outside the cache, in a system with 128B
> cache-lines you will always lose some data here.
> 

Good point.

> > I think it would be useful to make sure that there is a least a path 
> > forward with the current dma-buf heaps framework to solve this for system 
> > heap allocations.
> > 
> 
> DMA-BUF sync operations always act on the full buffer, changing this
> would require changes to the core DMA-BUF framework (so outside the
> scope of this set) :)
> 
> Andrew
> 
> > I can get more details on the specific use cases if required.
> > 
> >> +	}
> >> +	mutex_unlock(&buffer->lock);
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +const struct dma_buf_ops heap_helper_ops = {
> >> +	.map_dma_buf = dma_heap_map_dma_buf,
> >> +	.unmap_dma_buf = dma_heap_unmap_dma_buf,
> >> +	.mmap = dma_heap_mmap,
> >> +	.release = dma_heap_dma_buf_release,
> >> +	.attach = dma_heap_attach,
> >> +	.detach = dma_heap_detatch,
> >> +	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
> >> +	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
> >> +	.map = dma_heap_dma_buf_kmap,
> >> +	.unmap = dma_heap_dma_buf_kunmap,
> >> +};
> >> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
> >> new file mode 100644
> >> index 0000000..0bd8643
> >> --- /dev/null
> >> +++ b/drivers/dma-buf/heaps/heap-helpers.h
> >> @@ -0,0 +1,48 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +/*
> >> + * DMABUF Heaps helper code
> >> + *
> >> + * Copyright (C) 2011 Google, Inc.
> >> + * Copyright (C) 2019 Linaro Ltd.
> >> + */
> >> +
> >> +#ifndef _HEAP_HELPERS_H
> >> +#define _HEAP_HELPERS_H
> >> +
> >> +#include <linux/dma-heap.h>
> >> +#include <linux/list.h>
> >> +
> >> +struct heap_helper_buffer {
> >> +	struct dma_heap_buffer heap_buffer;
> >> +
> >> +	unsigned long private_flags;
> >> +	void *priv_virt;
> >> +	struct mutex lock;
> >> +	int kmap_cnt;
> >> +	void *vaddr;
> >> +	struct sg_table *sg_table;
> >> +	struct list_head attachments;
> >> +
> >> +	void (*free)(struct heap_helper_buffer *buffer);
> >> +
> >> +};
> >> +
> >> +#define to_helper_buffer(x) \
> >> +	container_of(x, struct heap_helper_buffer, heap_buffer)
> >> +
> >> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> >> +				 void (*free)(struct heap_helper_buffer *))
> >> +{
> >> +	buffer->private_flags = 0;
> >> +	buffer->priv_virt = NULL;
> >> +	mutex_init(&buffer->lock);
> >> +	buffer->kmap_cnt = 0;
> >> +	buffer->vaddr = NULL;
> >> +	buffer->sg_table = NULL;
> >> +	INIT_LIST_HEAD(&buffer->attachments);
> >> +	buffer->free = free;
> >> +}
> >> +
> >> +extern const struct dma_buf_ops heap_helper_ops;
> >> +
> >> +#endif /* _HEAP_HELPERS_H */
> >> -- 
> >> 2.7.4
> >>
> >>
> > 
> > Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> > a Linux Foundation Collaborative Project
> > 
> 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-13 22:30   ` John Stultz
@ 2019-03-13 23:29     ` Liam Mark
  2019-03-19 16:54     ` Benjamin Gaignard
  1 sibling, 0 replies; 68+ messages in thread
From: Liam Mark @ 2019-03-13 23:29 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, Vincent Donnefort, Marissa Wall

On Wed, 13 Mar 2019, John Stultz wrote:

> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> > On Tue, 5 Mar 2019, John Stultz wrote:
> > >
> > > Eventual TODOS:
> > > * Reimplement page-pool for system heap (working on this)
> > > * Add stats accounting to system/cma heaps
> > > * Make the kselftest actually useful
> > > * Add other heaps folks see as useful (would love to get
> > >   some help from actual carveout/chunk users)!
> >
> > We use a modified carveout heap for certain secure use cases.
> 
> Cool! It would be great to see if you have any concerns about adding
> such a secure-carveout heap to this framework. I suspect it would be
> fairly similar to how its integrated into ION, but particularly I'd be
> interested in issues around the lack of private flags and other
> allocation arguments like alignment.
> 

We are actively working to drop our secure carveout heap in order to 
improve memory utilization, so I don't think there would be a good case for 
upstreaming it.

Our other secure heaps are CMA based and system heap based.
Because people have had difficulty designing a generic secure heap which 
would satisfy enough of everybody's use cases to be upstreamable, we have 
been looking into moving away from carrying local secure heap changes and 
have instead been looking at having a separate driver be responsible for 
making an ION buffer's memory secure. The idea was to do this in order to 
remove a lot of our local ION changes, but if a secure heap was upstreamed 
that supported our secure use cases I am sure we would be interested in 
using that.

The local change to the ION API to support these heaps is the addition of 
all the VMID flags so that the client can specify where it wants the 
memory assigned.


> > Although there would probably be some benefit in discssing how the dma-buf
> > heap framework may want to support
> > secure heaps in the future it is a large topic which I assume you don't
> > want to tackle now.
> 
> So I suspect others (Benjamin?) would have a more informed opinion on
> the details, but the intent is to allow secure heap implementations.
> I'm not sure what areas of concern you have for this allocation
> framework in particular?
> 

I don't have any areas of concern; my thought was just that fleshing out a 
potential design for an upstreamable secure heap would allow us to catch 
early on whether there is anything fundamental that would need to change in 
the dma-buf heaps framework (such as the allocation API).
I don't think this is a necessary task at this point. 

Liam

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-13 22:57       ` Liam Mark
@ 2019-03-13 23:42         ` Andrew F. Davis
  0 siblings, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-13 23:42 UTC (permalink / raw)
  To: Liam Mark
  Cc: John Stultz, lkml, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/13/19 5:57 PM, Liam Mark wrote:
> On Wed, 13 Mar 2019, Andrew F. Davis wrote:
> 
>> On 3/13/19 3:18 PM, Liam Mark wrote:
>>> On Tue, 5 Mar 2019, John Stultz wrote:
>>>
>>>> Add generic helper dmabuf ops for dma heaps, so we can reduce
>>>> the amount of duplicative code for the exported dmabufs.
>>>>
>>>> This code is an evolution of the Android ION implementation, so
>>>> thanks to its original authors and maintainters:
>>>>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
>>>>
>>>> Cc: Laura Abbott <labbott@redhat.com>
>>>> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
>>>> Cc: Greg KH <gregkh@linuxfoundation.org>
>>>> Cc: Sumit Semwal <sumit.semwal@linaro.org>
>>>> Cc: Liam Mark <lmark@codeaurora.org>
>>>> Cc: Brian Starkey <Brian.Starkey@arm.com>
>>>> Cc: Andrew F. Davis <afd@ti.com>
>>>> Cc: Chenbo Feng <fengc@google.com>
>>>> Cc: Alistair Strachan <astrachan@google.com>
>>>> Cc: dri-devel@lists.freedesktop.org
>>>> Signed-off-by: John Stultz <john.stultz@linaro.org>
>>>> ---
>>>> v2:
>>>> * Removed cache management performance hack that I had
>>>>   accidentally folded in.
>>>> * Removed stats code that was in helpers
>>>> * Lots of checkpatch cleanups
>>>> ---
>>>>  drivers/dma-buf/Makefile             |   1 +
>>>>  drivers/dma-buf/heaps/Makefile       |   2 +
>>>>  drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
>>>>  drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
>>>>  4 files changed, 386 insertions(+)
>>>>  create mode 100644 drivers/dma-buf/heaps/Makefile
>>>>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>>>>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>>>>
>>>> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
>>>> index b0332f1..09c2f2d 100644
>>>> --- a/drivers/dma-buf/Makefile
>>>> +++ b/drivers/dma-buf/Makefile
>>>> @@ -1,4 +1,5 @@
>>>>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
>>>> +obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
>>>>  obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
>>>>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>>>>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
>>>> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
>>>> new file mode 100644
>>>> index 0000000..de49898
>>>> --- /dev/null
>>>> +++ b/drivers/dma-buf/heaps/Makefile
>>>> @@ -0,0 +1,2 @@
>>>> +# SPDX-License-Identifier: GPL-2.0
>>>> +obj-y					+= heap-helpers.o
>>>> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
>>>> new file mode 100644
>>>> index 0000000..ae5e9d0
>>>> --- /dev/null
>>>> +++ b/drivers/dma-buf/heaps/heap-helpers.c
>>>> @@ -0,0 +1,335 @@
>>>> +// SPDX-License-Identifier: GPL-2.0
>>>> +#include <linux/device.h>
>>>> +#include <linux/dma-buf.h>
>>>> +#include <linux/err.h>
>>>> +#include <linux/idr.h>
>>>> +#include <linux/list.h>
>>>> +#include <linux/slab.h>
>>>> +#include <linux/uaccess.h>
>>>> +#include <uapi/linux/dma-heap.h>
>>>> +
>>>> +#include "heap-helpers.h"
>>>> +
>>>> +
>>>> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
>>>> +{
>>>> +	struct scatterlist *sg;
>>>> +	int i, j;
>>>> +	void *vaddr;
>>>> +	pgprot_t pgprot;
>>>> +	struct sg_table *table = buffer->sg_table;
>>>> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
>>>> +	struct page **pages = vmalloc(array_size(npages,
>>>> +						 sizeof(struct page *)));
>>>> +	struct page **tmp = pages;
>>>> +
>>>> +	if (!pages)
>>>> +		return ERR_PTR(-ENOMEM);
>>>> +
>>>> +	pgprot = PAGE_KERNEL;
>>>> +
>>>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>>>> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
>>>> +		struct page *page = sg_page(sg);
>>>> +
>>>> +		WARN_ON(i >= npages);
>>>> +		for (j = 0; j < npages_this_entry; j++)
>>>> +			*(tmp++) = page++;
>>>> +	}
>>>> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
>>>> +	vfree(pages);
>>>> +
>>>> +	if (!vaddr)
>>>> +		return ERR_PTR(-ENOMEM);
>>>> +
>>>> +	return vaddr;
>>>> +}
>>>> +
>>>> +static int dma_heap_map_user(struct heap_helper_buffer *buffer,
>>>> +			 struct vm_area_struct *vma)
>>>> +{
>>>> +	struct sg_table *table = buffer->sg_table;
>>>> +	unsigned long addr = vma->vm_start;
>>>> +	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
>>>> +	struct scatterlist *sg;
>>>> +	int i;
>>>> +	int ret;
>>>> +
>>>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>>>> +		struct page *page = sg_page(sg);
>>>> +		unsigned long remainder = vma->vm_end - addr;
>>>> +		unsigned long len = sg->length;
>>>> +
>>>> +		if (offset >= sg->length) {
>>>> +			offset -= sg->length;
>>>> +			continue;
>>>> +		} else if (offset) {
>>>> +			page += offset / PAGE_SIZE;
>>>> +			len = sg->length - offset;
>>>> +			offset = 0;
>>>> +		}
>>>> +		len = min(len, remainder);
>>>> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
>>>> +				      vma->vm_page_prot);
>>>> +		if (ret)
>>>> +			return ret;
>>>> +		addr += len;
>>>> +		if (addr >= vma->vm_end)
>>>> +			return 0;
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +
>>>> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
>>>> +{
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +
>>>> +	if (buffer->kmap_cnt > 0) {
>>>> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
>>>> +			     __func__);
>>>> +		vunmap(buffer->vaddr);
>>>> +	}
>>>> +
>>>> +	buffer->free(buffer);
>>>> +}
>>>> +
>>>> +static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
>>>> +{
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +	void *vaddr;
>>>> +
>>>> +	if (buffer->kmap_cnt) {
>>>> +		buffer->kmap_cnt++;
>>>> +		return buffer->vaddr;
>>>> +	}
>>>> +	vaddr = dma_heap_map_kernel(buffer);
>>>> +	if (WARN_ONCE(!vaddr,
>>>> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
>>>> +		return ERR_PTR(-EINVAL);
>>>> +	if (IS_ERR(vaddr))
>>>> +		return vaddr;
>>>> +	buffer->vaddr = vaddr;
>>>> +	buffer->kmap_cnt++;
>>>> +	return vaddr;
>>>> +}
>>>> +
>>>> +static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
>>>> +{
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +
>>>> +	buffer->kmap_cnt--;
>>>> +	if (!buffer->kmap_cnt) {
>>>> +		vunmap(buffer->vaddr);
>>>> +		buffer->vaddr = NULL;
>>>> +	}
>>>> +}
>>>> +
>>>> +static struct sg_table *dup_sg_table(struct sg_table *table)
>>>> +{
>>>> +	struct sg_table *new_table;
>>>> +	int ret, i;
>>>> +	struct scatterlist *sg, *new_sg;
>>>> +
>>>> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
>>>> +	if (!new_table)
>>>> +		return ERR_PTR(-ENOMEM);
>>>> +
>>>> +	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
>>>> +	if (ret) {
>>>> +		kfree(new_table);
>>>> +		return ERR_PTR(-ENOMEM);
>>>> +	}
>>>> +
>>>> +	new_sg = new_table->sgl;
>>>> +	for_each_sg(table->sgl, sg, table->nents, i) {
>>>> +		memcpy(new_sg, sg, sizeof(*sg));
>>>> +		new_sg->dma_address = 0;
>>>> +		new_sg = sg_next(new_sg);
>>>> +	}
>>>> +
>>>> +	return new_table;
>>>> +}
>>>> +
>>>> +static void free_duped_table(struct sg_table *table)
>>>> +{
>>>> +	sg_free_table(table);
>>>> +	kfree(table);
>>>> +}
>>>> +
>>>> +struct dma_heaps_attachment {
>>>> +	struct device *dev;
>>>> +	struct sg_table *table;
>>>> +	struct list_head list;
>>>> +};
>>>> +
>>>> +static int dma_heap_attach(struct dma_buf *dmabuf,
>>>> +			      struct dma_buf_attachment *attachment)
>>>> +{
>>>> +	struct dma_heaps_attachment *a;
>>>> +	struct sg_table *table;
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +
>>>> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
>>>> +	if (!a)
>>>> +		return -ENOMEM;
>>>> +
>>>> +	table = dup_sg_table(buffer->sg_table);
>>>> +	if (IS_ERR(table)) {
>>>> +		kfree(a);
>>>> +		return -ENOMEM;
>>>> +	}
>>>> +
>>>> +	a->table = table;
>>>> +	a->dev = attachment->dev;
>>>> +	INIT_LIST_HEAD(&a->list);
>>>> +
>>>> +	attachment->priv = a;
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	list_add(&a->list, &buffer->attachments);
>>>> +	mutex_unlock(&buffer->lock);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +static void dma_heap_detatch(struct dma_buf *dmabuf,
>>>> +				struct dma_buf_attachment *attachment)
>>>> +{
>>>> +	struct dma_heaps_attachment *a = attachment->priv;
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	list_del(&a->list);
>>>> +	mutex_unlock(&buffer->lock);
>>>> +	free_duped_table(a->table);
>>>> +
>>>> +	kfree(a);
>>>> +}
>>>> +
>>>> +static struct sg_table *dma_heap_map_dma_buf(
>>>> +					struct dma_buf_attachment *attachment,
>>>> +					enum dma_data_direction direction)
>>>> +{
>>>> +	struct dma_heaps_attachment *a = attachment->priv;
>>>> +	struct sg_table *table;
>>>> +
>>>> +	table = a->table;
>>>> +
>>>> +	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
>>>> +			direction))
>>>
>>> Since this code is used for the system heap and as the reference
>>> implementation:
>>> In multimedia use cases very large buffers can be allocated from the
>>> system heap, and since system heap allocations have a cached kernel
>>> mapping it has been difficult to support uncached allocations, so
>>> clients will likely allocate them as cached allocations.
>>> Most accesses to these buffers will occur from non-IO-coherent devices;
>>> however, in frameworks such as Android these buffers will be dma mapped
>>> and dma unmapped frequently, for every frame and for each device in the
>>> "buffer pipeline", which leads to a lot of unnecessary cache maintenance
>>> in the dma map and dma unmap calls.
>>> From previous discussions it doesn't seem like this could be optimized
>>> by only making changes to the dma-buf heaps framework.
>>>
>>> So I think it would be helpful to try and agree on what types of changes 
>>> would be required to the Android framework and possibly the dma-buf heaps 
>>> framework to resolve this.
>>> Example
>>>
>>> - Have Android framework keep buffers dma mapped for the whole use case
>>>
>>> - Or perhaps have Android keep the required devices attached to the buffer
>>> as they are "pipelined", so that cache maintenance can be skipped in dma
>>> map and dma unmap but reliably applied in begin/end_cpu_access.
>>>
>>
>> I don't have a strong opinion on the solution here for Android, but from
>> the kernel/hardware side I can see we don't have a good ownership model
>> for dma-buf, and that looks to be causing your unneeded cache ops.
>>
>> Let me give my vision for the model you request below.
>>
>>
>>>> +		table = ERR_PTR(-ENOMEM);
>>>> +	return table;
>>>> +}
>>>> +
>>>> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
>>>> +			      struct sg_table *table,
>>>> +			      enum dma_data_direction direction)
>>>> +{
>>>> +	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
>>>> +}
>>>> +
>>>> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
>>>> +{
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +	int ret = 0;
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	/* now map it to userspace */
>>>> +	ret = dma_heap_map_user(buffer, vma);
>>>
>>> Since this code is used for the system heap and as the reference
>>> implementation:
>>> Currently in Android when a system heap buffer is moving down the 
>>> "pipeline" CPU access may occur and there may be no device attached.
>>> In the past people have raised concerns that this should perhaps not be 
>>> supported and that at least one device should be attached when CPU access 
>>> occurs.
>>>
>>> Can we decide, from a dma-buf contract perspective, whether CPU access
>>> without a device attached should be allowed, and make it clear in the
>>> dma-buf documentation either way?
>>>
>>> If it should not be allowed we can try to work with Android to see how 
>>> they can change their framework to align with the dma-buf spec.
>>>
>>
>> As you say, right now DMA-BUF does not really provide guidelines for
>> "well behaved" heaps, and as such we all end up making different
>> assumptions. Although I don't think the below should be enforced for
>> all DMA-BUFs, it should be used for our first set of reference heaps.
>>
>> So from the start, let's look at DMA-BUF buffer lifetimes. A buffer
>> begins by being allocated; at this point the backing resources (usually
>> some pages in physically CPU-addressable RAM) have *not* been allocated.
>> This is a core feature of DMA-BUF, allowing smart allocations at map
>> time. At this point we should consider the buffer to be in a "non-owned,
>> non-backed" state.
>>
>> Next we have three valid actions:
>>  * CPU use
>>  * device attachments
>>  * free
>>
>> CPU use starts with a call to begin_cpu_access(); the buffer will now
>> both become CPU owned and pin its backing buffer. This last part
>> can cause begin_cpu_access() to fail for some types of buffers that only
>> pick their backing resource based on the set of attached devices, of
>> which at this point we have none. For heaps like the "system" heap this
>> should not be a problem, as the backing storage is already selected at
>> allocation time.
>>
>> Device attachments and mapping should behave as they do now. After
>> mapping, the buffer is now owned by the attached devices and backed. The
>> ownership can be transferred to the CPU and back as usual with
>> {begin,end}_cpu_access().
>>
>> Free right after allocation without anyone taking ownership is trivial.
>>
>> Now here is where things may get interesting: we can now be in the
>> following states:
>>
>> CPU owned backed
>> Device owned backed
>>
>> What should we do when CPU owned and end_cpu_access() is called, or when
>> Device owned and all devices detach? What state do we go into? My
>> opinion would be to return to "non-owned non-backed", which means the
>> backing resource can be freed. The other option is to leave it backed. I
>> believe this is what Android expects right now, as it returns buffers to
>> a "BufferQueue" to later be dequeued and used again in the same way
>> without time-consuming reallocations.
>>
> 
> I think the issue with Android may be about more than just improving 
> the performance of buffer re-allocation.
> 
> In cases such as video playback:
> #1 video device reads and transforms the video data 
> #2 optionally there may be some software video post processing 
> #3 surface flinger 
> #4 HAL 
> #5 display device eventually gets the buffer
> 
> My understanding is the last device attached to the buffer (the video 
> device) can be detached before Android sends the buffer down the 
> "pipeline" where some software module may do CPU access and where 
> eventually a new device is attached when the buffer is going to be accessed 
> by a device. 
> 
> So Android is counting on the contents of the buffer being retained while 
> it is "non-owned".
> 

It would not be owned, but still "backed"; after it has been
attached/mapped once it should always keep the same data, even if that
data has to be migrated to different backing resources based on new
attachments. For instance, if the video device and the display device
have some memory that the other cannot access, after the video device
maps it the heap may put the backing memory in a spot that cannot be
reached by the display device.

This is why a core part of DMA-BUF is that all users should be attached
before the first map so smart decisions can be made about the backing
memory. What Android does here is to assume DMA-BUF is always like
regular memory; it gets no use from the late-allocate features and so
may suffer some performance penalties in some cases for some heaps.

>> For "system" heaps the result is the same outside of the cache ops, if
>> we maintain a buffer state like the above we can always know when a
>> cache op or similar is needed (only when transitioning owners, not on
>> every map/unmap). 
> 
> Assuming I understood your comments correctly, I agree that we will know 
> "when" to optimally apply the cache maintenance but the problem with 
> allowing CPU access to buffers that are now "non-owned" is that we can't 
> apply the cache maintenance when it is required (because there is no 
> longer a device available).
> 

And without knowing what device was last attached we really cannot
always do the right thing. For example, if our above video device was IO
coherent the written data may have landed in the L3 cache, so the
standard cache maintenance (cache invalidate) we would normally do would
actually wipe out the data we are looking for; but if it was not IO
coherent, the maintenance would be needed. This is why the DMA framework
doesn't like doing maintenance without a targeted device.

If you really want those savings (I'm still not sure they are worth
anything, but it's your platform), you could track the set of
last-attached devices and only perform the ops when needed. I don't
think that will scale very well; to me the last valid time to do the
maintenance is on device unmap, so unless we can prove the CPU will
not access the data between then and the next map (and that the next
mapping device isn't IO coherent), I really can't see a generic way
to avoid those ops.
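
For what it's worth, here is a rough sketch of the kind of per-buffer
ownership tracking being discussed above (purely illustrative, not part
of this patchset):

/* Illustrative only: per-buffer ownership state */
enum heap_buffer_owner {
	BUFFER_UNOWNED,		/* no CPU or device owner */
	BUFFER_CPU_OWNED,	/* between begin/end_cpu_access() */
	BUFFER_DEVICE_OWNED,	/* at least one device mapping live */
};

/*
 * Cache maintenance would then hang off ownership transitions rather
 * than happening unconditionally on every map/unmap:
 *   map_dma_buf():      -> BUFFER_DEVICE_OWNED => dma_sync_sg_for_device()
 *   begin_cpu_access(): -> BUFFER_CPU_OWNED    => dma_sync_sg_for_cpu()
 *   end_cpu_access():   -> BUFFER_UNOWNED      => no sync needed
 *   unmap_dma_buf():    -> BUFFER_UNOWNED      => dma_sync_sg_for_cpu()
 */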

Andrew

> 
>> For more complex heaps we can do something similar
>> when transitioning from backed to non-backed (and vice-versa) such as
>> migrating the backing data to a more appropriate resource given the new
>> set of attaching/mapping devices.
>>
>> A more complete state transition map will probably be needed to fill out
>> what should be allowed and what to do when, and I agree it will be good
>> to have that for this first set of reference heaps. But all of this can
>> be done now with DMA-heaps, so I don't want to tie up the core
>> de-staging too much on getting every possible heap to behave just right
>> either.
>>
>>>> +	mutex_unlock(&buffer->lock);
>>>> +
>>>> +	if (ret)
>>>> +		pr_err("%s: failure mapping buffer to userspace\n",
>>>> +		       __func__);
>>>> +
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
>>>> +{
>>>> +	struct dma_heap_buffer *buffer = dmabuf->priv;
>>>> +
>>>> +	dma_heap_buffer_destroy(buffer);
>>>> +}
>>>> +
>>>> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
>>>> +					unsigned long offset)
>>>> +{
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +
>>>> +	return buffer->vaddr + offset * PAGE_SIZE;
>>>> +}
>>>> +
>>>> +static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
>>>> +					unsigned long offset,
>>>> +					void *ptr)
>>>> +{
>>>> +}
>>>> +
>>>> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>>>> +					enum dma_data_direction direction)
>>>> +{
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +	void *vaddr;
>>>> +	struct dma_heaps_attachment *a;
>>>> +	int ret = 0;
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	vaddr = dma_heap_buffer_kmap_get(heap_buffer);
>>>
>>> Since this code is used for the system heap and as the reference
>>> implementation:
>>> As has been discussed in the past there are several disadvantages to 
>>> creating a kernel mapping on each call to begin_cpu_access.
>>>
>>> The resulting call to alloc_vmap_area is expensive and can hurt client
>>> KPIs.
>>>
>>> Allowing userspace clients to create and destroy kernel mappings can 
>>> provide opportunities to crash the kernel.
>>>
>>> Can we look at removing the creation of a kernel mapping in
>>> begin_cpu_access and either introduce support for dma_buf_vmap and have
>>> clients use that instead, or perhaps change
>>> the contract for dma_buf_kmap so that it doesn't always need to succeed?
>>>
>>
>> Agree, for this we should just fail (or succeed but perform no action?)
>> if there are no active mappings (neither in kernel (kmap, vmap) nor in
>> userspace (mmap)); forcing a new mapping just to keep
>> dma_sync_sg_for_cpu() happy is not the correct thing to do here.
>>
>>>> +	if (IS_ERR(vaddr)) {
>>>> +		ret = PTR_ERR(vaddr);
>>>> +		goto unlock;
>>>> +	}
>>>> +	mutex_unlock(&buffer->lock);
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	list_for_each_entry(a, &buffer->attachments, list) {
>>>> +		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
>>>> +				    direction);
>>>
>>> Since this code is used for the system heap and as the reference
>>> implementation:
>>> Not a major issue for newer kernels, but I still don't think it makes
>>> sense to apply cache maintenance when the buffer is not dma mapped; it
>>> doesn't make sense to me from either a logical perspective or a
>>> performance perspective.
>>>
>>
>> Agree again, this is only needed when we are transitioning out of state
>> "device owned" which we will not be in if we have no active
>> attached/mapped devices.
>>
>>>
>>>> +	}
>>>> +
>>>> +unlock:
>>>> +	mutex_unlock(&buffer->lock);
>>>> +	return ret;
>>>> +}
>>>> +
>>>> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
>>>> +				      enum dma_data_direction direction)
>>>> +{
>>>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>>>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>>>> +	struct dma_heaps_attachment *a;
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	dma_heap_buffer_kmap_put(heap_buffer);
>>>> +	mutex_unlock(&buffer->lock);
>>>> +
>>>> +	mutex_lock(&buffer->lock);
>>>> +	list_for_each_entry(a, &buffer->attachments, list) {
>>>> +		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
>>>> +				       direction);
>>>
>>> There are use cases in Android in which only small parts of a large
>>> buffer are written to during CPU access.
>>>
>>
>> We have the same requirements for some OpenCL use-cases
>> (clCreateSubBuffer), which allows for chunks of a larger buffer to have
>> different properties, and therefore cache operations different from the
>> larger parent buffer.
>>
>>> Applying cache maintenance to the complete buffer results in a lot of 
>>> unnecessary cache maintenance that can affect KPIs.
>>>
>>> I believe the Android team is wondering if there could be a way to support 
>>> partial maintenance, where userspace could describe the buffer changes 
>>> they have made.
>>>
>>
>> The issue I have with that is it would require userspace to know about
>> the cache line size of all involved devices; otherwise you would have
>> users writing to the first 64B and invalidating to read the second 64B
>> written by a DMA device outside the cache. In a system with 128B
>> cache lines you will always lose some data here.
>>
> 
> Good point.
> 
>>> I think it would be useful to make sure that there is at least a path 
>>> forward with the current dma-buf heaps framework to solve this for system 
>>> heap allocations.
>>>
>>
>> DMA-BUF sync operations always act on the full buffer; changing this
>> would require changes to the core DMA-BUF framework (so outside the
>> scope of this set) :)
>>
>> Andrew
>>
>>> I can get more details on the specific use cases if required.
>>>
>>>> +	}
>>>> +	mutex_unlock(&buffer->lock);
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +
>>>> +const struct dma_buf_ops heap_helper_ops = {
>>>> +	.map_dma_buf = dma_heap_map_dma_buf,
>>>> +	.unmap_dma_buf = dma_heap_unmap_dma_buf,
>>>> +	.mmap = dma_heap_mmap,
>>>> +	.release = dma_heap_dma_buf_release,
>>>> +	.attach = dma_heap_attach,
>>>> +	.detach = dma_heap_detatch,
>>>> +	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
>>>> +	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
>>>> +	.map = dma_heap_dma_buf_kmap,
>>>> +	.unmap = dma_heap_dma_buf_kunmap,
>>>> +};
>>>> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
>>>> new file mode 100644
>>>> index 0000000..0bd8643
>>>> --- /dev/null
>>>> +++ b/drivers/dma-buf/heaps/heap-helpers.h
>>>> @@ -0,0 +1,48 @@
>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>> +/*
>>>> + * DMABUF Heaps helper code
>>>> + *
>>>> + * Copyright (C) 2011 Google, Inc.
>>>> + * Copyright (C) 2019 Linaro Ltd.
>>>> + */
>>>> +
>>>> +#ifndef _HEAP_HELPERS_H
>>>> +#define _HEAP_HELPERS_H
>>>> +
>>>> +#include <linux/dma-heap.h>
>>>> +#include <linux/list.h>
>>>> +
>>>> +struct heap_helper_buffer {
>>>> +	struct dma_heap_buffer heap_buffer;
>>>> +
>>>> +	unsigned long private_flags;
>>>> +	void *priv_virt;
>>>> +	struct mutex lock;
>>>> +	int kmap_cnt;
>>>> +	void *vaddr;
>>>> +	struct sg_table *sg_table;
>>>> +	struct list_head attachments;
>>>> +
>>>> +	void (*free)(struct heap_helper_buffer *buffer);
>>>> +
>>>> +};
>>>> +
>>>> +#define to_helper_buffer(x) \
>>>> +	container_of(x, struct heap_helper_buffer, heap_buffer)
>>>> +
>>>> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
>>>> +				 void (*free)(struct heap_helper_buffer *))
>>>> +{
>>>> +	buffer->private_flags = 0;
>>>> +	buffer->priv_virt = NULL;
>>>> +	mutex_init(&buffer->lock);
>>>> +	buffer->kmap_cnt = 0;
>>>> +	buffer->vaddr = NULL;
>>>> +	buffer->sg_table = NULL;
>>>> +	INIT_LIST_HEAD(&buffer->attachments);
>>>> +	buffer->free = free;
>>>> +}
>>>> +
>>>> +extern const struct dma_buf_ops heap_helper_ops;
>>>> +
>>>> +#endif /* _HEAP_HELPERS_H */
>>>> -- 
>>>> 2.7.4
>>>>
>>>>
>>>
>>> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>>> a Linux Foundation Collaborative Project
>>>
>>
> 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
  2019-03-06 16:12   ` Benjamin Gaignard
  2019-03-06 16:27   ` Andrew F. Davis
@ 2019-03-15  8:54   ` Christoph Hellwig
  2019-03-15 20:24     ` Andrew F. Davis
  2019-03-15 20:18   ` Laura Abbott
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 68+ messages in thread
From: Christoph Hellwig @ 2019-03-15  8:54 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
	Alistair Strachan, dri-devel

> +static int dma_heap_release(struct inode *inode, struct file *filp)
> +{
> +	filp->private_data = NULL;
> +
> +	return 0;
> +}

No point in clearing ->private_data, the file is about to be freed.

> +
> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
> +			   unsigned long arg)

Please don't use the weird legacy filp naming; file is a perfectly
valid and readable default name for struct file pointers.

> +{
> +	switch (cmd) {
> +	case DMA_HEAP_IOC_ALLOC:
> +	{
> +		struct dma_heap_allocation_data heap_allocation;
> +		struct dma_heap *heap = filp->private_data;
> +		int fd;

Please split each ioctl into a separate function from the very start,
otherwise this will grow into a spaghetti mess sooner than you can
see cheese.
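
For illustration, a minimal sketch of that kind of split (names and
error handling are illustrative only, not the patch as posted):

static long dma_heap_ioctl_allocate(struct file *file, void __user *argp)
{
	struct dma_heap_allocation_data data;

	if (copy_from_user(&data, argp, sizeof(data)))
		return -EFAULT;
	/* ... look up the heap from file->private_data, allocate,
	 * and fill in data.fd ... */
	if (copy_to_user(argp, &data, sizeof(data)))
		return -EFAULT;
	return 0;
}

static long dma_heap_ioctl(struct file *file, unsigned int cmd,
			   unsigned long arg)
{
	switch (cmd) {
	case DMA_HEAP_IOC_ALLOC:
		return dma_heap_ioctl_allocate(file, (void __user *)arg);
	default:
		return -ENOTTY;
	}
}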

> +	dev_ret = device_create(dma_heap_class,
> +				NULL,
> +				heap->heap_devt,
> +				NULL,
> +				heap->name);

No need for this weird argument alignment.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-06 16:12   ` Benjamin Gaignard
  2019-03-06 16:57     ` John Stultz
@ 2019-03-15  8:55     ` Christoph Hellwig
  1 sibling, 0 replies; 68+ messages in thread
From: Christoph Hellwig @ 2019-03-15  8:55 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: John Stultz, lkml, Andrew F. Davis, Laura Abbott, Greg KH,
	Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
	Alistair Strachan, ML dri-devel

Hi Benjamin,

please fix your mailer to avoid completely pointless full quotes.

Thank you!

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
  2019-03-13 20:18   ` Liam Mark
@ 2019-03-15  9:06   ` Christoph Hellwig
  2019-03-19 15:03     ` Andrew F. Davis
  2019-03-21 20:01     ` John Stultz
  2019-03-19 14:26   ` Brian Starkey
  2019-03-21 20:43   ` Andrew F. Davis
  3 siblings, 2 replies; 68+ messages in thread
From: Christoph Hellwig @ 2019-03-15  9:06 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Andrew F . Davis, Chenbo Feng,
	Alistair Strachan, dri-devel

> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> +	struct scatterlist *sg;
> +	int i, j;
> +	void *vaddr;
> +	pgprot_t pgprot;
> +	struct sg_table *table = buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
> +	struct page **pages = vmalloc(array_size(npages,
> +						 sizeof(struct page *)));
> +	struct page **tmp = pages;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	pgprot = PAGE_KERNEL;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
> +		struct page *page = sg_page(sg);
> +
> +		WARN_ON(i >= npages);
> +		for (j = 0; j < npages_this_entry; j++)
> +			*(tmp++) = page++;

This should probably use nth_page.
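
I.e. the inner loop would become something like (sketch only):

		for (j = 0; j < npages_this_entry; j++)
			*(tmp++) = nth_page(page, j);

which also keeps working when consecutive struct pages are not
contiguous in memory (e.g. SPARSEMEM without VMEMMAP).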

That being said I really wish we could have a more iterative version
of vmap, where the caller does a get_vm_area_caller and then adds
each chunk using another call, including the possibility of mapping
larger than PAGE_SIZE contiguous ones.  Any chance you could look into
that?

> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
> +				      vma->vm_page_prot);

So the same chunk could be mapped to userspace and vmapped, and later on
also DMA mapped.  Who is going to take care of cache aliasing, as I
see nothing of that in this series?

> +	if (buffer->kmap_cnt) {
> +		buffer->kmap_cnt++;
> +		return buffer->vaddr;
> +	}
> +	vaddr = dma_heap_map_kernel(buffer);
> +	if (WARN_ONCE(!vaddr,
> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ERR(vaddr))
> +		return vaddr;
> +	buffer->vaddr = vaddr;
> +	buffer->kmap_cnt++;

The cnt manipulation is odd.  The normal way to make this readable
is to use a postfix op on the check, as that makes it clear to everyone,
e.g.:

	if (buffer->kmap_cnt++)
		return buffer->vaddr;
	..

> +	buffer->kmap_cnt--;
> +	if (!buffer->kmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}

Same here, just with an infix.
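
E.g. something like (sketch):

	if (!--buffer->kmap_cnt) {
		vunmap(buffer->vaddr);
		buffer->vaddr = NULL;
	}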

> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> +				 void (*free)(struct heap_helper_buffer *))
> +{
> +	buffer->private_flags = 0;
> +	buffer->priv_virt = NULL;
> +	mutex_init(&buffer->lock);
> +	buffer->kmap_cnt = 0;
> +	buffer->vaddr = NULL;
> +	buffer->sg_table = NULL;
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	buffer->free = free;
> +}

There is absolutely no reason to inline this as far as I can tell.

Also it would seem much simpler to simply let the caller assign the
free callback.
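
Something like this at the allocation site, for instance (sketch only;
system_heap_free is just a placeholder name for the heap's own callback):

	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
	if (!buffer)
		return -ENOMEM;

	/* kzalloc() already zeroed the remaining fields */
	mutex_init(&buffer->lock);
	INIT_LIST_HEAD(&buffer->attachments);
	buffer->free = system_heap_free;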

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
  2019-03-06 16:01   ` Benjamin Gaignard
  2019-03-13 20:20   ` Liam Mark
@ 2019-03-15  9:06   ` Christoph Hellwig
  2 siblings, 0 replies; 68+ messages in thread
From: Christoph Hellwig @ 2019-03-15  9:06 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Andrew F . Davis, Chenbo Feng,
	Alistair Strachan, dri-devel

> +	i = sg_alloc_table(table, npages, GFP_KERNEL);
> +	if (i)
> +		goto err1;
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		struct page *page;
> +
> +		page = alloc_page(GFP_KERNEL);
> +		if (!page)
> +			goto err2;
> +		sg_set_page(sg, page, PAGE_SIZE, 0);
> +	}

Given that one intent here, if not the primary one, is to DMA map the memory,
this is a bad idea, as it might require bounce buffering.

What we really want here is something like this:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-noncoherent-allocator.3

And I wonder if the S/G building should also move into common code
instead of being duplicated everywhere.

> +static int system_heap_create(void)
> +{
> +	struct system_heap *sys_heap;
> +
> +	sys_heap = kzalloc(sizeof(*sys_heap), GFP_KERNEL);
> +	if (!sys_heap)
> +		return -ENOMEM;
> +	sys_heap->heap.name = "system_heap";
> +	sys_heap->heap.ops = &system_heap_ops;
> +
> +	dma_heap_add(&sys_heap->heap);

Why is this dynamically allocated?

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss
  2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss John Stultz
  2019-03-06 16:05   ` Benjamin Gaignard
@ 2019-03-15  9:06   ` Christoph Hellwig
  2019-03-15 20:08     ` John Stultz
  2019-03-19 14:53   ` Brian Starkey
  2 siblings, 1 reply; 68+ messages in thread
From: Christoph Hellwig @ 2019-03-15  9:06 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Andrew F . Davis, Chenbo Feng,
	Alistair Strachan, dri-devel

On Tue, Mar 05, 2019 at 12:54:32PM -0800, John Stultz wrote:
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.

With my previous suggestion of DMA API usage you'd get CMA support for
free in the system one instead of all this duplicate code..

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-06 17:01     ` John Stultz
@ 2019-03-15 20:07       ` Laura Abbott
  2019-03-15 20:13         ` John Stultz
  0 siblings, 1 reply; 68+ messages in thread
From: Laura Abbott @ 2019-03-15 20:07 UTC (permalink / raw)
  To: John Stultz, Benjamin Gaignard
  Cc: lkml, Greg KH, Sumit Semwal, Liam Mark, Brian Starkey,
	Andrew F . Davis, Chenbo Feng, Alistair Strachan, ML dri-devel

On 3/6/19 9:01 AM, John Stultz wrote:
> On Wed, Mar 6, 2019 at 8:14 AM Benjamin Gaignard
> <benjamin.gaignard@linaro.org> wrote:
>> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>>> +
>>> +       printf("Allocating 1 MEG\n");
>>> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
>>> +       if (ret)
>>> +               goto out;
>>> +
>>> +       /* DO SOMETHING WITH THE DMABUF HERE? */
>>
>> You can do a call to mmap and write a pattern in the buffer.
> 
> Yea. I can also do some invalid allocations to make sure things fail properly.
> 
> But I was talking a bit w/ Sumit about the lack of any general dmabuf
> tests, and am curious if we need to have an importer device driver that
> can validate it's a real dmabuf and exercise more of the dmabuf ops.
> 
> thanks
> -john
> 

There's the vgem driver in drm. I did some work to clean that
up so it could take an import; see af33a9190d02 ("drm/vgem: Enable dmabuf import
interfaces"). I mostly used it for private tests and never ended
up upstreaming any of the tests.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss
  2019-03-15  9:06   ` Christoph Hellwig
@ 2019-03-15 20:08     ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-15 20:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Andrew F . Davis, Chenbo Feng,
	Alistair Strachan, dri-devel

On Fri, Mar 15, 2019 at 2:06 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Tue, Mar 05, 2019 at 12:54:32PM -0800, John Stultz wrote:
> > This adds a CMA heap, which allows userspace to allocate
> > a dma-buf of contiguous memory out of a CMA region.
>
> With my previous suggestion of DMA API usage you'd get CMA support for
> free in the system one instead of all this duplicate code..

Hey Christoph! Thanks for the review here! I'm still digesting your
comments, so apologies if I misunderstand.

On the point here, unless you're referring to some earlier suggestion
on a previous discussion (and not the system heap feedback), part of
the reason there are separate heaps is to allow Android to be able to
optimize where the allocations are coming from to best match the use
case. So they only want to allocate CMA backed dmabufs when the use
case has devices that require it, or they may even want to have
separate a CMA region reserved for a specific use case (like camera
buffers). Similarly for any future heap for allocating secure
dma-bufs.  So while in the implementation we can consolidate the code
more, but we'd still probably want to have separate heaps.

Does that make sense? Am I misinterpreting your feedback?

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-15 20:07       ` Laura Abbott
@ 2019-03-15 20:13         ` John Stultz
  2019-03-15 20:49           ` Laura Abbott
  0 siblings, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-15 20:13 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Benjamin Gaignard, lkml, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Fri, Mar 15, 2019 at 1:07 PM Laura Abbott <labbott@redhat.com> wrote:
>
> On 3/6/19 9:01 AM, John Stultz wrote:
> > On Wed, Mar 6, 2019 at 8:14 AM Benjamin Gaignard
> > <benjamin.gaignard@linaro.org> wrote:
> >> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
> >>> +
> >>> +       printf("Allocating 1 MEG\n");
> >>> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> >>> +       if (ret)
> >>> +               goto out;
> >>> +
> >>> +       /* DO SOMETHING WITH THE DMABUF HERE? */
> >>
> >> You can do a call to mmap and write a pattern in the buffer.
> >
> > Yea. I can also do some invalid allocations to make sure things fail properly.
> >
> > But I was talking a bit w/ Sumit about the lack of any general dmabuf
> > tests, and am curious if we need to have an importer device driver that
> > can validate it's a real dmabuf and exercise more of the dmabuf ops.
> >
> > thanks
> > -john
> >
>
> There's the vgem driver in drm. I did some work to clean that
> up so it could take an import; see af33a9190d02 ("drm/vgem: Enable dmabuf import
> interfaces"). I mostly used it for private tests and never ended
> up upstreaming any of the tests.

Thanks for the pointer, I'll check that out as well!  Also, if you
still have them around, I'd be interested in checking out the tests to
try to get something integrated into kselftest.

Talking with Brian yesterday, there was some thought that we should
try to put together some sort of example dmabuf pipeline that isn't
hardware dependent and can be used to demonstrate the usage model as
well as validate the frameworks and maybe even benchmark some of the
ideas floating around right now.  So suggestions here would be
welcome!

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
                     ` (2 preceding siblings ...)
  2019-03-15  8:54   ` Christoph Hellwig
@ 2019-03-15 20:18   ` Laura Abbott
  2019-03-15 20:49     ` Andrew F. Davis
  2019-03-15 21:29     ` John Stultz
  2019-03-19 12:08   ` Brian Starkey
  2019-03-27 14:53   ` Greg KH
  5 siblings, 2 replies; 68+ messages in thread
From: Laura Abbott @ 2019-03-15 20:18 UTC (permalink / raw)
  To: John Stultz, lkml
  Cc: Andrew F. Davis, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/5/19 12:54 PM, John Stultz wrote:
> +DMA-BUF HEAPS FRAMEWORK
> +M:	Laura Abbott<labbott@redhat.com>
> +R:	Liam Mark<lmark@codeaurora.org>
> +R:	Brian Starkey<Brian.Starkey@arm.com>
> +R:	"Andrew F. Davis"<afd@ti.com>
> +R:	John Stultz<john.stultz@linaro.org>
> +S:	Maintained
> +L:	linux-media@vger.kernel.org
> +L:	dri-devel@lists.freedesktop.org
> +L:	linaro-mm-sig@lists.linaro.org  (moderated for non-subscribers)
> +F:	include/uapi/linux/dma-heap.h
> +F:	include/linux/dma-heap.h
> +F:	drivers/dma-buf/dma-heap.c
> +F:	drivers/dma-buf/heaps/*
> +T:	git git://anongit.freedesktop.org/drm/drm-misc

So I talked about this with Sumit privately but I think
it might make sense to have me step down as maintainer when
this goes out of staging. I mostly worked on Ion at my
previous position and anything I do now is mostly a side
project. I still want to see it succeed which is why I
took on the maintainer role but I don't want to become blocking
for people who have a stronger vision about where this needs
to go (see also, I'm not working with this on a daily basis).

If you just want someone to help review or take patches
to be pulled, I'm happy to do so but I'd hate to become
the bottleneck on getting things done for people who
are attempting to do real work.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-15  8:54   ` Christoph Hellwig
@ 2019-03-15 20:24     ` Andrew F. Davis
  0 siblings, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-15 20:24 UTC (permalink / raw)
  To: Christoph Hellwig, John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/15/19 3:54 AM, Christoph Hellwig wrote:
>> +static int dma_heap_release(struct inode *inode, struct file *filp)
>> +{
>> +	filp->private_data = NULL;
>> +
>> +	return 0;
>> +}
> 
> No point in clearing ->private_data, the file is about to be freed.
> 

This was left over from when release had some memory to free; will remove.

>> +
>> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
>> +			   unsigned long arg)
> 
> Pleae don't use the weird legacy filp naming, file is a perfectly
> valid and readable default name for struct file pointers.
> 

Thanks for the info. I saw both used, and this was the naming where I found
the prototype, so I used it too; will fix.

>> +{
>> +	switch (cmd) {
>> +	case DMA_HEAP_IOC_ALLOC:
>> +	{
>> +		struct dma_heap_allocation_data heap_allocation;
>> +		struct dma_heap *heap = filp->private_data;
>> +		int fd;
> 
> Please split each ioctl into a separate function from the very start,
> otherwise this will grow into a spaghetty mess sooner than you can
> see cheese.
> 

Good idea, will fix.

>> +	dev_ret = device_create(dma_heap_class,
>> +				NULL,
>> +				heap->heap_devt,
>> +				NULL,
>> +				heap->name);
> 
> No need this weird argument alignment.
> 

I kinda like it this way: if everything can't fit on one line then
everything gets its own line, which seems more consistent. If there is a
strong objection I can fix it.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (5 preceding siblings ...)
  2019-03-13 20:11 ` [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) Liam Mark
@ 2019-03-15 20:34 ` Laura Abbott
  2019-03-15 23:15 ` Jerome Glisse
  7 siblings, 0 replies; 68+ messages in thread
From: Laura Abbott @ 2019-03-15 20:34 UTC (permalink / raw)
  To: John Stultz, lkml
  Cc: Benjamin Gaignard, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/5/19 12:54 PM, John Stultz wrote:
> Here is a initial RFC of the dma-buf heaps patchset Andrew and I
> have been working on which tries to destage a fair chunk of ION
> functionality.
> 
> The patchset implements per-heap devices which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> heap.
> 
> The interface is similar, but much simpler then IONs, only
> providing an ALLOC ioctl.
> 
> Also, I've provided simple system and cma heaps. The system
> heap in particular is missing the page-pool optimizations ION
> had, but works well enough to validate the interface.
> 
> I've booted and tested these patches with AOSP on the HiKey960
> using the kernel tree here:
>    https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> 
> And the userspace changes here:
>    https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
> 
> 
> Compared to ION, this patchset is missing the system-contig,
> carveout and chunk heaps, as I don't have a device that uses
> those, so I'm unable to do much useful validation there.
> Additionally we have no upstream users of chunk or carveout,
> and the system-contig has been deprecated in the common/andoid-*
> kernels, so this should be ok.
> 
> I've also removed the stats accounting for now, since it should
> be implemented by the heaps themselves.
> 
> Eventual TODOS:
> * Reimplement page-pool for system heap (working on this)
> * Add stats accounting to system/cma heaps
> * Make the kselftest actually useful
> * Add other heaps folks see as useful (would love to get
>    some help from actual carveout/chunk users)!
> 
> That said, the main user-interface is shaping up and I wanted
> to get some input on the device model (particularly from GreKH)
> and any other API/ABI specific input.
> 
> thanks
> -john
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> 
> Andrew F. Davis (1):
>    dma-buf: Add dma-buf heaps framework
> 
> John Stultz (4):
>    dma-buf: heaps: Add heap helpers
>    dma-buf: heaps: Add system heap to dmabuf heaps
>    dma-buf: heaps: Add CMA heap to dmabuf heapss
>    kselftests: Add dma-heap test
> 
>   MAINTAINERS                                        |  16 +
>   drivers/dma-buf/Kconfig                            |  10 +
>   drivers/dma-buf/Makefile                           |   2 +
>   drivers/dma-buf/dma-heap.c                         | 191 ++++++++++++
>   drivers/dma-buf/heaps/Kconfig                      |  14 +
>   drivers/dma-buf/heaps/Makefile                     |   4 +
>   drivers/dma-buf/heaps/cma_heap.c                   | 164 ++++++++++
>   drivers/dma-buf/heaps/heap-helpers.c               | 335 +++++++++++++++++++++
>   drivers/dma-buf/heaps/heap-helpers.h               |  48 +++
>   drivers/dma-buf/heaps/system_heap.c                | 132 ++++++++
>   include/linux/dma-heap.h                           |  65 ++++
>   include/uapi/linux/dma-heap.h                      |  52 ++++
>   tools/testing/selftests/dmabuf-heaps/Makefile      |  11 +
>   tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c |  96 ++++++
>   14 files changed, 1140 insertions(+)
>   create mode 100644 drivers/dma-buf/dma-heap.c
>   create mode 100644 drivers/dma-buf/heaps/Kconfig
>   create mode 100644 drivers/dma-buf/heaps/Makefile
>   create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>   create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>   create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>   create mode 100644 drivers/dma-buf/heaps/system_heap.c
>   create mode 100644 include/linux/dma-heap.h
>   create mode 100644 include/uapi/linux/dma-heap.h
>   create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>   create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> 

This is looking really great. Thanks for doing the work to
push this forward. It seems like we're in general agreement
about much of this. Which of the issues that have come up
do you think are a "hard no" that would keep this from going in?

Thanks,
Laura

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test
  2019-03-15 20:13         ` John Stultz
@ 2019-03-15 20:49           ` Laura Abbott
  0 siblings, 0 replies; 68+ messages in thread
From: Laura Abbott @ 2019-03-15 20:49 UTC (permalink / raw)
  To: John Stultz
  Cc: Benjamin Gaignard, lkml, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On 3/15/19 1:13 PM, John Stultz wrote:
> On Fri, Mar 15, 2019 at 1:07 PM Laura Abbott <labbott@redhat.com> wrote:
>>
>> On 3/6/19 9:01 AM, John Stultz wrote:
>>> On Wed, Mar 6, 2019 at 8:14 AM Benjamin Gaignard
>>> <benjamin.gaignard@linaro.org> wrote:
>>>> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
>>>>> +
>>>>> +       printf("Allocating 1 MEG\n");
>>>>> +       ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
>>>>> +       if (ret)
>>>>> +               goto out;
>>>>> +
>>>>> +       /* DO SOMETHING WITH THE DMABUF HERE? */
>>>>
>>>> You can do a call to mmap and write a pattern in the buffer.
>>>
>>> Yea. I can also do some invalid allocations to make sure things fail properly.
>>>
>>> But I was talking a bit w/ Sumit about the lack of any general dmabuf
>>> tests, and am curious if we need to have an importer device driver that
>>> can validate it's a real dmabuf and exercise more of the dmabuf ops.
>>>
>>> thanks
>>> -john
>>>
>>
>> There's the vgem driver in drm. I did some work to clean that
>> up so it could take an import; see af33a9190d02 ("drm/vgem: Enable dmabuf import
>> interfaces"). I mostly used it for private tests and never ended
>> up upstreaming any of the tests.
> 
> Thanks for the pointer, I'll check that out as well!  Also, if you
> still have them around, I'd be interested in checking out the tests to
> try to get something integrated into kselftest.
> 
> Talking with Brian yesterday, there was some thought that we should
> try to put together some sort of example dmabuf pipeline that isn't
> hardware dependent and can be used to demonstrate the usage model as
> well as validate the frameworks and maybe even benchmark some of the
> ideas floating around right now.  So suggestions here would be
> welcome!
> 

So the existing ion selftest (tools/testing/selftests/android/ion)
does make use of the import to do some very simple tests.
I can't seem to find the more complex tests I had, though;
they may have been lost during my last machine move :(

I do think building off of vgem would be a good first step
for a testing pipeline, although I worry we wouldn't be
able to measure caching effects without a real device since
setting up coherency testing otherwise seems error prone.

Thanks,
Laura

> thanks
> -john
> 


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-15 20:18   ` Laura Abbott
@ 2019-03-15 20:49     ` Andrew F. Davis
  2019-03-15 21:29     ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-15 20:49 UTC (permalink / raw)
  To: Laura Abbott, John Stultz, lkml
  Cc: Benjamin Gaignard, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Chenbo Feng, Alistair Strachan, dri-devel

On 3/15/19 3:18 PM, Laura Abbott wrote:
> On 3/5/19 12:54 PM, John Stultz wrote:
>> +DMA-BUF HEAPS FRAMEWORK
>> +M:    Laura Abbott<labbott@redhat.com>
>> +R:    Liam Mark<lmark@codeaurora.org>
>> +R:    Brian Starkey<Brian.Starkey@arm.com>
>> +R:    "Andrew F. Davis"<afd@ti.com>
>> +R:    John Stultz<john.stultz@linaro.org>
>> +S:    Maintained
>> +L:    linux-media@vger.kernel.org
>> +L:    dri-devel@lists.freedesktop.org
>> +L:    linaro-mm-sig@lists.linaro.org  (moderated for non-subscribers)
>> +F:    include/uapi/linux/dma-heap.h
>> +F:    include/linux/dma-heap.h
>> +F:    drivers/dma-buf/dma-heap.c
>> +F:    drivers/dma-buf/heaps/*
>> +T:    git git://anongit.freedesktop.org/drm/drm-misc
> 
> So I talked about this with Sumit privately but I think
> it might make sense to have me step down as maintainer when
> this goes out of staging. I mostly worked on Ion at my
> previous position and anything I do now is mostly a side
> project. I still want to see it succeed which is why I
> took on the maintainer role but I don't want to become blocking
> for people who have a stronger vision about where this needs
> to go (see also, I'm not working with this on a daily basis).
> 
> If you just want someone to help review or take patches
> to be pulled, I'm happy to do so but I'd hate to become
> the bottleneck on getting things done for people who
> are attempting to do real work.
> 

We could consider this as an "ION inspired" framework, and treat it like
an extension of DMA-BUF, in which case Sumit could become the default
maintainer if he's up for it.

Andrew

> Thanks,
> Laura

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-15 20:18   ` Laura Abbott
  2019-03-15 20:49     ` Andrew F. Davis
@ 2019-03-15 21:29     ` John Stultz
  2019-03-15 22:44       ` Laura Abbott
  1 sibling, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-15 21:29 UTC (permalink / raw)
  To: Laura Abbott
  Cc: lkml, Andrew F. Davis, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On Fri, Mar 15, 2019 at 1:18 PM Laura Abbott <labbott@redhat.com> wrote:
>
> On 3/5/19 12:54 PM, John Stultz wrote:
> > +DMA-BUF HEAPS FRAMEWORK
> > +M:   Laura Abbott<labbott@redhat.com>
> > +R:   Liam Mark<lmark@codeaurora.org>
> > +R:   Brian Starkey<Brian.Starkey@arm.com>
> > +R:   "Andrew F. Davis"<afd@ti.com>
> > +R:   John Stultz<john.stultz@linaro.org>
> > +S:   Maintained
> > +L:   linux-media@vger.kernel.org
> > +L:   dri-devel@lists.freedesktop.org
> > +L:   linaro-mm-sig@lists.linaro.org  (moderated for non-subscribers)
> > +F:   include/uapi/linux/dma-heap.h
> > +F:   include/linux/dma-heap.h
> > +F:   drivers/dma-buf/dma-heap.c
> > +F:   drivers/dma-buf/heaps/*
> > +T:   git git://anongit.freedesktop.org/drm/drm-misc
>
> So I talked about this with Sumit privately but I think
> it might make sense to have me step down as maintainer when
> this goes out of staging. I mostly worked on Ion at my
> previous position and anything I do now is mostly a side
> project. I still want to see it succeed which is why I
> took on the maintainer role but I don't want to become blocking
> for people who have a stronger vision about where this needs
> to go (see also, I'm not working with this on a daily basis).
>
> If you just want someone to help review or take patches
> to be pulled, I'm happy to do so but I'd hate to become
> the bottleneck on getting things done for people who
> are attempting to do real work.

I worry this will make everyone touch the side of their nose and
yell "NOT IT!" :)

First of all, thank you so much for your efforts maintaining ION along
with your attempts to drag out requirements from interested parties
and the numerous attempts to get collaborative discussion going at
countless conferences! Your persistence and continual nudging in the
face of apathetic private users of the code probably cannot be
appreciated enough!

Your past practical experience with ION and active work with the
upstream community made you a standout pick for this, but I
understand not wanting to be eternally stuck with a maintainership if
you're not active in the area.  I'm happy to volunteer as a neutral
party, but I worry my limited experience with some of the more
complicated usage would make my opinions less informed than they
probably need to be.  Further, as a neutral party, Sumit would
probably be a better pick since he's already maintaining the dmabuf
core.

So I'd nominate Andrew, Liam or Benjamin (or all three?) as they all
have more practical experience enabling past ION heaps on real devices
and have demonstrated active interest in working in the community.

So, in other words... NOT IT! :)
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-15 21:29     ` John Stultz
@ 2019-03-15 22:44       ` Laura Abbott
  2019-03-18  4:41         ` Sumit Semwal
  0 siblings, 1 reply; 68+ messages in thread
From: Laura Abbott @ 2019-03-15 22:44 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/15/19 2:29 PM, John Stultz wrote:
> On Fri, Mar 15, 2019 at 1:18 PM Laura Abbott <labbott@redhat.com> wrote:
>>
>> On 3/5/19 12:54 PM, John Stultz wrote:
>>> +DMA-BUF HEAPS FRAMEWORK
>>> +M:   Laura Abbott<labbott@redhat.com>
>>> +R:   Liam Mark<lmark@codeaurora.org>
>>> +R:   Brian Starkey<Brian.Starkey@arm.com>
>>> +R:   "Andrew F. Davis"<afd@ti.com>
>>> +R:   John Stultz<john.stultz@linaro.org>
>>> +S:   Maintained
>>> +L:   linux-media@vger.kernel.org
>>> +L:   dri-devel@lists.freedesktop.org
>>> +L:   linaro-mm-sig@lists.linaro.org  (moderated for non-subscribers)
>>> +F:   include/uapi/linux/dma-heap.h
>>> +F:   include/linux/dma-heap.h
>>> +F:   drivers/dma-buf/dma-heap.c
>>> +F:   drivers/dma-buf/heaps/*
>>> +T:   git git://anongit.freedesktop.org/drm/drm-misc
>>
>> So I talked about this with Sumit privately but I think
>> it might make sense to have me step down as maintainer when
>> this goes out of staging. I mostly worked on Ion at my
>> previous position and anything I do now is mostly a side
>> project. I still want to see it succeed which is why I
>> took on the maintainer role but I don't want to become blocking
>> for people who have a stronger vision about where this needs
>> to go (see also, I'm not working with this on a daily basis).
>>
>> If you just want someone to help review or take patches
>> to be pulled, I'm happy to do so but I'd hate to become
>> the bottleneck on getting things done for people who
>> are attempting to do real work.
> 
> I worry this will make everyone to touch the side of their nose and
> yell "NOT IT!" :)
> 
> First of all, thank you so much for your efforts maintaining ION along
> with your attempts to drag out requirements from interested parties
> and the numerous attempts to get collaborative discussion going at
> countless conferences! Your persistence and continual nudging in the
> face of apathetic private users of the code probably cannot be
> appreciated enough!
> 
> Your past practical experience with ION and active work with the
> upstream community made you a stand out pick for this, but I
> understand not wanting to be eternally stuck with a maintainership if
> your not active in the area.  I'm happy to volunteer as a neutral
> party, but I worry my limited experience with some of the more
> complicated usage would make my opinions less informed then they
> probably need to be.  Further, as a neutral party, Sumit would
> probably be a better pick since he's already maintaining the dmabuf
> core.
> 

Honestly if you're doing the work to re-write everything, I
think you're more than qualified to be the maintainer. I
would support Sumit as well.

> So I'd nominate Andrew, Liam or Benjamin (or all three?) as they all
> have more practical experience enabling past ION heaps on real devices
> and have demonstrated active interest in working in the community.
> 

I do think this would benefit both from multiple maintainers and
from maintainers who are actively using the framework. Like I
said, I can still be a maintainer, but I think having some co-maintainers
would be very helpful (and I'd support any of the names you've
suggested).

> So, in other words... NOT IT! :)

I think you have to shout "Nose goes" first. :)

> -john
> 

Thanks,
Laura

P.S. For the benefit of anyone who's confused,
https://en.wikipedia.org/wiki/Nose_goes

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (6 preceding siblings ...)
  2019-03-15 20:34 ` Laura Abbott
@ 2019-03-15 23:15 ` Jerome Glisse
  2019-03-16  0:16   ` John Stultz
  7 siblings, 1 reply; 68+ messages in thread
From: Jerome Glisse @ 2019-03-15 23:15 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Greg KH, Chenbo Feng, Alistair Strachan, Liam Mark,
	dri-devel, Andrew F . Davis

On Tue, Mar 05, 2019 at 12:54:28PM -0800, John Stultz wrote:
> Here is a initial RFC of the dma-buf heaps patchset Andrew and I
> have been working on which tries to destage a fair chunk of ION
> functionality.
> 
> The patchset implements per-heap devices which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> heap.
> 
> The interface is similar, but much simpler then IONs, only
> providing an ALLOC ioctl.
> 
> Also, I've provided simple system and cma heaps. The system
> heap in particular is missing the page-pool optimizations ION
> had, but works well enough to validate the interface.
> 
> I've booted and tested these patches with AOSP on the HiKey960
> using the kernel tree here:
>   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> 
> And the userspace changes here:
>   https://android-review.googlesource.com/c/device/linaro/hikey/+/909436

What upstream driver will use this eventually? And why is it
needed?

Cheers,
Jérôme

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-15 23:15 ` Jerome Glisse
@ 2019-03-16  0:16   ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-16  0:16 UTC (permalink / raw)
  To: Jerome Glisse
  Cc: lkml, Greg KH, Chenbo Feng, Alistair Strachan, Liam Mark,
	dri-devel, Andrew F . Davis

On Fri, Mar 15, 2019 at 4:15 PM Jerome Glisse <jglisse@redhat.com> wrote:
> On Tue, Mar 05, 2019 at 12:54:28PM -0800, John Stultz wrote:
> > Here is a initial RFC of the dma-buf heaps patchset Andrew and I
> > have been working on which tries to destage a fair chunk of ION
> > functionality.
> >
> > The patchset implements per-heap devices which can be opened
> > directly and then an ioctl is used to allocate a dmabuf from the
> > heap.
> >
> > The interface is similar, but much simpler then IONs, only
> > providing an ALLOC ioctl.
> >
> > Also, I've provided simple system and cma heaps. The system
> > heap in particular is missing the page-pool optimizations ION
> > had, but works well enough to validate the interface.
> >
> > I've booted and tested these patches with AOSP on the HiKey960
> > using the kernel tree here:
> >   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> >
> > And the userspace changes here:
> >   https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
>
> What upstream driver will use this eventually? And why is it
> needed?

So, it's sort of a complicated answer, as we don't have a fully open
pipeline just yet.  The HiKey board's upstream kirin drm driver uses
this, as it needs contiguous buffers for its framebuffers. So in
Android the HiKey gralloc (opensource userspace) allocates the HW_FB
buffers from the CMA heap.  Other graphics buffers are then allocated
by gralloc out of the system heap, and SurfaceFlinger and drm_hwc
(both also opensource userspace) coordinate squashing those other
buffers down through the mali utgard gpu (proprietary GL blob) onto
the target HW_FB buffer.  (All of the above is the same for the
HiKey960, but we're still working the drm driver into shape for
upstreaming.)

That said, I know the Lima driver is starting to shape up, and I'm
hoping to give it a whirl to replace the proprietary utgard driver.
Everything else would stay the same, which would give us a fully open
pipeline.

I know for other dev boards like the db410c w/ freedreno, the Android
pipeline gets away with using the gbm gralloc implementation, but my
understanding in that case is the rendering/display pipeline doesn't
require contiguous buffers, so the allocation logic can be simpler
and doesn't use ION heaps. But it's possible a gralloc could be
implemented to use the system heap for allocations on that device.
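
For reference, the userspace side of those gralloc allocations boils
down to something like the sketch below. The device path, ioctl name
and struct layout here are assumptions based on this series' uapi and
may not match the final version:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>	/* uapi header added by this series */

/* Allocate a dmabuf of 'len' bytes from the given heap device. */
static int heap_alloc(const char *heap_path, size_t len)
{
	/* field names here are an assumption about the uapi struct */
	struct dma_heap_allocation_data data = {
		.len = len,
		.flags = 0,
	};
	int heap_fd, ret;

	heap_fd = open(heap_path, O_RDWR);	/* e.g. the CMA or system heap node */
	if (heap_fd < 0)
		return -1;

	ret = ioctl(heap_fd, DMA_HEAP_IOC_ALLOC, &data);	/* ioctl name assumed */
	close(heap_fd);
	if (ret < 0)
		return -1;

	return data.fd;		/* dmabuf fd to hand to the consumer */
}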

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-15 22:44       ` Laura Abbott
@ 2019-03-18  4:41         ` Sumit Semwal
  0 siblings, 0 replies; 68+ messages in thread
From: Sumit Semwal @ 2019-03-18  4:41 UTC (permalink / raw)
  To: Laura Abbott
  Cc: John Stultz, lkml, Andrew F. Davis, Benjamin Gaignard, Greg KH,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On Sat, 16 Mar 2019 at 04:15, Laura Abbott <labbott@redhat.com> wrote:
>
> On 3/15/19 2:29 PM, John Stultz wrote:
> > On Fri, Mar 15, 2019 at 1:18 PM Laura Abbott <labbott@redhat.com> wrote:
> >>
> >> On 3/5/19 12:54 PM, John Stultz wrote:
> >>> +DMA-BUF HEAPS FRAMEWORK
> >>> +M:   Laura Abbott<labbott@redhat.com>
> >>> +R:   Liam Mark<lmark@codeaurora.org>
> >>> +R:   Brian Starkey<Brian.Starkey@arm.com>
> >>> +R:   "Andrew F. Davis"<afd@ti.com>
> >>> +R:   John Stultz<john.stultz@linaro.org>
> >>> +S:   Maintained
> >>> +L:   linux-media@vger.kernel.org
> >>> +L:   dri-devel@lists.freedesktop.org
> >>> +L:   linaro-mm-sig@lists.linaro.org  (moderated for non-subscribers)
> >>> +F:   include/uapi/linux/dma-heap.h
> >>> +F:   include/linux/dma-heap.h
> >>> +F:   drivers/dma-buf/dma-heap.c
> >>> +F:   drivers/dma-buf/heaps/*
> >>> +T:   git git://anongit.freedesktop.org/drm/drm-misc
> >>
> >> So I talked about this with Sumit privately but I think
> >> it might make sense to have me step down as maintainer when
> >> this goes out of staging. I mostly worked on Ion at my
> >> previous position and anything I do now is mostly a side
> >> project. I still want to see it succeed which is why I
> >> took on the maintainer role but I don't want to become blocking
> >> for people who have a stronger vision about where this needs
> >> to go (see also, I'm not working with this on a daily basis).
> >>
> >> If you just want someone to help review or take patches
> >> to be pulled, I'm happy to do so but I'd hate to become
> >> the bottleneck on getting things done for people who
> >> are attempting to do real work.
> >
> > I worry this will make everyone to touch the side of their nose and
> > yell "NOT IT!" :)
> >
> > First of all, thank you so much for your efforts maintaining ION along
> > with your attempts to drag out requirements from interested parties
> > and the numerous attempts to get collaborative discussion going at
> > countless conferences! Your persistence and continual nudging in the
> > face of apathetic private users of the code probably cannot be
> > appreciated enough!
I totally second John here - the persistence has been inspiring to me
personally as well :)

> >
> > Your past practical experience with ION and active work with the
> > upstream community made you a stand out pick for this, but I
> > understand not wanting to be eternally stuck with a maintainership if
> > your not active in the area.  I'm happy to volunteer as a neutral
> > party, but I worry my limited experience with some of the more
> > complicated usage would make my opinions less informed then they
> > probably need to be.  Further, as a neutral party, Sumit would
> > probably be a better pick since he's already maintaining the dmabuf
> > core.
> >
>
> Honestly if you're doing the work to re-write everything, I
> think you're more than qualified to be the maintainer. I
> would also support Sumit as well.
>
> > So I'd nominate Andrew, Liam or Benjamin (or all three?) as they all
> > have more practical experience enabling past ION heaps on real devices
> > and have demonstrated active interest in working in the community.
> >
>
> I do think this would benefit both from multiple maintainers and
> from maintainers who are actively using the framework. Like I
> said, I can still be a maintainer but I think having some comaintainers
> would be very helpful (and I'd support any of the names you've
> suggested)
If it's required, I am happy to co-maintain - we could even follow the
drm-misc model of multiple co-maintainers. I will support any or all
of the names above as well :)

>
> > So, in other words... NOT IT! :)
>
> I think you have to shout "Noes goes" first. :)
>
> > -john
> >
>
> Thanks,
> Laura
>
> P.S. For the benefit of anyone who's confused,
> https://en.wikipedia.org/wiki/Nose_goes


Best,
Sumit.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
                     ` (3 preceding siblings ...)
  2019-03-15 20:18   ` Laura Abbott
@ 2019-03-19 12:08   ` Brian Starkey
  2019-03-19 15:24     ` Andrew F. Davis
  2019-03-21 21:16     ` John Stultz
  2019-03-27 14:53   ` Greg KH
  5 siblings, 2 replies; 68+ messages in thread
From: Brian Starkey @ 2019-03-19 12:08 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Chenbo Feng, Alistair Strachan,
	dri-devel, nd

Hi John,

On Tue, Mar 05, 2019 at 12:54:29PM -0800, John Stultz wrote:
> From: "Andrew F. Davis" <afd@ti.com>

[snip]

> +
> +#define NUM_HEAP_MINORS 128
> +static DEFINE_IDR(dma_heap_idr);
> +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */

I saw that Matthew Wilcox is trying to nuke idr:
https://patchwork.freedesktop.org/series/57073/

Perhaps a different data structure could be considered? (I don't have
an informed opinion on which).

> +
> +dev_t dma_heap_devt;
> +struct class *dma_heap_class;
> +struct list_head dma_heap_list;
> +struct dentry *dma_heap_debug_root;
> +
> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> +				 unsigned int flags)
> +{
> +	len = PAGE_ALIGN(len);
> +	if (!len)
> +		return -EINVAL;

I think aligning len to pages only makes sense if heaps are going to
allocate aligned to pages too. Perhaps that's an implicit assumption?
If so, let's document it.

Why not let the heaps take care of aligning len however they want
though?

...

> +
> +int dma_heap_add(struct dma_heap *heap)
> +{
> +	struct device *dev_ret;
> +	int ret;
> +
> +	if (!heap->name || !strcmp(heap->name, "")) {
> +		pr_err("dma_heap: Cannot add heap without a name\n");
> +		return -EINVAL;
> +	}
> +
> +	if (!heap->ops || !heap->ops->allocate) {
> +		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Find unused minor number */
> +	mutex_lock(&minor_lock);
> +	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
> +	mutex_unlock(&minor_lock);
> +	if (ret < 0) {
> +		pr_err("dma_heap: Unable to get minor number for heap\n");
> +		return ret;
> +	}
> +	heap->minor = ret;
> +
> +	/* Create device */
> +	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> +	dev_ret = device_create(dma_heap_class,
> +				NULL,
> +				heap->heap_devt,
> +				NULL,
> +				heap->name);
> +	if (IS_ERR(dev_ret)) {
> +		pr_err("dma_heap: Unable to create char device\n");
> +		return PTR_ERR(dev_ret);
> +	}
> +
> +	/* Add device */
> +	cdev_init(&heap->heap_cdev, &dma_heap_fops);
> +	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);

Shouldn't this be s/dma_heap_devt/heap->heap_devt/ and a count of 1?

Also would it be better to have cdev_add/device_create the other way
around? First create the char device, then once it's all set up
register it with sysfs.

> +	if (ret < 0) {
> +		device_destroy(dma_heap_class, heap->heap_devt);
> +		pr_err("dma_heap: Unable to add char device\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(dma_heap_add);

Until we've figured out how modules are going to work, I still think
it would be a good idea to not export this.

Cheers,
-Brian


^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
  2019-03-13 20:18   ` Liam Mark
  2019-03-15  9:06   ` Christoph Hellwig
@ 2019-03-19 14:26   ` Brian Starkey
  2019-03-21 20:11     ` John Stultz
  2019-03-21 20:35     ` Andrew F. Davis
  2019-03-21 20:43   ` Andrew F. Davis
  3 siblings, 2 replies; 68+ messages in thread
From: Brian Starkey @ 2019-03-19 14:26 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, nd

Hi John,

On Tue, Mar 05, 2019 at 12:54:30PM -0800, John Stultz wrote:

...

> +
> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	if (buffer->kmap_cnt > 0) {
> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
> +			     __func__);

Could be worth something louder like a full WARN.
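
Something like this, perhaps (untested sketch):

	if (buffer->kmap_cnt > 0) {
		/* Shout loudly with a backtrace instead of a one-time warning */
		WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
		vunmap(buffer->vaddr);
	}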

> +		vunmap(buffer->vaddr);
> +	}
> +
> +	buffer->free(buffer);
> +}
> +

...

> +
> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
> +					unsigned long offset)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	return buffer->vaddr + offset * PAGE_SIZE;

I think it'd be good to check for NULL vaddr and return NULL in that
case. Less chance of an invalid pointer being accidentally used then.
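
i.e. roughly (untested sketch, reusing the helpers above):

	static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
					   unsigned long offset)
	{
		struct dma_heap_buffer *heap_buffer = dmabuf->priv;
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);

		/* Don't hand out a bogus pointer if the buffer was never vmapped */
		if (!buffer->vaddr)
			return NULL;

		return buffer->vaddr + offset * PAGE_SIZE;
	}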

Thanks,
-Brian

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss
  2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss John Stultz
  2019-03-06 16:05   ` Benjamin Gaignard
  2019-03-15  9:06   ` Christoph Hellwig
@ 2019-03-19 14:53   ` Brian Starkey
  2 siblings, 0 replies; 68+ messages in thread
From: Brian Starkey @ 2019-03-19 14:53 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, nd

On Tue, Mar 05, 2019 at 12:54:32PM -0800, John Stultz wrote:
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original author and maintainters:
>   Benjamin Gaignard, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> ---
>  drivers/dma-buf/heaps/Kconfig    |   8 ++
>  drivers/dma-buf/heaps/Makefile   |   1 +
>  drivers/dma-buf/heaps/cma_heap.c | 164 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 173 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
> 
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 2050527..a5eef06 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
>  	help
>  	  Choose this option to enable the system dmabuf heap. The system heap
>  	  is backed by pages from the buddy allocator. If in doubt, say Y.
> +
> +config DMABUF_HEAPS_CMA
> +	bool "DMA-BUF CMA Heap"
> +	depends on DMABUF_HEAPS && DMA_CMA
> +	help
> +	  Choose this option to enable dma-buf CMA heap. This heap is backed
> +	  by the Contiguous Memory Allocator (CMA). If your system has these
> +	  regions, you should say Y here.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index d1808ec..6e54cde 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-y					+= heap-helpers.o
>  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> new file mode 100644
> index 0000000..33c18ec
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -0,0 +1,164 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF CMA heap exporter
> + *
> + * Copyright (C) 2012, 2019 Linaro Ltd.
> + * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/slab.h>
> +#include <linux/errno.h>
> +#include <linux/err.h>
> +#include <linux/cma.h>
> +#include <linux/scatterlist.h>
> +#include <linux/highmem.h>
> +
> +#include "heap-helpers.h"
> +
> +struct cma_heap {
> +	struct dma_heap heap;
> +	struct cma *cma;
> +};
> +
> +

extra line

> +#define to_cma_heap(x) container_of(x, struct cma_heap, heap)
> +
> +

extra line

> +static void cma_heap_free(struct heap_helper_buffer *buffer)
> +{
> +	struct cma_heap *cma_heap = to_cma_heap(buffer->heap_buffer.heap);
> +	struct page *pages = buffer->priv_virt;
> +	unsigned long nr_pages;
> +
> +	nr_pages = PAGE_ALIGN(buffer->heap_buffer.size) >> PAGE_SHIFT;

As you align at alloc time, I don't think the PAGE_ALIGN is really
necessary here.

> +
> +	/* release memory */
> +	cma_release(cma_heap->cma, pages, nr_pages);
> +	/* release sg table */
> +	sg_free_table(buffer->sg_table);
> +	kfree(buffer->sg_table);
> +	kfree(buffer);
> +}
> +
> +/* dmabuf heap CMA operations functions */
> +static int cma_heap_allocate(struct dma_heap *heap,
> +				unsigned long len,
> +				unsigned long flags)
> +{
> +	struct cma_heap *cma_heap = to_cma_heap(heap);
> +	struct heap_helper_buffer *helper_buffer;
> +	struct sg_table *table;
> +	struct page *pages;
> +	size_t size = PAGE_ALIGN(len);
> +	unsigned long nr_pages = size >> PAGE_SHIFT;
> +	unsigned long align = get_order(size);
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +	struct dma_buf *dmabuf;
> +	int ret = -ENOMEM;
> +
> +	if (align > CONFIG_CMA_ALIGNMENT)
> +		align = CONFIG_CMA_ALIGNMENT;
> +
> +	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +	if (!helper_buffer)
> +		return -ENOMEM;
> +
> +	INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
> +	helper_buffer->heap_buffer.flags = flags;
> +	helper_buffer->heap_buffer.heap = heap;
> +	helper_buffer->heap_buffer.size = len;
> +
> +

extra line

> +	pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> +	if (!pages)
> +		goto free_buf;
> +
> +	if (PageHighMem(pages)) {
> +		unsigned long nr_clear_pages = nr_pages;
> +		struct page *page = pages;
> +
> +		while (nr_clear_pages > 0) {
> +			void *vaddr = kmap_atomic(page);
> +
> +			memset(vaddr, 0, PAGE_SIZE);
> +			kunmap_atomic(vaddr);
> +			page++;
> +			nr_clear_pages--;
> +		}
> +	} else {
> +		memset(page_address(pages), 0, size);
> +	}
> +
> +	table = kmalloc(sizeof(*table), GFP_KERNEL);
> +	if (!table)
> +		goto free_cma;
> +
> +	ret = sg_alloc_table(table, 1, GFP_KERNEL);
> +	if (ret)
> +		goto free_table;
> +
> +	sg_set_page(table->sgl, pages, size, 0);
> +
> +	/* create the dmabuf */
> +	exp_info.ops = &heap_helper_ops;
> +	exp_info.size = len;
> +	exp_info.flags = O_RDWR;
> +	exp_info.priv = &helper_buffer->heap_buffer;
> +	dmabuf = dma_buf_export(&exp_info);
> +	if (IS_ERR(dmabuf)) {
> +		ret = PTR_ERR(dmabuf);
> +		goto free_table;
> +	}
> +
> +	helper_buffer->heap_buffer.dmabuf = dmabuf;
> +	helper_buffer->priv_virt = pages;
> +	helper_buffer->sg_table = table;
> +
> +	ret = dma_buf_fd(dmabuf, O_CLOEXEC);
> +	if (ret < 0) {
> +		dma_buf_put(dmabuf);
> +		/* just return, as put will call release and that will free */
> +		return ret;
> +	}
> +
> +	return ret;
> +free_table:
> +	kfree(table);
> +free_cma:
> +	cma_release(cma_heap->cma, pages, nr_pages);
> +free_buf:
> +	kfree(helper_buffer);
> +	return ret;
> +}
> +
> +static struct dma_heap_ops cma_heap_ops = {
> +	.allocate = cma_heap_allocate,
> +};
> +
> +static int __add_cma_heaps(struct cma *cma, void *data)

nit: __add_cma_heap (not plural) seems more accurate.

Whatever you decide for the above, you can add my r-b.

Thanks,
-Brian

> +{
> +	struct cma_heap *cma_heap;
> +
> +	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
> +
> +	if (!cma_heap)
> +		return -ENOMEM;
> +
> +	cma_heap->heap.name = cma_get_name(cma);
> +	cma_heap->heap.ops = &cma_heap_ops;
> +	cma_heap->cma = cma;
> +
> +	dma_heap_add(&cma_heap->heap);
> +
> +	return 0;
> +}
> +
> +static int add_cma_heaps(void)
> +{
> +	cma_for_each_area(__add_cma_heaps, NULL);
> +	return 0;
> +}
> +device_initcall(add_cma_heaps);
> -- 
> 2.7.4
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-15  9:06   ` Christoph Hellwig
@ 2019-03-19 15:03     ` Andrew F. Davis
  2019-03-21 20:01     ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-19 15:03 UTC (permalink / raw)
  To: Christoph Hellwig, John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/15/19 4:06 AM, Christoph Hellwig wrote:
>> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
>> +				      vma->vm_page_prot);
> 
> So the same chunk could be mapped to userspace and vmap, and later on
> also DMA mapped.  Who is going to take care of cache aliasing as I
> see nothing of that in this series?
> 

We should only have one type of memory per heap, so all mappings will
have the same type. That should solve the ARM-specific issues, but I'm
guessing you are thinking of more tricky architectures where all
mappings need to be tracked and cleaned/invalidated...

For that I think we will have to track each mapping, right? How do
others handle that? We can't be the first to offer cached buffers to
userspace.

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-19 12:08   ` Brian Starkey
@ 2019-03-19 15:24     ` Andrew F. Davis
  2019-03-21 21:16     ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-19 15:24 UTC (permalink / raw)
  To: Brian Starkey, John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Chenbo Feng, Alistair Strachan, dri-devel, nd

On 3/19/19 7:08 AM, Brian Starkey wrote:
> Hi John,
> 
> On Tue, Mar 05, 2019 at 12:54:29PM -0800, John Stultz wrote:
>> From: "Andrew F. Davis" <afd@ti.com>
> 
> [snip]
> 
>> +
>> +#define NUM_HEAP_MINORS 128
>> +static DEFINE_IDR(dma_heap_idr);
>> +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
> 
> I saw that Matthew Wilcox is trying to nuke idr:
> https://patchwork.freedesktop.org/series/57073/
> 
> Perhaps a different data structure could be considered? (I don't have
> an informed opinion on which).
> 

Looks like XArray is the suggested replacement. Should be easy enough;
the minor number would just index to our heap struct directly. I'll
give it a shot and see.
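
Roughly what I have in mind (untested; the xa_alloc() calling
convention has been changing recently, so this assumes the xa_limit
variant and a u32 minor field in struct dma_heap):

	static DEFINE_XARRAY_ALLOC(dma_heap_minors);

	/* Find an unused minor number; the XArray does its own locking,
	 * so minor_lock can go away. */
	ret = xa_alloc(&dma_heap_minors, &heap->minor, heap,
		       XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
	if (ret < 0) {
		pr_err("dma_heap: Unable to get minor number for heap\n");
		return ret;
	}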

>> +
>> +dev_t dma_heap_devt;
>> +struct class *dma_heap_class;
>> +struct list_head dma_heap_list;
>> +struct dentry *dma_heap_debug_root;
>> +
>> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
>> +				 unsigned int flags)
>> +{
>> +	len = PAGE_ALIGN(len);
>> +	if (!len)
>> +		return -EINVAL;
> 
> I think aligning len to pages only makes sense if heaps are going to
> allocate aligned to pages too. Perhaps that's an implicit assumption?
> If so, lets document it.
> 
> Why not let the heaps take care of aligning len however they want
> though?
> 

This is how I originally had it, but I think we couldn't find any case
where you would want the start or end of a buffer to not fall on a
page boundary here. It would only lead to problems. As you say though,
there's nothing keeping us from moving that into the heaps themselves.

> ...
> 
>> +
>> +int dma_heap_add(struct dma_heap *heap)
>> +{
>> +	struct device *dev_ret;
>> +	int ret;
>> +
>> +	if (!heap->name || !strcmp(heap->name, "")) {
>> +		pr_err("dma_heap: Cannot add heap without a name\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	if (!heap->ops || !heap->ops->allocate) {
>> +		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
>> +		return -EINVAL;
>> +	}
>> +
>> +	/* Find unused minor number */
>> +	mutex_lock(&minor_lock);
>> +	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
>> +	mutex_unlock(&minor_lock);
>> +	if (ret < 0) {
>> +		pr_err("dma_heap: Unable to get minor number for heap\n");
>> +		return ret;
>> +	}
>> +	heap->minor = ret;
>> +
>> +	/* Create device */
>> +	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
>> +	dev_ret = device_create(dma_heap_class,
>> +				NULL,
>> +				heap->heap_devt,
>> +				NULL,
>> +				heap->name);
>> +	if (IS_ERR(dev_ret)) {
>> +		pr_err("dma_heap: Unable to create char device\n");
>> +		return PTR_ERR(dev_ret);
>> +	}
>> +
>> +	/* Add device */
>> +	cdev_init(&heap->heap_cdev, &dma_heap_fops);
>> +	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
> 
> Shouldn't this be s/dma_heap_devt/heap->heap_devt/ and a count of 1?
> 

Hmm, strange that this ever worked before...

> Also would it be better to have cdev_add/device_create the other way
> around? First create the char device, then once it's all set up
> register it with sysfs.
> 

Yes, that does seem to be more common; let's flip it.
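
i.e. something along these lines, with the heap_devt/count fix folded
in (sketch only):

	/* Add the char device first... */
	cdev_init(&heap->heap_cdev, &dma_heap_fops);
	ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
	if (ret < 0) {
		pr_err("dma_heap: Unable to add char device\n");
		return ret;
	}

	/* ...then expose it to userspace once it can accept opens */
	dev_ret = device_create(dma_heap_class, NULL, heap->heap_devt,
				NULL, heap->name);
	if (IS_ERR(dev_ret)) {
		cdev_del(&heap->heap_cdev);
		pr_err("dma_heap: Unable to create char device\n");
		return PTR_ERR(dev_ret);
	}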

>> +	if (ret < 0) {
>> +		device_destroy(dma_heap_class, heap->heap_devt);
>> +		pr_err("dma_heap: Unable to add char device\n");
>> +		return ret;
>> +	}
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL(dma_heap_add);
> 
> Until we've figured out how modules are going to work, I still think
> it would be a good idea to not export this.
> 

Agree.

Andrew

> Cheers,
> -Brian
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-13 22:30   ` John Stultz
  2019-03-13 23:29     ` Liam Mark
@ 2019-03-19 16:54     ` Benjamin Gaignard
  2019-03-19 16:59       ` Andrew F. Davis
  1 sibling, 1 reply; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-19 16:54 UTC (permalink / raw)
  To: John Stultz
  Cc: Liam Mark, lkml, Laura Abbott, Greg KH, Sumit Semwal,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, Vincent Donnefort, Marissa Wall

On Wed, Mar 13, 2019 at 11:31 PM John Stultz <john.stultz@linaro.org> wrote:
>
> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> > On Tue, 5 Mar 2019, John Stultz wrote:
> > >
> > > Eventual TODOS:
> > > * Reimplement page-pool for system heap (working on this)
> > > * Add stats accounting to system/cma heaps
> > > * Make the kselftest actually useful
> > > * Add other heaps folks see as useful (would love to get
> > >   some help from actual carveout/chunk users)!
> >
> > We use a modified carveout heap for certain secure use cases.
>
> Cool! It would be great to see if you have any concerns about adding
> such a secure-carveout heap to this framework. I suspect it would be
> fairly similar to how its integrated into ION, but particularly I'd be
> interested in issues around the lack of private flags and other
> allocation arguments like alignment.
>
> > Although there would probably be some benefit in discssing how the dma-buf
> > heap framework may want to support
> > secure heaps in the future it is a large topic which I assume you don't
> > want to tackle now.
>
> So I suspect others (Benjamin?) would have a more informed opinion on
> the details, but the intent is to allow secure heap implementations.
> I'm not sure what areas of concern you have for this allocation
> framework in particular?

Yes, it would be great to understand how you provide the information
that a dmabuf is secure (or not), since we can't add a flag to the
dmabuf structure itself. One option is to manage the access rights
when a device attaches itself to the dmabuf, but in that case you
need to define a list of allowed devices per heap...
If you have a good solution for secure heaps, you are welcome :-)

Benjamin
>
> > We don't have any non-secure carveout heap use cases but the client use
> > case I have seen usually revolve around
> > wanting large allocations to succeed very quickly.
> > For example I have seen camera use cases which do very large allocations
> > on camera bootup from the carveout heap, these allocations would come from
> > the carveout heap and fallback to the system heap when the carveout heap
> > was full.
> > Actual non-secure carveout heap can perhaps provide more detail.
>
> Yea, I'm aware that folks still see carveout as preferable to CMA due
> to more consistent/predictable allocation latency.  I think we still
> have the issue that we don't have bindings to establish/configure
> carveout regions w/ dts, and I'm not really wanting to hold up the
> allocation API on that issue.
>
>
> > Since we are making some fundamental changes to how ION worked and since
> > Android is likely also be the largest user of the dma-buf heaps framework
> > I think it would be good
> > to have a path to resolve the issues which are currently preventing
> > commercial Android releases from moving to the upstream version of ION.
>
> Yea, I do see solving the cache management efficiency issues as
> critical for the dmabuf heaps to be actually usable (my previous
> version of this patchset accidentally had my hacks to improve
> performance rolled in!).  And there are discussions going on in
> various channels to try to figure out how to either change Android to
> use dma-bufs more in line with how upstream expects, or what more
> generic dma-buf changes we may need to allow Android to use dmabufs
> with the expected performance they need.
>
> > I can understand if you don't necessarily want to put all/any of these
> > changes into the dma-buf heaps framework as part of this series, but my
> > hope is we can get
> > the upstream community and the Android framework team to agree on what
> > upstreamable changes to dma-buf heaps framework, and/or the Android
> > framework, would be required in order for Android to move to the upstream
> > dma-buf heaps framework for commercial devices.
>
> Yes. Though I also don't want to get the bigger dma-buf usage
> discussion (which really affects all dmabuf exporters) too tied up
> with this patch sets attempt to provide a usable allocation interface.
> Part of the problem that I think we've seen with ION is that there is
> a nest of of related issues, and the entire thing is just too big to
> address at once, which I think is part of why ION has sat in staging
> for so long. This patchset just tries to provide an dmabuf allocation
> interface, and a few example exporter heap types.
>
> > I don't mean to make this specific to Android, but my assumption is that
> > many of the ION/dma-buf heaps issues which affect Android would likely
> > affect other new large users of the dma-buf heaps framework, so if we
> > resolve it for Android we would be helping these future users as well.
> > And I do understand that some the issues facing Android may need to be
> > resolved by making changes to Android framework.
>
> While true, I also think some of the assumptions in how the dma-bufs
> are used (pre-attachment of all devices, etc) are maybe not so
> realistic given how Android is using them.  I do want to explore if
> Android can change how they use dma-bufs, but I also worry that we
> need to think about how we could loosen the expectations for dma-bufs,
> as well as trying to figure out how to support things folks have
> brought up like partial cache maintenance.
>
> > I think it would be helpful to try and get as much of this agreed upon as
> > possible before the dma-buf heaps framework moves out of staging.
> >
> > As part of my review I will highlight some of the issues which would
> > affect Android.
> > In my comments I will apply them to the system heap since that is what
> > Android currently uses for a lot of its use cases.
> > I realize that this new framework provides more flexibility to heaps, so
> > perhaps some of these issues can be solved by creating a new type of
> > system heap which Android can use, but even if the solution involves
> > creating a new system heap I would like to make sure that this "new"
> > system heap is upstreamable.
>
> So yea, I do realize I'm dodging the hard problem here, but I think
> the cache-management/usage issue is far more generic.
>
> You're right that this implementation give a lot of flexibility to the
> exporter heaps in how they implement the dmabuf ops (just like how
> other device drivers that are dmabuf exporters have the same
> flexibility), but I very much agree we don't want to add a system and
> then later a "system-android" heap. So yea, a reasonable amount of
> caution is warranted here.
>
> Thanks so much for the review and feedback! I'll try to address things
> as I can as I'm traveling this week (so I may be a bit spotty).
>
> thanks
> -john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-19 16:54     ` Benjamin Gaignard
@ 2019-03-19 16:59       ` Andrew F. Davis
  2019-03-19 21:58         ` Rob Clark
  0 siblings, 1 reply; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-19 16:59 UTC (permalink / raw)
  To: Benjamin Gaignard, John Stultz
  Cc: Liam Mark, lkml, Laura Abbott, Greg KH, Sumit Semwal,
	Brian Starkey, Chenbo Feng, Alistair Strachan, dri-devel,
	Vincent Donnefort, Marissa Wall

On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
> Le mer. 13 mars 2019 à 23:31, John Stultz <john.stultz@linaro.org> a écrit :
>>
>> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
>>> On Tue, 5 Mar 2019, John Stultz wrote:
>>>>
>>>> Eventual TODOS:
>>>> * Reimplement page-pool for system heap (working on this)
>>>> * Add stats accounting to system/cma heaps
>>>> * Make the kselftest actually useful
>>>> * Add other heaps folks see as useful (would love to get
>>>>   some help from actual carveout/chunk users)!
>>>
>>> We use a modified carveout heap for certain secure use cases.
>>
>> Cool! It would be great to see if you have any concerns about adding
>> such a secure-carveout heap to this framework. I suspect it would be
>> fairly similar to how its integrated into ION, but particularly I'd be
>> interested in issues around the lack of private flags and other
>> allocation arguments like alignment.
>>
>>> Although there would probably be some benefit in discssing how the dma-buf
>>> heap framework may want to support
>>> secure heaps in the future it is a large topic which I assume you don't
>>> want to tackle now.
>>
>> So I suspect others (Benjamin?) would have a more informed opinion on
>> the details, but the intent is to allow secure heap implementations.
>> I'm not sure what areas of concern you have for this allocation
>> framework in particular?
> 
> yes I would be great to understand how you provide the information to
> tell that a dmabuf
> is secure (or not) since we can't add flag in dmabuf structure itself.
> An option is manage
> the access rights when a device attach itself to the dmabuf but in
> this case you need define
> a list of allowed devices per heap...
> If you have a good solution for secure heaps you are welcome :-)
> 

Do we really need any of that? A secure buffer is secured by the
hardware firewalls that keep out certain IP (including, often, the
processor running Linux). So the only thing we need to track internally
is that we should not allow mmap/kmap on the buffer. That can be done
in the per-heap layer; everything else stays the same as a standard
carveout heap.
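
Concretely, I'd expect the per-heap layer to just provide dma-buf ops
whose CPU-access hooks refuse to cooperate, something like this (the
names are hypothetical, sketch only):

	/* CPU access is never allowed on firewalled memory; the DMA
	 * mapping, release, etc. ops stay the same as for a normal
	 * carveout heap. */
	static int secure_heap_mmap(struct dma_buf *dmabuf,
				    struct vm_area_struct *vma)
	{
		return -EPERM;
	}

	static void *secure_heap_kmap(struct dma_buf *dmabuf,
				      unsigned long offset)
	{
		return NULL;
	}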

Andrew

> Benjamin
>>
>>> We don't have any non-secure carveout heap use cases but the client use
>>> case I have seen usually revolve around
>>> wanting large allocations to succeed very quickly.
>>> For example I have seen camera use cases which do very large allocations
>>> on camera bootup from the carveout heap, these allocations would come from
>>> the carveout heap and fallback to the system heap when the carveout heap
>>> was full.
>>> Actual non-secure carveout heap can perhaps provide more detail.
>>
>> Yea, I'm aware that folks still see carveout as preferable to CMA due
>> to more consistent/predictable allocation latency.  I think we still
>> have the issue that we don't have bindings to establish/configure
>> carveout regions w/ dts, and I'm not really wanting to hold up the
>> allocation API on that issue.
>>
>>
>>> Since we are making some fundamental changes to how ION worked and since
>>> Android is likely also be the largest user of the dma-buf heaps framework
>>> I think it would be good
>>> to have a path to resolve the issues which are currently preventing
>>> commercial Android releases from moving to the upstream version of ION.
>>
>> Yea, I do see solving the cache management efficiency issues as
>> critical for the dmabuf heaps to be actually usable (my previous
>> version of this patchset accidentally had my hacks to improve
>> performance rolled in!).  And there are discussions going on in
>> various channels to try to figure out how to either change Android to
>> use dma-bufs more in line with how upstream expects, or what more
>> generic dma-buf changes we may need to allow Android to use dmabufs
>> with the expected performance they need.
>>
>>> I can understand if you don't necessarily want to put all/any of these
>>> changes into the dma-buf heaps framework as part of this series, but my
>>> hope is we can get
>>> the upstream community and the Android framework team to agree on what
>>> upstreamable changes to dma-buf heaps framework, and/or the Android
>>> framework, would be required in order for Android to move to the upstream
>>> dma-buf heaps framework for commercial devices.
>>
>> Yes. Though I also don't want to get the bigger dma-buf usage
>> discussion (which really affects all dmabuf exporters) too tied up
>> with this patch sets attempt to provide a usable allocation interface.
>> Part of the problem that I think we've seen with ION is that there is
>> a nest of of related issues, and the entire thing is just too big to
>> address at once, which I think is part of why ION has sat in staging
>> for so long. This patchset just tries to provide an dmabuf allocation
>> interface, and a few example exporter heap types.
>>
>>> I don't mean to make this specific to Android, but my assumption is that
>>> many of the ION/dma-buf heaps issues which affect Android would likely
>>> affect other new large users of the dma-buf heaps framework, so if we
>>> resolve it for Android we would be helping these future users as well.
>>> And I do understand that some the issues facing Android may need to be
>>> resolved by making changes to Android framework.
>>
>> While true, I also think some of the assumptions in how the dma-bufs
>> are used (pre-attachment of all devices, etc) are maybe not so
>> realistic given how Android is using them.  I do want to explore if
>> Android can change how they use dma-bufs, but I also worry that we
>> need to think about how we could loosen the expectations for dma-bufs,
>> as well as trying to figure out how to support things folks have
>> brought up like partial cache maintenance.
>>
>>> I think it would be helpful to try and get as much of this agreed upon as
>>> possible before the dma-buf heaps framework moves out of staging.
>>>
>>> As part of my review I will highlight some of the issues which would
>>> affect Android.
>>> In my comments I will apply them to the system heap since that is what
>>> Android currently uses for a lot of its use cases.
>>> I realize that this new framework provides more flexibility to heaps, so
>>> perhaps some of these issues can be solved by creating a new type of
>>> system heap which Android can use, but even if the solution involves
>>> creating a new system heap I would like to make sure that this "new"
>>> system heap is upstreamable.
>>
>> So yea, I do realize I'm dodging the hard problem here, but I think
>> the cache-management/usage issue is far more generic.
>>
>> You're right that this implementation give a lot of flexibility to the
>> exporter heaps in how they implement the dmabuf ops (just like how
>> other device drivers that are dmabuf exporters have the same
>> flexibility), but I very much agree we don't want to add a system and
>> then later a "system-android" heap. So yea, a reasonable amount of
>> caution is warranted here.
>>
>> Thanks so much for the review and feedback! I'll try to address things
>> as I can as I'm traveling this week (so I may be a bit spotty).
>>
>> thanks
>> -john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-19 16:59       ` Andrew F. Davis
@ 2019-03-19 21:58         ` Rob Clark
  2019-03-19 22:36           ` John Stultz
  0 siblings, 1 reply; 68+ messages in thread
From: Rob Clark @ 2019-03-19 21:58 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Benjamin Gaignard, John Stultz, Alistair Strachan,
	Vincent Donnefort, Greg KH, Chenbo Feng, lkml, Liam Mark,
	Marissa Wall, dri-devel

On Tue, Mar 19, 2019 at 1:00 PM Andrew F. Davis <afd@ti.com> wrote:
>
> On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
> > Le mer. 13 mars 2019 à 23:31, John Stultz <john.stultz@linaro.org> a écrit :
> >>
> >> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> >>> On Tue, 5 Mar 2019, John Stultz wrote:
> >>>>
> >>>> Eventual TODOS:
> >>>> * Reimplement page-pool for system heap (working on this)
> >>>> * Add stats accounting to system/cma heaps
> >>>> * Make the kselftest actually useful
> >>>> * Add other heaps folks see as useful (would love to get
> >>>>   some help from actual carveout/chunk users)!
> >>>
> >>> We use a modified carveout heap for certain secure use cases.
> >>
> >> Cool! It would be great to see if you have any concerns about adding
> >> such a secure-carveout heap to this framework. I suspect it would be
> >> fairly similar to how its integrated into ION, but particularly I'd be
> >> interested in issues around the lack of private flags and other
> >> allocation arguments like alignment.
> >>
> >>> Although there would probably be some benefit in discssing how the dma-buf
> >>> heap framework may want to support
> >>> secure heaps in the future it is a large topic which I assume you don't
> >>> want to tackle now.
> >>
> >> So I suspect others (Benjamin?) would have a more informed opinion on
> >> the details, but the intent is to allow secure heap implementations.
> >> I'm not sure what areas of concern you have for this allocation
> >> framework in particular?
> >
> > yes I would be great to understand how you provide the information to
> > tell that a dmabuf
> > is secure (or not) since we can't add flag in dmabuf structure itself.
> > An option is manage
> > the access rights when a device attach itself to the dmabuf but in
> > this case you need define
> > a list of allowed devices per heap...
> > If you have a good solution for secure heaps you are welcome :-)
> >
>
> Do we really need any of that? A secure buffer is secured by the
> hardware firewalls that keep out certain IP (including often the
> processor running Linux). So the only thing we need to track internally
> is that we should not allow mmap/kmap on the buffer. That can be done in
> the per-heap layer, everything else stays the same as a standard
> carveout heap.

For at least some hw the importing driver needs to configure things
differently for secure buffers :-/

BR,
-R

>
> Andrew
>
> > Benjamin
> >>
> >>> We don't have any non-secure carveout heap use cases but the client use
> >>> case I have seen usually revolve around
> >>> wanting large allocations to succeed very quickly.
> >>> For example I have seen camera use cases which do very large allocations
> >>> on camera bootup from the carveout heap, these allocations would come from
> >>> the carveout heap and fallback to the system heap when the carveout heap
> >>> was full.
> >>> Actual non-secure carveout heap can perhaps provide more detail.
> >>
> >> Yea, I'm aware that folks still see carveout as preferable to CMA due
> >> to more consistent/predictable allocation latency.  I think we still
> >> have the issue that we don't have bindings to establish/configure
> >> carveout regions w/ dts, and I'm not really wanting to hold up the
> >> allocation API on that issue.
> >>
> >>
> >>> Since we are making some fundamental changes to how ION worked and since
> >>> Android is likely also be the largest user of the dma-buf heaps framework
> >>> I think it would be good
> >>> to have a path to resolve the issues which are currently preventing
> >>> commercial Android releases from moving to the upstream version of ION.
> >>
> >> Yea, I do see solving the cache management efficiency issues as
> >> critical for the dmabuf heaps to be actually usable (my previous
> >> version of this patchset accidentally had my hacks to improve
> >> performance rolled in!).  And there are discussions going on in
> >> various channels to try to figure out how to either change Android to
> >> use dma-bufs more in line with how upstream expects, or what more
> >> generic dma-buf changes we may need to allow Android to use dmabufs
> >> with the expected performance they need.
> >>
> >>> I can understand if you don't necessarily want to put all/any of these
> >>> changes into the dma-buf heaps framework as part of this series, but my
> >>> hope is we can get
> >>> the upstream community and the Android framework team to agree on what
> >>> upstreamable changes to dma-buf heaps framework, and/or the Android
> >>> framework, would be required in order for Android to move to the upstream
> >>> dma-buf heaps framework for commercial devices.
> >>
> >> Yes. Though I also don't want to get the bigger dma-buf usage
> >> discussion (which really affects all dmabuf exporters) too tied up
> >> with this patch sets attempt to provide a usable allocation interface.
> >> Part of the problem that I think we've seen with ION is that there is
> >> a nest of of related issues, and the entire thing is just too big to
> >> address at once, which I think is part of why ION has sat in staging
> >> for so long. This patchset just tries to provide an dmabuf allocation
> >> interface, and a few example exporter heap types.
> >>
> >>> I don't mean to make this specific to Android, but my assumption is that
> >>> many of the ION/dma-buf heaps issues which affect Android would likely
> >>> affect other new large users of the dma-buf heaps framework, so if we
> >>> resolve it for Android we would be helping these future users as well.
> >>> And I do understand that some the issues facing Android may need to be
> >>> resolved by making changes to Android framework.
> >>
> >> While true, I also think some of the assumptions in how the dma-bufs
> >> are used (pre-attachment of all devices, etc) are maybe not so
> >> realistic given how Android is using them.  I do want to explore if
> >> Android can change how they use dma-bufs, but I also worry that we
> >> need to think about how we could loosen the expectations for dma-bufs,
> >> as well as trying to figure out how to support things folks have
> >> brought up like partial cache maintenance.
> >>
> >>> I think it would be helpful to try and get as much of this agreed upon as
> >>> possible before the dma-buf heaps framework moves out of staging.
> >>>
> >>> As part of my review I will highlight some of the issues which would
> >>> affect Android.
> >>> In my comments I will apply them to the system heap since that is what
> >>> Android currently uses for a lot of its use cases.
> >>> I realize that this new framework provides more flexibility to heaps, so
> >>> perhaps some of these issues can be solved by creating a new type of
> >>> system heap which Android can use, but even if the solution involves
> >>> creating a new system heap I would like to make sure that this "new"
> >>> system heap is upstreamable.
> >>
> >> So yea, I do realize I'm dodging the hard problem here, but I think
> >> the cache-management/usage issue is far more generic.
> >>
> >> You're right that this implementation give a lot of flexibility to the
> >> exporter heaps in how they implement the dmabuf ops (just like how
> >> other device drivers that are dmabuf exporters have the same
> >> flexibility), but I very much agree we don't want to add a system and
> >> then later a "system-android" heap. So yea, a reasonable amount of
> >> caution is warranted here.
> >>
> >> Thanks so much for the review and feedback! I'll try to address things
> >> as I can as I'm traveling this week (so I may be a bit spotty).
> >>
> >> thanks
> >> -john
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-19 21:58         ` Rob Clark
@ 2019-03-19 22:36           ` John Stultz
  2019-03-20  9:16             ` Benjamin Gaignard
  0 siblings, 1 reply; 68+ messages in thread
From: John Stultz @ 2019-03-19 22:36 UTC (permalink / raw)
  To: Rob Clark
  Cc: Andrew F. Davis, Benjamin Gaignard, Alistair Strachan,
	Vincent Donnefort, Greg KH, Chenbo Feng, lkml, Liam Mark,
	Marissa Wall, dri-devel

On Tue, Mar 19, 2019 at 2:58 PM Rob Clark <robdclark@gmail.com> wrote:
>
> On Tue, Mar 19, 2019 at 1:00 PM Andrew F. Davis <afd@ti.com> wrote:
> >
> > On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
> > > Le mer. 13 mars 2019 à 23:31, John Stultz <john.stultz@linaro.org> a écrit :
> > >>
> > >> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> > >>> On Tue, 5 Mar 2019, John Stultz wrote:
> > >>>>
> > >>>> Eventual TODOS:
> > >>>> * Reimplement page-pool for system heap (working on this)
> > >>>> * Add stats accounting to system/cma heaps
> > >>>> * Make the kselftest actually useful
> > >>>> * Add other heaps folks see as useful (would love to get
> > >>>>   some help from actual carveout/chunk users)!
> > >>>
> > >>> We use a modified carveout heap for certain secure use cases.
> > >>
> > >> Cool! It would be great to see if you have any concerns about adding
> > >> such a secure-carveout heap to this framework. I suspect it would be
> > >> fairly similar to how its integrated into ION, but particularly I'd be
> > >> interested in issues around the lack of private flags and other
> > >> allocation arguments like alignment.
> > >>
> > >>> Although there would probably be some benefit in discssing how the dma-buf
> > >>> heap framework may want to support
> > >>> secure heaps in the future it is a large topic which I assume you don't
> > >>> want to tackle now.
> > >>
> > >> So I suspect others (Benjamin?) would have a more informed opinion on
> > >> the details, but the intent is to allow secure heap implementations.
> > >> I'm not sure what areas of concern you have for this allocation
> > >> framework in particular?
> > >
> > > yes I would be great to understand how you provide the information to
> > > tell that a dmabuf
> > > is secure (or not) since we can't add flag in dmabuf structure itself.
> > > An option is manage
> > > the access rights when a device attach itself to the dmabuf but in
> > > this case you need define
> > > a list of allowed devices per heap...
> > > If you have a good solution for secure heaps you are welcome :-)
> > >
> >
> > Do we really need any of that? A secure buffer is secured by the
> > hardware firewalls that keep out certain IP (including often the
> > processor running Linux). So the only thing we need to track internally
> > is that we should not allow mmap/kmap on the buffer. That can be done in
> > the per-heap layer, everything else stays the same as a standard
> > carveout heap.
>
> For at least some hw the importing driver needs to configure things
> differently for secure buffers :-/

Does the import ioctl need/use a flag for that then? Userland already
has to keep meta-data about dmabufs around.

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-19 22:36           ` John Stultz
@ 2019-03-20  9:16             ` Benjamin Gaignard
  2019-03-20 14:44               ` Andrew F. Davis
  2019-03-20 16:11               ` John Stultz
  0 siblings, 2 replies; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-20  9:16 UTC (permalink / raw)
  To: John Stultz
  Cc: Rob Clark, Andrew F. Davis, Alistair Strachan, Vincent Donnefort,
	Greg KH, Chenbo Feng, lkml, Liam Mark, Marissa Wall, dri-devel

On Tue, Mar 19, 2019 at 11:36 PM John Stultz <john.stultz@linaro.org> wrote:
>
> On Tue, Mar 19, 2019 at 2:58 PM Rob Clark <robdclark@gmail.com> wrote:
> >
> > On Tue, Mar 19, 2019 at 1:00 PM Andrew F. Davis <afd@ti.com> wrote:
> > >
> > > On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
> > > > Le mer. 13 mars 2019 à 23:31, John Stultz <john.stultz@linaro.org> a écrit :
> > > >>
> > > >> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> > > >>> On Tue, 5 Mar 2019, John Stultz wrote:
> > > >>>>
> > > >>>> Eventual TODOS:
> > > >>>> * Reimplement page-pool for system heap (working on this)
> > > >>>> * Add stats accounting to system/cma heaps
> > > >>>> * Make the kselftest actually useful
> > > >>>> * Add other heaps folks see as useful (would love to get
> > > >>>>   some help from actual carveout/chunk users)!
> > > >>>
> > > >>> We use a modified carveout heap for certain secure use cases.
> > > >>
> > > >> Cool! It would be great to see if you have any concerns about adding
> > > >> such a secure-carveout heap to this framework. I suspect it would be
> > > >> fairly similar to how its integrated into ION, but particularly I'd be
> > > >> interested in issues around the lack of private flags and other
> > > >> allocation arguments like alignment.
> > > >>
> > > >>> Although there would probably be some benefit in discssing how the dma-buf
> > > >>> heap framework may want to support
> > > >>> secure heaps in the future it is a large topic which I assume you don't
> > > >>> want to tackle now.
> > > >>
> > > >> So I suspect others (Benjamin?) would have a more informed opinion on
> > > >> the details, but the intent is to allow secure heap implementations.
> > > >> I'm not sure what areas of concern you have for this allocation
> > > >> framework in particular?
> > > >
> > > > yes I would be great to understand how you provide the information to
> > > > tell that a dmabuf
> > > > is secure (or not) since we can't add flag in dmabuf structure itself.
> > > > An option is manage
> > > > the access rights when a device attach itself to the dmabuf but in
> > > > this case you need define
> > > > a list of allowed devices per heap...
> > > > If you have a good solution for secure heaps you are welcome :-)
> > > >
> > >
> > > Do we really need any of that? A secure buffer is secured by the
> > > hardware firewalls that keep out certain IP (including often the
> > > processor running Linux). So the only thing we need to track internally
> > > is that we should not allow mmap/kmap on the buffer. That can be done in
> > > the per-heap layer, everything else stays the same as a standard
> > > carveout heap.
> >
> > For at least some hw the importing driver needs to configure things
> > differently for secure buffers :-/
>
> Does the import ioctl need/use a flag for that then? Userland already
> has to keep meta-data about dmabufs around.

To secure a buffer you need to know who is allowed to write/read it,
and the hardware blocks involved in the dataflow may need to know that
the buffer is secure in order to configure themselves.
As an example, for video decoding you allow the hw video decoder to
read from a buffer and the display to read it. You can also allow the
cpu to write to the buffer to add subtitles; for that we need to be
able to mmap/kmap the buffer.
Using a carveout heap for secure buffers means reserving a large
memory region only for this purpose, which isn't possible on embedded
devices where we are always limited in memory, so we use CMA.
In the past I have used dmabuf's attach function to know who writes
into the buffer and then configure who will be able to read it. It
worked well, but the issue was how to express this behavior in a
generic way.
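
Schematically, that attach-based approach looked like the sketch
below; struct secure_buffer, device_allowed() and
firewall_grant_access() are made-up placeholders for the platform
specific pieces:

	static int secure_cma_attach(struct dma_buf *dmabuf,
				     struct dma_buf_attachment *attach)
	{
		struct secure_buffer *buf = dmabuf->priv;

		/* Only devices on the heap's allow-list may touch the buffer */
		if (!device_allowed(buf->heap, attach->dev))
			return -EPERM;

		/* Program the firewall so this device can access the buffer */
		return firewall_grant_access(buf, attach->dev);
	}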

>
> thanks
> -john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-20  9:16             ` Benjamin Gaignard
@ 2019-03-20 14:44               ` Andrew F. Davis
  2019-03-20 15:59                 ` Benjamin Gaignard
  2019-03-20 16:11               ` John Stultz
  1 sibling, 1 reply; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-20 14:44 UTC (permalink / raw)
  To: Benjamin Gaignard, John Stultz
  Cc: Rob Clark, Alistair Strachan, Vincent Donnefort, Greg KH,
	Chenbo Feng, lkml, Liam Mark, Marissa Wall, dri-devel

On 3/20/19 4:16 AM, Benjamin Gaignard wrote:
> Le mar. 19 mars 2019 à 23:36, John Stultz <john.stultz@linaro.org> a écrit :
>>
>> On Tue, Mar 19, 2019 at 2:58 PM Rob Clark <robdclark@gmail.com> wrote:
>>>
>>> On Tue, Mar 19, 2019 at 1:00 PM Andrew F. Davis <afd@ti.com> wrote:
>>>>
>>>> On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
>>>>> Le mer. 13 mars 2019 à 23:31, John Stultz <john.stultz@linaro.org> a écrit :
>>>>>>
>>>>>> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
>>>>>>> On Tue, 5 Mar 2019, John Stultz wrote:
>>>>>>>>
>>>>>>>> Eventual TODOS:
>>>>>>>> * Reimplement page-pool for system heap (working on this)
>>>>>>>> * Add stats accounting to system/cma heaps
>>>>>>>> * Make the kselftest actually useful
>>>>>>>> * Add other heaps folks see as useful (would love to get
>>>>>>>>   some help from actual carveout/chunk users)!
>>>>>>>
>>>>>>> We use a modified carveout heap for certain secure use cases.
>>>>>>
>>>>>> Cool! It would be great to see if you have any concerns about adding
>>>>>> such a secure-carveout heap to this framework. I suspect it would be
>>>>>> fairly similar to how its integrated into ION, but particularly I'd be
>>>>>> interested in issues around the lack of private flags and other
>>>>>> allocation arguments like alignment.
>>>>>>
>>>>>>> Although there would probably be some benefit in discssing how the dma-buf
>>>>>>> heap framework may want to support
>>>>>>> secure heaps in the future it is a large topic which I assume you don't
>>>>>>> want to tackle now.
>>>>>>
>>>>>> So I suspect others (Benjamin?) would have a more informed opinion on
>>>>>> the details, but the intent is to allow secure heap implementations.
>>>>>> I'm not sure what areas of concern you have for this allocation
>>>>>> framework in particular?
>>>>>
>>>>> Yes, it would be great to understand how you provide the information
>>>>> to tell that a dmabuf is secure (or not), since we can't add a flag in
>>>>> the dmabuf structure itself.
>>>>> An option is to manage the access rights when a device attaches itself
>>>>> to the dmabuf, but in this case you need to define a list of allowed
>>>>> devices per heap...
>>>>> If you have a good solution for secure heaps you are welcome :-)
>>>>>
>>>>
>>>> Do we really need any of that? A secure buffer is secured by the
>>>> hardware firewalls that keep out certain IP (including often the
>>>> processor running Linux). So the only thing we need to track internally
>>>> is that we should not allow mmap/kmap on the buffer. That can be done in
>>>> the per-heap layer, everything else stays the same as a standard
>>>> carveout heap.
>>>
>>> For at least some hw the importing driver needs to configure things
>>> differently for secure buffers :-/
>>
>> Does the import ioctl need/use a flag for that then? Userland already
>> has to keep meta-data about dmabufs around.
> 
> To secure a buffer you need to know who is allowed to write/read it, and
> the hardware blocks involved in the dataflow may need to know that the
> buffer is secure in order to configure themselves.
> As an example, for video decoding you allow the hw video decoder to read
> in a buffer and the display to read it. You can also allow the cpu to write
> to the buffer to add subtitles. For that we need to be able to mmap/kmap
> the buffer.
> Using a carveout heap for secure buffers means that you reserve a large
> memory region only for this purpose, which isn't possible on embedded
> devices where we are always limited in memory, so we use CMA.
> In the past I have used dmabuf's attach function to know who writes into
> the buffer and then configure who will be able to read it. It worked well,
> but the issue was how to do this behavior in a generic way.
> 

Okay, I think I see what you are saying now.

The way we handle secure playback is to firewall everything upfront, and
it is up to the application to inform the hardware about what it can and
cannot do to the buffer, or simply not ask for anything not allowed (e.g.
writing back the decrypted stream), or else it will get a firewall exception.
The buffer itself doesn't have to carry any information.

It sounds like you want the hardware driver to be able to detect the
use-case based on the buffer itself and configure itself accordingly? Or
the exporter at attach time to check access permissions?

The first would need a change to the DMA-BUF framework, maybe an added flag.
The second would just need a heap exporter with the system-wide smarts,
but as you say that is not very generic..
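Purely as illustration of that second option, a rough sketch of an
attach-time check in a hypothetical secure heap exporter
(secure_heap_allowed() and the firewall programming are made-up
placeholders, not part of this series):

	static int secure_heap_attach(struct dma_buf *dmabuf,
				      struct dma_buf_attachment *attachment)
	{
		struct dma_heap_buffer *heap_buffer = dmabuf->priv;
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);

		/* The heap carries the system-wide policy: only devices on
		 * its allowlist may attach to (and later map) a secure buffer.
		 */
		if (!secure_heap_allowed(buffer, attachment->dev))
			return -EPERM;

		/* Program the firewall/permissions for this master here,
		 * then do the usual attach bookkeeping (sg_table dup,
		 * list_add, ...).
		 */
		return 0;
	}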

Andrew

>>
>> thanks
>> -john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-20 14:44               ` Andrew F. Davis
@ 2019-03-20 15:59                 ` Benjamin Gaignard
  0 siblings, 0 replies; 68+ messages in thread
From: Benjamin Gaignard @ 2019-03-20 15:59 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: John Stultz, Rob Clark, Alistair Strachan, Vincent Donnefort,
	Greg KH, Chenbo Feng, lkml, Liam Mark, Marissa Wall, dri-devel

On Wed, Mar 20, 2019 at 15:54, Andrew F. Davis <afd@ti.com> wrote:
>
> On 3/20/19 4:16 AM, Benjamin Gaignard wrote:
> > On Tue, Mar 19, 2019 at 23:36, John Stultz <john.stultz@linaro.org> wrote:
> >>
> >> On Tue, Mar 19, 2019 at 2:58 PM Rob Clark <robdclark@gmail.com> wrote:
> >>>
> >>> On Tue, Mar 19, 2019 at 1:00 PM Andrew F. Davis <afd@ti.com> wrote:
> >>>>
> >>>> On 3/19/19 11:54 AM, Benjamin Gaignard wrote:
> >>>>> On Wed, Mar 13, 2019 at 23:31, John Stultz <john.stultz@linaro.org> wrote:
> >>>>>>
> >>>>>> On Wed, Mar 13, 2019 at 1:11 PM Liam Mark <lmark@codeaurora.org> wrote:
> >>>>>>> On Tue, 5 Mar 2019, John Stultz wrote:
> >>>>>>>>
> >>>>>>>> Eventual TODOS:
> >>>>>>>> * Reimplement page-pool for system heap (working on this)
> >>>>>>>> * Add stats accounting to system/cma heaps
> >>>>>>>> * Make the kselftest actually useful
> >>>>>>>> * Add other heaps folks see as useful (would love to get
> >>>>>>>>   some help from actual carveout/chunk users)!
> >>>>>>>
> >>>>>>> We use a modified carveout heap for certain secure use cases.
> >>>>>>
> >>>>>> Cool! It would be great to see if you have any concerns about adding
> >>>>>> such a secure-carveout heap to this framework. I suspect it would be
> >>>>>> fairly similar to how it's integrated into ION, but particularly I'd be
> >>>>>> interested in issues around the lack of private flags and other
> >>>>>> allocation arguments like alignment.
> >>>>>>
> >>>>>>> Although there would probably be some benefit in discussing how the dma-buf
> >>>>>>> heap framework may want to support
> >>>>>>> secure heaps in the future, it is a large topic which I assume you don't
> >>>>>>> want to tackle now.
> >>>>>>
> >>>>>> So I suspect others (Benjamin?) would have a more informed opinion on
> >>>>>> the details, but the intent is to allow secure heap implementations.
> >>>>>> I'm not sure what areas of concern you have for this allocation
> >>>>>> framework in particular?
> >>>>>
> >>>>> Yes, it would be great to understand how you provide the information
> >>>>> to tell that a dmabuf is secure (or not), since we can't add a flag in
> >>>>> the dmabuf structure itself.
> >>>>> An option is to manage the access rights when a device attaches itself
> >>>>> to the dmabuf, but in this case you need to define a list of allowed
> >>>>> devices per heap...
> >>>>> If you have a good solution for secure heaps you are welcome :-)
> >>>>>
> >>>>
> >>>> Do we really need any of that? A secure buffer is secured by the
> >>>> hardware firewalls that keep out certain IP (including often the
> >>>> processor running Linux). So the only thing we need to track internally
> >>>> is that we should not allow mmap/kmap on the buffer. That can be done in
> >>>> the per-heap layer, everything else stays the same as a standard
> >>>> carveout heap.
> >>>
> >>> For at least some hw the importing driver needs to configure things
> >>> differently for secure buffers :-/
> >>
> >> Does the import ioctl need/use a flag for that then? Userland already
> >> has to keep meta-data about dmabufs around.
> >
> > To secure a buffer you need to know who is allowed to write/read it, and
> > the hardware blocks involved in the dataflow may need to know that the
> > buffer is secure in order to configure themselves.
> > As an example, for video decoding you allow the hw video decoder to read
> > in a buffer and the display to read it. You can also allow the cpu to write
> > to the buffer to add subtitles. For that we need to be able to mmap/kmap
> > the buffer.
> > Using a carveout heap for secure buffers means that you reserve a large
> > memory region only for this purpose, which isn't possible on embedded
> > devices where we are always limited in memory, so we use CMA.
> > In the past I have used dmabuf's attach function to know who writes into
> > the buffer and then configure who will be able to read it. It worked well,
> > but the issue was how to do this behavior in a generic way.
> >
>
> Okay, I think I see what you are saying now.
>
> The way we handle secure playback is to firewall everything upfront, and
> it is up to the application to inform the hardware about what it can and
> cannot do to the buffer, or simply not ask for anything not allowed (e.g.
> writing back the decrypted stream), or else it will get a firewall exception.
> The buffer itself doesn't have to carry any information.
>
> It sounds like you want the hardware driver to be able to detect the
> use-case based on the buffer itself and configure itself accordingly? Or
> the exporter at attach time to check access permissions?

Both are needed: the buffer client must know that it is a secure buffer,
and the heap will have to configure the permissions.

>
> The first would need a change to the DMA-BUF framework, maybe an added flag.

Sumit will NACK that because dmabuf has to remain neutral and not embed
flags for every possible usage.

> The second would just need a heap exporter with the system-wide smarts,
> but as you say that is not very generic..

Yes, it is difficult to find a good solution for that.

>
> Andrew
>
> >>
> >> thanks
> >> -john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION)
  2019-03-20  9:16             ` Benjamin Gaignard
  2019-03-20 14:44               ` Andrew F. Davis
@ 2019-03-20 16:11               ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-20 16:11 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: Rob Clark, Andrew F. Davis, Alistair Strachan, Vincent Donnefort,
	Greg KH, Chenbo Feng, lkml, Liam Mark, Marissa Wall, dri-devel

On Wed, Mar 20, 2019 at 2:16 AM Benjamin Gaignard
<benjamin.gaignard@linaro.org> wrote:
> On Tue, Mar 19, 2019 at 23:36, John Stultz <john.stultz@linaro.org> wrote:
> > On Tue, Mar 19, 2019 at 2:58 PM Rob Clark <robdclark@gmail.com> wrote:
> > > For at least some hw the importing driver needs to configure things
> > > differently for secure buffers :-/
> >
> > Does the import ioctl need/use a flag for that then? Userland already
> > has to keep meta-data about dmabufs around.
>
> To secure a buffer you need to know who is allowed to write/read it, and
> the hardware blocks involved in the dataflow may need to know that the
> buffer is secure in order to configure themselves.
> As an example, for video decoding you allow the hw video decoder to read
> in a buffer and the display to read it. You can also allow the cpu to write
> to the buffer to add subtitles. For that we need to be able to mmap/kmap
> the buffer.
> Using a carveout heap for secure buffers means that you reserve a large
> memory region only for this purpose, which isn't possible on embedded
> devices where we are always limited in memory, so we use CMA.
> In the past I have used dmabuf's attach function to know who writes into
> the buffer and then configure who will be able to read it. It worked well,
> but the issue was how to do this behavior in a generic way.

Given the complexity of the configuration needed when allocating the
buffer, instead of trying to make a generic secure buffer allocator,
would having per-usage heaps make sense?  It just feels like there are so
many specifics to the secure buffer setup and configuration that maybe
there can't be a generic configuration interface.  So instead maybe we
let the heap implementations provide preset usage configs?

This doesn't necessarily require that you have separate pools of
memory (they can share the same backing store), but by having multiple
per-config heap devices, maybe this could avoid trying to fit all the
options into one interface?

On the import side, I'm not sure how much the importing device needs
to know about the specific rules here (outside of "secure buffer" or
not), so maybe that's another catch.

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-15  9:06   ` Christoph Hellwig
  2019-03-19 15:03     ` Andrew F. Davis
@ 2019-03-21 20:01     ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-21 20:01 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Andrew F . Davis, Chenbo Feng,
	Alistair Strachan, dri-devel

On Fri, Mar 15, 2019 at 2:06 AM Christoph Hellwig <hch@infradead.org> wrote:
> > +     if (buffer->kmap_cnt) {
> > +             buffer->kmap_cnt++;
> > +             return buffer->vaddr;
> > +     }
> > +     vaddr = dma_heap_map_kernel(buffer);
> > +     if (WARN_ONCE(!vaddr,
> > +                   "heap->ops->map_kernel should return ERR_PTR on error"))
> > +             return ERR_PTR(-EINVAL);
> > +     if (IS_ERR(vaddr))
> > +             return vaddr;
> > +     buffer->vaddr = vaddr;
> > +     buffer->kmap_cnt++;
>
> The cnt manipulation is odd.  The normal way to make this readable
> is to use a postfix op on the check, as that makes it clear to everyone,
> e.g.:
>
>         if (buffer->kmap_cnt++)
>                 return buffer->vaddr;
>         ..

Thanks again for the feedback. I have had some other distractions
recently, so I'm just getting around to these details again now.

The trouble with the suggestion here is that if we increment in the
check and then trip on any of the other error paths, the count value
will be wrong when we return (and doing the extra decrementing in the
error paths feels as ugly as just doing the increment at the end of
the success path).
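To make that concrete, a sketch (illustrative only) of what the
early-increment form would have to look like once the error paths are
accounted for:

	static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
	{
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
		void *vaddr;

		if (buffer->kmap_cnt++)
			return buffer->vaddr;

		vaddr = dma_heap_map_kernel(buffer);
		if (WARN_ONCE(!vaddr,
			      "heap->ops->map_kernel should return ERR_PTR on error")) {
			buffer->kmap_cnt--;	/* undo the early increment */
			return ERR_PTR(-EINVAL);
		}
		if (IS_ERR(vaddr)) {
			buffer->kmap_cnt--;	/* ...and undo it here too */
			return vaddr;
		}
		buffer->vaddr = vaddr;
		return vaddr;
	}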

> > +     buffer->kmap_cnt--;
> > +     if (!buffer->kmap_cnt) {
> > +             vunmap(buffer->vaddr);
> > +             buffer->vaddr = NULL;
> > +     }
>
> Same here, just with an infix.
>
> > +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> > +                              void (*free)(struct heap_helper_buffer *))
> > +{
> > +     buffer->private_flags = 0;
> > +     buffer->priv_virt = NULL;
> > +     mutex_init(&buffer->lock);
> > +     buffer->kmap_cnt = 0;
> > +     buffer->vaddr = NULL;
> > +     buffer->sg_table = NULL;
> > +     INIT_LIST_HEAD(&buffer->attachments);
> > +     buffer->free = free;
> > +}
>
> There is absolutely no reason to inlines this as far as I can tell.

Yeah, I think I was mimicking some of the helpers like INIT_LIST_HEAD().
But sounds good. I can uninline it.

> Also it would seem much simpler to simply let the caller assign the
> free callback.

Yeah, it's a bit ugly, but I worry the caller might forget?
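For reference, a sketch of the un-inlined form, with the free callback
still passed in so callers can't forget to set it (the lower-case name
is just a guess at what a later revision might use):

	void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
				     void (*free)(struct heap_helper_buffer *))
	{
		buffer->private_flags = 0;
		buffer->priv_virt = NULL;
		mutex_init(&buffer->lock);
		buffer->kmap_cnt = 0;
		buffer->vaddr = NULL;
		buffer->sg_table = NULL;
		INIT_LIST_HEAD(&buffer->attachments);
		buffer->free = free;
	}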

Thanks again for the feedback!
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-19 14:26   ` Brian Starkey
@ 2019-03-21 20:11     ` John Stultz
  2019-03-21 20:35     ` Andrew F. Davis
  1 sibling, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-21 20:11 UTC (permalink / raw)
  To: Brian Starkey
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	dri-devel, nd

On Tue, Mar 19, 2019 at 7:26 AM Brian Starkey <Brian.Starkey@arm.com> wrote:
>
> Hi John,
>
> On Tue, Mar 05, 2019 at 12:54:30PM -0800, John Stultz wrote:
>
> ...
>
> > +
> > +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
> > +{
> > +     struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> > +
> > +     if (buffer->kmap_cnt > 0) {
> > +             pr_warn_once("%s: buffer still mapped in the kernel\n",
> > +                          __func__);
>
> Could be worth something louder like a full WARN.
>
> > +             vunmap(buffer->vaddr);
> > +     }
> > +
> > +     buffer->free(buffer);
> > +}
> > +
>
> ...
>
> > +
> > +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
> > +                                     unsigned long offset)
> > +{
> > +     struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> > +     struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> > +
> > +     return buffer->vaddr + offset * PAGE_SIZE;
>
> I think it'd be good to check for NULL vaddr and return NULL in that
> case. Less chance of an invalid pointer being accidentally used then.

Thanks so much for the feedback! I've added these two suggestions in!
-john
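For reference, a sketch of roughly what those two changes could look
like against the helpers in this patch (the next revision may differ
in detail):

	void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
	{
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);

		/* A buffer still mapped in the kernel at destroy time is a
		 * bug, so warn loudly.
		 */
		if (WARN_ON(buffer->kmap_cnt > 0))
			vunmap(buffer->vaddr);

		buffer->free(buffer);
	}

	static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
					   unsigned long offset)
	{
		struct dma_heap_buffer *heap_buffer = dmabuf->priv;
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);

		/* Don't hand out a bogus pointer if the buffer was never vmapped */
		if (!buffer->vaddr)
			return NULL;

		return buffer->vaddr + offset * PAGE_SIZE;
	}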

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss
  2019-03-06 16:05   ` Benjamin Gaignard
@ 2019-03-21 20:15     ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-21 20:15 UTC (permalink / raw)
  To: Benjamin Gaignard
  Cc: lkml, Laura Abbott, Greg KH, Sumit Semwal, Liam Mark,
	Brian Starkey, Andrew F . Davis, Chenbo Feng, Alistair Strachan,
	ML dri-devel

On Wed, Mar 6, 2019 at 8:05 AM Benjamin Gaignard
<benjamin.gaignard@linaro.org> wrote:
> On Tue, Mar 5, 2019 at 21:54, John Stultz <john.stultz@linaro.org> wrote:
> > +#define to_cma_heap(x) container_of(x, struct cma_heap, heap)
>
> Even though I wrote this macro years ago, now I would prefer to have a
> static inline function, to be able to check the types.
>
> with that:
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>

Thanks for the suggestion! I've reworked that and the other
container_of macro I had in the patch series to be inline functions.

thanks again!
-john
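For illustration, the static-inline form of that conversion (assuming,
as the macro implies, that struct cma_heap embeds its struct dma_heap
in a member named 'heap'):

	static inline struct cma_heap *to_cma_heap(struct dma_heap *heap)
	{
		/* Unlike the macro, this lets the compiler type-check 'heap' */
		return container_of(heap, struct cma_heap, heap);
	}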

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-19 14:26   ` Brian Starkey
  2019-03-21 20:11     ` John Stultz
@ 2019-03-21 20:35     ` Andrew F. Davis
  1 sibling, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-21 20:35 UTC (permalink / raw)
  To: Brian Starkey, John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Chenbo Feng, Alistair Strachan, dri-devel, nd

On 3/19/19 9:26 AM, Brian Starkey wrote:
> Hi John,
> 
> On Tue, Mar 05, 2019 at 12:54:30PM -0800, John Stultz wrote:
> 
> ...
> 
>> +
>> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
>> +{
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	if (buffer->kmap_cnt > 0) {
>> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
>> +			     __func__);
> 
> Could be worth something louder like a full WARN.
> 
>> +		vunmap(buffer->vaddr);
>> +	}
>> +
>> +	buffer->free(buffer);
>> +}
>> +
> 
> ...
> 
>> +
>> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
>> +					unsigned long offset)
>> +{
>> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
>> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
>> +
>> +	return buffer->vaddr + offset * PAGE_SIZE;
> 
> I think it'd be good to check for NULL vaddr and return NULL in that
> case. Less chance of an invalid pointer being accidentally used then.
> 

Why do we assume vaddr is set at all here? I'm guessing we expected
dma_heap_map_kernel to have been called, but that is not always going
to be the case. kmap should perform its own single-page kmap here and
not rely on the clunky full-buffer vmap (which is probably broken on
32-bit systems when the buffers are large).

Andrew
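A rough sketch of the per-page approach described above, assuming the
backing pages stay reachable through buffer->sg_table (illustrative
only; it would also need linux/highmem.h and a matching kunmap in the
.unmap callback):

	static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
					   unsigned long offset)
	{
		struct dma_heap_buffer *heap_buffer = dmabuf->priv;
		struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
		struct sg_page_iter piter;

		/* Walk to the page at 'offset' (in pages) and kmap just that
		 * one page, instead of relying on a prior whole-buffer vmap.
		 */
		for_each_sg_page(buffer->sg_table->sgl, &piter,
				 buffer->sg_table->nents, offset)
			return kmap(sg_page_iter_page(&piter));

		return NULL;
	}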

> Thanks,
> -Brian
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers
  2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
                     ` (2 preceding siblings ...)
  2019-03-19 14:26   ` Brian Starkey
@ 2019-03-21 20:43   ` Andrew F. Davis
  3 siblings, 0 replies; 68+ messages in thread
From: Andrew F. Davis @ 2019-03-21 20:43 UTC (permalink / raw)
  To: John Stultz, lkml
  Cc: Laura Abbott, Benjamin Gaignard, Greg KH, Sumit Semwal,
	Liam Mark, Brian Starkey, Chenbo Feng, Alistair Strachan,
	dri-devel

On 3/5/19 2:54 PM, John Stultz wrote:
> Add generic helper dmabuf ops for dma heaps, so we can reduce
> the amount of duplicative code for the exported dmabufs.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainters:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Greg KH <gregkh@linuxfoundation.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Removed cache management performance hack that I had
>   accidentally folded in.
> * Removed stats code that was in helpers
> * Lots of checkpatch cleanups
> ---
>  drivers/dma-buf/Makefile             |   1 +
>  drivers/dma-buf/heaps/Makefile       |   2 +
>  drivers/dma-buf/heaps/heap-helpers.c | 335 +++++++++++++++++++++++++++++++++++
>  drivers/dma-buf/heaps/heap-helpers.h |  48 +++++
>  4 files changed, 386 insertions(+)
>  create mode 100644 drivers/dma-buf/heaps/Makefile
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
> 
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index b0332f1..09c2f2d 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,4 +1,5 @@
>  obj-y := dma-buf.o dma-fence.o dma-fence-array.o reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
>  obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
>  obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
>  obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> new file mode 100644
> index 0000000..de49898
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -0,0 +1,2 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-y					+= heap-helpers.o
> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
> new file mode 100644
> index 0000000..ae5e9d0
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.c
> @@ -0,0 +1,335 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/idr.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#include "heap-helpers.h"
> +
> +
> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> +	struct scatterlist *sg;
> +	int i, j;
> +	void *vaddr;
> +	pgprot_t pgprot;
> +	struct sg_table *table = buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->heap_buffer.size) / PAGE_SIZE;
> +	struct page **pages = vmalloc(array_size(npages,
> +						 sizeof(struct page *)));
> +	struct page **tmp = pages;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	pgprot = PAGE_KERNEL;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		int npages_this_entry = PAGE_ALIGN(sg->length) / PAGE_SIZE;
> +		struct page *page = sg_page(sg);
> +
> +		WARN_ON(i >= npages);
> +		for (j = 0; j < npages_this_entry; j++)
> +			*(tmp++) = page++;
> +	}
> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
> +	vfree(pages);
> +
> +	if (!vaddr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return vaddr;
> +}
> +
> +static int dma_heap_map_user(struct heap_helper_buffer *buffer,
> +			 struct vm_area_struct *vma)
> +{
> +	struct sg_table *table = buffer->sg_table;
> +	unsigned long addr = vma->vm_start;
> +	unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
> +	struct scatterlist *sg;
> +	int i;
> +	int ret;
> +
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		struct page *page = sg_page(sg);
> +		unsigned long remainder = vma->vm_end - addr;
> +		unsigned long len = sg->length;
> +
> +		if (offset >= sg->length) {
> +			offset -= sg->length;
> +			continue;
> +		} else if (offset) {
> +			page += offset / PAGE_SIZE;
> +			len = sg->length - offset;
> +			offset = 0;
> +		}
> +		len = min(len, remainder);
> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), len,
> +				      vma->vm_page_prot);
> +		if (ret)
> +			return ret;
> +		addr += len;
> +		if (addr >= vma->vm_end)
> +			return 0;
> +	}
> +
> +	return 0;
> +}
> +
> +
> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	if (buffer->kmap_cnt > 0) {
> +		pr_warn_once("%s: buffer still mapped in the kernel\n",
> +			     __func__);
> +		vunmap(buffer->vaddr);
> +	}
> +
> +	buffer->free(buffer);
> +}
> +
> +static void *dma_heap_buffer_kmap_get(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	void *vaddr;
> +
> +	if (buffer->kmap_cnt) {
> +		buffer->kmap_cnt++;
> +		return buffer->vaddr;
> +	}
> +	vaddr = dma_heap_map_kernel(buffer);

The function is called "kmap" but here we go and vmap the whole buffer.
The function names are not consistent here.

> +	if (WARN_ONCE(!vaddr,
> +		      "heap->ops->map_kernel should return ERR_PTR on error"))
> +		return ERR_PTR(-EINVAL);
> +	if (IS_ERR(vaddr))
> +		return vaddr;
> +	buffer->vaddr = vaddr;
> +	buffer->kmap_cnt++;
> +	return vaddr;
> +}
> +
> +static void dma_heap_buffer_kmap_put(struct dma_heap_buffer *heap_buffer)
> +{
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	buffer->kmap_cnt--;
> +	if (!buffer->kmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}
> +}
> +
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> +	struct sg_table *new_table;
> +	int ret, i;
> +	struct scatterlist *sg, *new_sg;
> +
> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> +	if (!new_table)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = sg_alloc_table(new_table, table->nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(new_table);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	new_sg = new_table->sgl;
> +	for_each_sg(table->sgl, sg, table->nents, i) {
> +		memcpy(new_sg, sg, sizeof(*sg));
> +		new_sg->dma_address = 0;
> +		new_sg = sg_next(new_sg);
> +	}
> +
> +	return new_table;
> +}
> +
> +static void free_duped_table(struct sg_table *table)
> +{
> +	sg_free_table(table);
> +	kfree(table);
> +}
> +
> +struct dma_heaps_attachment {
> +	struct device *dev;
> +	struct sg_table *table;
> +	struct list_head list;
> +};
> +
> +static int dma_heap_attach(struct dma_buf *dmabuf,
> +			      struct dma_buf_attachment *attachment)
> +{
> +	struct dma_heaps_attachment *a;
> +	struct sg_table *table;
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	table = dup_sg_table(buffer->sg_table);
> +	if (IS_ERR(table)) {
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	a->table = table;
> +	a->dev = attachment->dev;
> +	INIT_LIST_HEAD(&a->list);
> +
> +	attachment->priv = a;
> +
> +	mutex_lock(&buffer->lock);
> +	list_add(&a->list, &buffer->attachments);
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static void dma_heap_detatch(struct dma_buf *dmabuf,
> +				struct dma_buf_attachment *attachment)
> +{
> +	struct dma_heaps_attachment *a = attachment->priv;
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	mutex_lock(&buffer->lock);
> +	list_del(&a->list);
> +	mutex_unlock(&buffer->lock);
> +	free_duped_table(a->table);
> +
> +	kfree(a);
> +}
> +
> +static struct sg_table *dma_heap_map_dma_buf(
> +					struct dma_buf_attachment *attachment,
> +					enum dma_data_direction direction)
> +{
> +	struct dma_heaps_attachment *a = attachment->priv;
> +	struct sg_table *table;
> +
> +	table = a->table;
> +
> +	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> +			direction))
> +		table = ERR_PTR(-ENOMEM);
> +	return table;
> +}
> +
> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> +			      struct sg_table *table,
> +			      enum dma_data_direction direction)
> +{
> +	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +}
> +
> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	/* now map it to userspace */
> +	ret = dma_heap_map_user(buffer, vma);

Only used here, so we can just put this function's code here. Also, do
we need this locked? What can race here? Everything accessed is static
for the buffer's lifetime.

> +	mutex_unlock(&buffer->lock);
> +
> +	if (ret)
> +		pr_err("%s: failure mapping buffer to userspace\n",
> +		       __func__);
> +
> +	return ret;
> +}
> +
> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
> +{
> +	struct dma_heap_buffer *buffer = dmabuf->priv;
> +
> +	dma_heap_buffer_destroy(buffer);
> +}
> +
> +static void *dma_heap_dma_buf_kmap(struct dma_buf *dmabuf,
> +					unsigned long offset)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> +	return buffer->vaddr + offset * PAGE_SIZE;
> +}
> +
> +static void dma_heap_dma_buf_kunmap(struct dma_buf *dmabuf,
> +					unsigned long offset,
> +					void *ptr)
> +{
> +}
> +
> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +					enum dma_data_direction direction)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	void *vaddr;
> +	struct dma_heaps_attachment *a;
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	vaddr = dma_heap_buffer_kmap_get(heap_buffer);


Not needed here, this can be dropped and so can the
dma_heap_buffer_kmap_put() below.

Andrew

> +	if (IS_ERR(vaddr)) {
> +		ret = PTR_ERR(vaddr);
> +		goto unlock;
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	mutex_lock(&buffer->lock);
> +	list_for_each_entry(a, &buffer->attachments, list) {
> +		dma_sync_sg_for_cpu(a->dev, a->table->sgl, a->table->nents,
> +				    direction);
> +	}
> +
> +unlock:
> +	mutex_unlock(&buffer->lock);
> +	return ret;
> +}
> +
> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> +				      enum dma_data_direction direction)
> +{
> +	struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> +	struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +	struct dma_heaps_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +	dma_heap_buffer_kmap_put(heap_buffer);
> +	mutex_unlock(&buffer->lock);
> +
> +	mutex_lock(&buffer->lock);
> +	list_for_each_entry(a, &buffer->attachments, list) {
> +		dma_sync_sg_for_device(a->dev, a->table->sgl, a->table->nents,
> +				       direction);
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +const struct dma_buf_ops heap_helper_ops = {
> +	.map_dma_buf = dma_heap_map_dma_buf,
> +	.unmap_dma_buf = dma_heap_unmap_dma_buf,
> +	.mmap = dma_heap_mmap,
> +	.release = dma_heap_dma_buf_release,
> +	.attach = dma_heap_attach,
> +	.detach = dma_heap_detatch,
> +	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
> +	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
> +	.map = dma_heap_dma_buf_kmap,
> +	.unmap = dma_heap_dma_buf_kunmap,
> +};
> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
> new file mode 100644
> index 0000000..0bd8643
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps helper code
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _HEAP_HELPERS_H
> +#define _HEAP_HELPERS_H
> +
> +#include <linux/dma-heap.h>
> +#include <linux/list.h>
> +
> +struct heap_helper_buffer {
> +	struct dma_heap_buffer heap_buffer;
> +
> +	unsigned long private_flags;
> +	void *priv_virt;
> +	struct mutex lock;
> +	int kmap_cnt;
> +	void *vaddr;
> +	struct sg_table *sg_table;
> +	struct list_head attachments;
> +
> +	void (*free)(struct heap_helper_buffer *buffer);
> +
> +};
> +
> +#define to_helper_buffer(x) \
> +	container_of(x, struct heap_helper_buffer, heap_buffer)
> +
> +static inline void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> +				 void (*free)(struct heap_helper_buffer *))
> +{
> +	buffer->private_flags = 0;
> +	buffer->priv_virt = NULL;
> +	mutex_init(&buffer->lock);
> +	buffer->kmap_cnt = 0;
> +	buffer->vaddr = NULL;
> +	buffer->sg_table = NULL;
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	buffer->free = free;
> +}
> +
> +extern const struct dma_buf_ops heap_helper_ops;
> +
> +#endif /* _HEAP_HELPERS_H */
> 

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-19 12:08   ` Brian Starkey
  2019-03-19 15:24     ` Andrew F. Davis
@ 2019-03-21 21:16     ` John Stultz
  1 sibling, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-21 21:16 UTC (permalink / raw)
  To: Brian Starkey
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard, Greg KH,
	Sumit Semwal, Liam Mark, Chenbo Feng, Alistair Strachan,
	dri-devel, nd

On Tue, Mar 19, 2019 at 5:08 AM Brian Starkey <Brian.Starkey@arm.com> wrote:
>
> Hi John,
>
> On Tue, Mar 05, 2019 at 12:54:29PM -0800, John Stultz wrote:
> > From: "Andrew F. Davis" <afd@ti.com>
>
> [snip]
>
> > +
> > +#define NUM_HEAP_MINORS 128
> > +static DEFINE_IDR(dma_heap_idr);
> > +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
>
> I saw that Matthew Wilcox is trying to nuke idr:
> https://patchwork.freedesktop.org/series/57073/
>
> Perhaps a different data structure could be considered? (I don't have
> an informed opinion on which).

Thanks for pointing this out! I've just switched to using the Xarray
implementation in my tree.
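For reference, a sketch of what the XArray-based minor allocation could
look like (dma_heap_minors and dma_heap_get_minor() are made-up names
here; the lookup in dma_heap_open() would become xa_load()):

	static DEFINE_XARRAY_ALLOC(dma_heap_minors);

	static int dma_heap_get_minor(struct dma_heap *heap)
	{
		u32 minor;
		int ret;

		/* Replaces idr_alloc() plus minor_lock; the XArray handles
		 * its own locking.
		 */
		ret = xa_alloc(&dma_heap_minors, &minor, heap,
			       XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
		if (ret < 0)
			return ret;

		heap->minor = minor;
		return 0;
	}

	/* ...and in dma_heap_open():
	 *	heap = xa_load(&dma_heap_minors, iminor(inode));
	 */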

> > +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> > +                              unsigned int flags)
> > +{
> > +     len = PAGE_ALIGN(len);
> > +     if (!len)
> > +             return -EINVAL;
>
> I think aligning len to pages only makes sense if heaps are going to
> allocate aligned to pages too. Perhaps that's an implicit assumption?
> > If so, let's document it.

I've added a comment as such (or do you have more thoughts on where it
should be documented?), and for consistency removed the PAGE_ALIGN
usage in the heap allocator hooks.

> Why not let the heaps take care of aligning len however they want
> though?

As Andrew already said, it seems page granularity would have to be the
finest allocation granularity for dmabufs.  If heaps want to implement
their own larger granularity alignment, I don't see any reason they
would be limited there.

And for me, its mostly because I stubbed my toe implementing the heap
code w/ the first patch that didn't have the page alignment in the
generic code. :)

> > +     /* Create device */
> > +     heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> > +     dev_ret = device_create(dma_heap_class,
> > +                             NULL,
> > +                             heap->heap_devt,
> > +                             NULL,
> > +                             heap->name);
> > +     if (IS_ERR(dev_ret)) {
> > +             pr_err("dma_heap: Unable to create char device\n");
> > +             return PTR_ERR(dev_ret);
> > +     }
> > +
> > +     /* Add device */
> > +     cdev_init(&heap->heap_cdev, &dma_heap_fops);
> > +     ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
>
> Shouldn't this be s/dma_heap_devt/heap->heap_devt/ and a count of 1?
>
> Also would it be better to have cdev_add/device_create the other way
> around? First create the char device, then once it's all set up
> register it with sysfs.

Thanks for catching that! Much appreciated! Reworked as suggested.
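For reference, a sketch of the reworked ordering (the heap's own devt,
a count of 1, and cdev_add() before device_create()); the exact form in
the next revision may differ:

	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);

	/* Register the char device first... */
	cdev_init(&heap->heap_cdev, &dma_heap_fops);
	ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
	if (ret < 0) {
		pr_err("dma_heap: Unable to add char device\n");
		return ret;
	}

	/* ...then create the sysfs device / udev node for it */
	dev_ret = device_create(dma_heap_class, NULL, heap->heap_devt,
				NULL, heap->name);
	if (IS_ERR(dev_ret)) {
		cdev_del(&heap->heap_cdev);
		pr_err("dma_heap: Unable to create char device\n");
		return PTR_ERR(dev_ret);
	}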

Though I realized last week I have not figured out a consistent way to
have the heaps show up in /dev/dma_heaps/<device> on both Android and
classic Linux environments.  I need to go stare at the /dev/input/
setup code some more.
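One way that can work on both (a sketch along the lines of what the
input layer does; this is not in the posted patch, and the directory
name is just an example):

	static char *dma_heap_devnode(struct device *dev, umode_t *mode)
	{
		/* Ask devtmpfs/udev to create the node as /dev/dma_heap/<name> */
		return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
	}

	/* in dma_heap_init(), after class_create(): */
	dma_heap_class->devnode = dma_heap_devnode;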

> > +     if (ret < 0) {
> > +             device_destroy(dma_heap_class, heap->heap_devt);
> > +             pr_err("dma_heap: Unable to add char device\n");
> > +             return ret;
> > +     }
> > +
> > +     return 0;
> > +}
> > +EXPORT_SYMBOL(dma_heap_add);
>
> Until we've figured out how modules are going to work, I still think
> it would be a good idea to not export this.

Done!

thanks
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
                     ` (4 preceding siblings ...)
  2019-03-19 12:08   ` Brian Starkey
@ 2019-03-27 14:53   ` Greg KH
  2019-03-28  6:09     ` John Stultz
  5 siblings, 1 reply; 68+ messages in thread
From: Greg KH @ 2019-03-27 14:53 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard,
	Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
	Alistair Strachan, dri-devel

On Tue, Mar 05, 2019 at 12:54:29PM -0800, John Stultz wrote:
> From: "Andrew F. Davis" <afd@ti.com>
> 
> This framework allows a unified userspace interface for dma-buf
> exporters, allowing userland to allocate specific types of
> memory for use in dma-buf sharing.
> 
> Each heap is given its own device node, which a user can
> allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
> 
> This code is an evolution of the Android ION implementation,
> and a big thanks is due to its authors/maintainers over time
> for their effort:
>   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
>   Laura Abbott, and many other contributors!

Comments just on the user/kernel api and how it interacts with the
driver model.  Not on the "real" functionality of this code :)

> +#define DEVNAME "dma_heap"
> +
> +#define NUM_HEAP_MINORS 128

Why a max?

> +static DEFINE_IDR(dma_heap_idr);
> +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */

Move to use xarray now so that Matthew doesn't have to send patches
converting this code later :)

It also allows you to drop the mutex.

> +
> +dev_t dma_heap_devt;
> +struct class *dma_heap_class;
> +struct list_head dma_heap_list;
> +struct dentry *dma_heap_debug_root;

Global variables?

> +
> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> +				 unsigned int flags)
> +{
> +	len = PAGE_ALIGN(len);
> +	if (!len)
> +		return -EINVAL;
> +
> +	return heap->ops->allocate(heap, len, flags);
> +}
> +
> +static int dma_heap_open(struct inode *inode, struct file *filp)
> +{
> +	struct dma_heap *heap;
> +
> +	mutex_lock(&minor_lock);
> +	heap = idr_find(&dma_heap_idr, iminor(inode));
> +	mutex_unlock(&minor_lock);
> +	if (!heap) {
> +		pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
> +		return -ENODEV;
> +	}
> +
> +	/* instance data as context */
> +	filp->private_data = heap;
> +	nonseekable_open(inode, filp);
> +
> +	return 0;
> +}
> +
> +static int dma_heap_release(struct inode *inode, struct file *filp)
> +{
> +	filp->private_data = NULL;

Why does this matter?  release should only be called on the way out of
here, no need to do anything as nothing else can be called, right?

release shouldn't be needed from what I can tell.

> +
> +	return 0;
> +}
> +
> +static long dma_heap_ioctl(struct file *filp, unsigned int cmd,
> +			   unsigned long arg)
> +{
> +	switch (cmd) {
> +	case DMA_HEAP_IOC_ALLOC:
> +	{
> +		struct dma_heap_allocation_data heap_allocation;
> +		struct dma_heap *heap = filp->private_data;
> +		int fd;
> +
> +		if (copy_from_user(&heap_allocation, (void __user *)arg,
> +				   sizeof(heap_allocation)))
> +			return -EFAULT;
> +
> +		if (heap_allocation.fd ||
> +		    heap_allocation.reserved0 ||
> +		    heap_allocation.reserved1 ||
> +		    heap_allocation.reserved2) {
> +			pr_warn_once("dma_heap: ioctl data not valid\n");
> +			return -EINVAL;
> +		}

Good job in forcing the reserved fields to be 0!

> +		if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
> +			pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
> +			return -EINVAL;
> +		}
> +
> +		fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
> +					   heap_allocation.flags);

No max value checking for .len?  Can you really ask for anything?

> +		if (fd < 0)
> +			return fd;
> +
> +		heap_allocation.fd = fd;
> +
> +		if (copy_to_user((void __user *)arg, &heap_allocation,
> +				 sizeof(heap_allocation)))
> +			return -EFAULT;
> +
> +		break;
> +	}
> +	default:
> +		return -ENOTTY;
> +	}
> +
> +	return 0;
> +}
> +
> +static const struct file_operations dma_heap_fops = {
> +	.owner          = THIS_MODULE,
> +	.open		= dma_heap_open,
> +	.release	= dma_heap_release,
> +	.unlocked_ioctl = dma_heap_ioctl,
> +#ifdef CONFIG_COMPAT
> +	.compat_ioctl	= dma_heap_ioctl,
> +#endif

Why is compat_ioctl even needed?

> +};
> +
> +int dma_heap_add(struct dma_heap *heap)
> +{
> +	struct device *dev_ret;
> +	int ret;
> +
> +	if (!heap->name || !strcmp(heap->name, "")) {
> +		pr_err("dma_heap: Cannot add heap without a name\n");
> +		return -EINVAL;
> +	}
> +
> +	if (!heap->ops || !heap->ops->allocate) {
> +		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
> +		return -EINVAL;
> +	}
> +
> +	/* Find unused minor number */
> +	mutex_lock(&minor_lock);
> +	ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
> +	mutex_unlock(&minor_lock);

Again, xarray.

But I will ask you to back up, why need a major number at all?  Why not
just use the misc subsystem?  How many of these are you going to have
over time in a "normal" system?  How about an "abnormal" system?

We have seen people running Android in "containers" such that they
needed binderfs to handle huge numbers of individual android systems
running at the same time.  Will this api break those systems if you have
a tiny maximum number you can allocate?

> +	if (ret < 0) {
> +		pr_err("dma_heap: Unable to get minor number for heap\n");
> +		return ret;
> +	}
> +	heap->minor = ret;
> +
> +	/* Create device */
> +	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> +	dev_ret = device_create(dma_heap_class,
> +				NULL,
> +				heap->heap_devt,
> +				NULL,
> +				heap->name);

No parent?  Can't hang this off of anything?  Ok, having it show up in
/sys/devices/virtual/ is probably good enough.

> +	if (IS_ERR(dev_ret)) {
> +		pr_err("dma_heap: Unable to create char device\n");
> +		return PTR_ERR(dev_ret);
> +	}
> +
> +	/* Add device */
> +	cdev_init(&heap->heap_cdev, &dma_heap_fops);
> +	ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
> +	if (ret < 0) {
> +		device_destroy(dma_heap_class, heap->heap_devt);
> +		pr_err("dma_heap: Unable to add char device\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(dma_heap_add);

EXPORT_SYMBOL_GPL() please?  For core stuff like this it's good.

> +
> +static int dma_heap_init(void)
> +{
> +	int ret;
> +
> +	ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
> +	if (ret)
> +		return ret;
> +
> +	dma_heap_class = class_create(THIS_MODULE, DEVNAME);
> +	if (IS_ERR(dma_heap_class)) {
> +		unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
> +		return PTR_ERR(dma_heap_class);
> +	}
> +
> +	return 0;
> +}
> +subsys_initcall(dma_heap_init);

Overall, looks sane, the comments above are all really minor.


> --- /dev/null
> +++ b/include/uapi/linux/dma-heap.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

Wrong license for a uapi .h file :(

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework
  2019-03-27 14:53   ` Greg KH
@ 2019-03-28  6:09     ` John Stultz
  0 siblings, 0 replies; 68+ messages in thread
From: John Stultz @ 2019-03-28  6:09 UTC (permalink / raw)
  To: Greg KH
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard,
	Sumit Semwal, Liam Mark, Brian Starkey, Chenbo Feng,
	Alistair Strachan, dri-devel

On Wed, Mar 27, 2019 at 11:25 AM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Tue, Mar 05, 2019 at 12:54:29PM -0800, John Stultz wrote:
> > From: "Andrew F. Davis" <afd@ti.com>
> >
> > This framework allows a unified userspace interface for dma-buf
> > exporters, allowing userland to allocate specific types of
> > memory for use in dma-buf sharing.
> >
> > Each heap is given its own device node, which a user can
> > allocate a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
> >
> > This code is an evolution of the Android ION implementation,
> > and a big thanks is due to its authors/maintainers over time
> > for their effort:
> >   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
> >   Laura Abbott, and many other contributors!
>
> Comments just on the user/kernel api and how it interacts with the
> driver model.  Not on the "real" functionality of this code :)

Thanks so much for the feedback! In some cases Andrew and I have
already made the changes you've suggested, and hopefully will have a
new version to share soon.


> > +#define DEVNAME "dma_heap"
> > +
> > +#define NUM_HEAP_MINORS 128
>
> Why a max?

Mostly because other drivers do. I'll see if this can be removed with
the Xarray bits.


> > +static DEFINE_IDR(dma_heap_idr);
> > +static DEFINE_MUTEX(minor_lock); /* Protect idr accesses */
>
> Move to use xarray now so that Matthew doesn't have to send patches
> converting this code later :)
>
> It also allows you to drop the mutex.

Yep. Already converted to Xarray, it is nicer!


> > +dev_t dma_heap_devt;
> > +struct class *dma_heap_class;
> > +struct list_head dma_heap_list;
> > +struct dentry *dma_heap_debug_root;
>
> Global variables?

Oops. Will make those static. Thanks!

> > +
> > +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> > +                              unsigned int flags)
> > +{
> > +     len = PAGE_ALIGN(len);
> > +     if (!len)
> > +             return -EINVAL;
> > +
> > +     return heap->ops->allocate(heap, len, flags);
> > +}
> > +
> > +static int dma_heap_open(struct inode *inode, struct file *filp)
> > +{
> > +     struct dma_heap *heap;
> > +
> > +     mutex_lock(&minor_lock);
> > +     heap = idr_find(&dma_heap_idr, iminor(inode));
> > +     mutex_unlock(&minor_lock);
> > +     if (!heap) {
> > +             pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
> > +             return -ENODEV;
> > +     }
> > +
> > +     /* instance data as context */
> > +     filp->private_data = heap;
> > +     nonseekable_open(inode, filp);
> > +
> > +     return 0;
> > +}
> > +
> > +static int dma_heap_release(struct inode *inode, struct file *filp)
> > +{
> > +     filp->private_data = NULL;
>
> Why does this matter?  release should only be called on the way out of
> here, no need to do anything as nothing else can be called, right?
>
> release shouldn't be needed from what I can tell.

Yep. Christoph suggested the same and its been removed already.


> > +             if (heap_allocation.flags & ~DMA_HEAP_VALID_FLAGS) {
> > +                     pr_warn_once("dma_heap: flags has invalid or unsupported flags set\n");
> > +                     return -EINVAL;
> > +             }
> > +
> > +             fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
> > +                                        heap_allocation.flags);
>
> No max value checking for .len?  Can you really ask for anything?

So I think any length caps would be heap specific, so we want to pass
them on here.

> > +static const struct file_operations dma_heap_fops = {
> > +     .owner          = THIS_MODULE,
> > +     .open           = dma_heap_open,
> > +     .release        = dma_heap_release,
> > +     .unlocked_ioctl = dma_heap_ioctl,
> > +#ifdef CONFIG_COMPAT
> > +     .compat_ioctl   = dma_heap_ioctl,
> > +#endif
>
> Why is compat_ioctl even needed?

Probably my mistake. I didn't realize if we're running 32bit on 64bit
and there's no compat, the unlocked_ioctl gets called.


> > +     /* Find unused minor number */
> > +     mutex_lock(&minor_lock);
> > +     ret = idr_alloc(&dma_heap_idr, heap, 0, NUM_HEAP_MINORS, GFP_KERNEL);
> > +     mutex_unlock(&minor_lock);
>
> Again, xarray.

Ack.

> But I will ask you to back up, why need a major number at all?  Why not
> just use the misc subsystem?  How many of these are you going to have
> over time in a "normal" system?  How about an "abnormal" system?

So early implementations did use misc, but in order to get the
/dev/heap/cma_heap style directories, in both Android and classic udev
Linux systems I had to create a class.

This v2 patch didn't get it quite right (got it working properly in
Android but not on classic systems), but the next version does get the
subdir created properly (similar to how the input layer does it).

As for number of heaps, I wouldn't expect there to be a ton on any
given system. Most likely less than 16, but possibly up to 32. 128
seemed like a safe "crazy out there" cap. But perspectives on crazy
shift over time :)

> We have seen people running Android in "containers" such that they
> needed binderfs to handle huge numbers of individual android systems
> running at the same time.  Will this api break those systems if you have
> a tiny maximum number you can allocate?

I'd have to think some more on this. Right now I'd expect that you'd
not be trying to virtualize the heaps in a container so you'd not have
m heaps * n containers on the system. Instead the containers would
mount/link in the devnode (I'm a bit fuzzy on how containers handle
devnode creation/restrictions) as appropriate (the nice part with this
over ION is we have per heap dev nodes, so the set shared can be
limited).  But I'd have to think more about the risks of how multiple
containers might share things like cma heaps.


> > +     if (ret < 0) {
> > +             pr_err("dma_heap: Unable to get minor number for heap\n");
> > +             return ret;
> > +     }
> > +     heap->minor = ret;
> > +
> > +     /* Create device */
> > +     heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> > +     dev_ret = device_create(dma_heap_class,
> > +                             NULL,
> > +                             heap->heap_devt,
> > +                             NULL,
> > +                             heap->name);
>
> No parent?  Can't hang this off of anything?  Ok, having it show up in
> /sys/devices/virtual/ is probably good enough.
>
> > +     if (IS_ERR(dev_ret)) {
> > +             pr_err("dma_heap: Unable to create char device\n");
> > +             return PTR_ERR(dev_ret);
> > +     }
> > +
> > +     /* Add device */
> > +     cdev_init(&heap->heap_cdev, &dma_heap_fops);
> > +     ret = cdev_add(&heap->heap_cdev, dma_heap_devt, NUM_HEAP_MINORS);
> > +     if (ret < 0) {
> > +             device_destroy(dma_heap_class, heap->heap_devt);
> > +             pr_err("dma_heap: Unable to add char device\n");
> > +             return ret;
> > +     }
> > +
> > +     return 0;
> > +}
> > +EXPORT_SYMBOL(dma_heap_add);
>
> EXPORT_SYMBOL_GPL() please?  For core stuff like this it's good.

Actually, I removed the export completely for now since it's probably not
ready for modules yet. But will be sure to tag it GPL when we do
re-add it.


> > +
> > +static int dma_heap_init(void)
> > +{
> > +     int ret;
> > +
> > +     ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
> > +     if (ret)
> > +             return ret;
> > +
> > +     dma_heap_class = class_create(THIS_MODULE, DEVNAME);
> > +     if (IS_ERR(dma_heap_class)) {
> > +             unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
> > +             return PTR_ERR(dma_heap_class);
> > +     }
> > +
> > +     return 0;
> > +}
> > +subsys_initcall(dma_heap_init);
>
> Overall, looks sane, the comments above are all really minor.

Very much appreciate the review!


>
> > --- /dev/null
> > +++ b/include/uapi/linux/dma-heap.h
> > @@ -0,0 +1,52 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
>
> Wrong license for a uapi .h file :(

Ack. Fixed.

Thanks so much again!
-john

^ permalink raw reply	[flat|nested] 68+ messages in thread

end of thread, other threads:[~2019-03-28  6:09 UTC | newest]

Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-05 20:54 [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) John Stultz
2019-03-05 20:54 ` [RFC][PATCH 1/5 v2] dma-buf: Add dma-buf heaps framework John Stultz
2019-03-06 16:12   ` Benjamin Gaignard
2019-03-06 16:57     ` John Stultz
2019-03-15  8:55     ` Christoph Hellwig
2019-03-06 16:27   ` Andrew F. Davis
2019-03-06 19:03     ` John Stultz
2019-03-06 21:45       ` Andrew F. Davis
2019-03-15  8:54   ` Christoph Hellwig
2019-03-15 20:24     ` Andrew F. Davis
2019-03-15 20:18   ` Laura Abbott
2019-03-15 20:49     ` Andrew F. Davis
2019-03-15 21:29     ` John Stultz
2019-03-15 22:44       ` Laura Abbott
2019-03-18  4:41         ` Sumit Semwal
2019-03-19 12:08   ` Brian Starkey
2019-03-19 15:24     ` Andrew F. Davis
2019-03-21 21:16     ` John Stultz
2019-03-27 14:53   ` Greg KH
2019-03-28  6:09     ` John Stultz
2019-03-05 20:54 ` [RFC][PATCH 2/5 v2] dma-buf: heaps: Add heap helpers John Stultz
2019-03-13 20:18   ` Liam Mark
2019-03-13 21:48     ` Andrew F. Davis
2019-03-13 22:57       ` Liam Mark
2019-03-13 23:42         ` Andrew F. Davis
2019-03-15  9:06   ` Christoph Hellwig
2019-03-19 15:03     ` Andrew F. Davis
2019-03-21 20:01     ` John Stultz
2019-03-19 14:26   ` Brian Starkey
2019-03-21 20:11     ` John Stultz
2019-03-21 20:35     ` Andrew F. Davis
2019-03-21 20:43   ` Andrew F. Davis
2019-03-05 20:54 ` [RFC][PATCH 3/5 v2] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
2019-03-06 16:01   ` Benjamin Gaignard
2019-03-11  5:48     ` John Stultz
2019-03-13 20:20   ` Liam Mark
2019-03-13 22:49     ` John Stultz
2019-03-15  9:06   ` Christoph Hellwig
2019-03-05 20:54 ` [RFC][PATCH 4/5 v2] dma-buf: heaps: Add CMA heap to dmabuf heapss John Stultz
2019-03-06 16:05   ` Benjamin Gaignard
2019-03-21 20:15     ` John Stultz
2019-03-15  9:06   ` Christoph Hellwig
2019-03-15 20:08     ` John Stultz
2019-03-19 14:53   ` Brian Starkey
2019-03-05 20:54 ` [RFC][PATCH 5/5 v2] kselftests: Add dma-heap test John Stultz
2019-03-06 16:14   ` Benjamin Gaignard
2019-03-06 16:35     ` Andrew F. Davis
2019-03-06 18:19       ` John Stultz
2019-03-06 18:32         ` Andrew F. Davis
2019-03-06 17:01     ` John Stultz
2019-03-15 20:07       ` Laura Abbott
2019-03-15 20:13         ` John Stultz
2019-03-15 20:49           ` Laura Abbott
2019-03-13 20:23   ` Liam Mark
2019-03-13 20:11 ` [RFC][PATCH 0/5 v2] DMA-BUF Heaps (destaging ION) Liam Mark
2019-03-13 22:30   ` John Stultz
2019-03-13 23:29     ` Liam Mark
2019-03-19 16:54     ` Benjamin Gaignard
2019-03-19 16:59       ` Andrew F. Davis
2019-03-19 21:58         ` Rob Clark
2019-03-19 22:36           ` John Stultz
2019-03-20  9:16             ` Benjamin Gaignard
2019-03-20 14:44               ` Andrew F. Davis
2019-03-20 15:59                 ` Benjamin Gaignard
2019-03-20 16:11               ` John Stultz
2019-03-15 20:34 ` Laura Abbott
2019-03-15 23:15 ` Jerome Glisse
2019-03-16  0:16   ` John Stultz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).