linux-kernel.vger.kernel.org archive mirror
* [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
@ 2019-09-06 18:47 John Stultz
  2019-09-06 18:47 ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
                   ` (9 more replies)
  0 siblings, 10 replies; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
	Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
	Sudipto Paul, Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

Here is yet another pass at the dma-buf heaps patchset Andrew
and I have been working on, which tries to destage a fair chunk
of ION functionality.

The patchset implements per-heap devices which can be opened
directly; an ioctl is then used to allocate a dmabuf from the
heap.

The interface is similar to, but much simpler than, ION's,
providing only an ALLOC ioctl.

Also, I've provided relatively simple system and cma heaps.

I've booted and tested these patches with AOSP on the HiKey960
using the kernel tree here:
  https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap

And the userspace changes here:
  https://android-review.googlesource.com/c/device/linaro/hikey/+/909436

Compared to ION, this patchset is missing the system-contig,
carveout and chunk heaps, as I don't have a device that uses
those, so I'm unable to do much useful validation there.
Additionally we have no upstream users of chunk or carveout,
and the system-contig has been deprecated in the common/android-*
kernels, so this should be ok.

I've also removed the stats accounting, since any such accounting
should be implemented by dma-buf core or the heaps themselves.

Most of the changes in this revision are addressing the more
concrete feedback from Christoph (many thanks!). Though I'm not
sure if some of the less specific feedback was completely resolved
in discussion last time around. Please let me know!

New in v8:
* Make struct dma_heap_ops consts (Suggested by Christoph)
* Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
  (suggested by Christoph)
* Condense dma_heap_buffer and heap_helper_buffer (suggested by
  Christoph)
* Get rid of needless struct system_heap (suggested by Christoph)
* Fix indentation by using shorter argument names (suggested by
  Christoph)
* Remove unused private_flags value
* Add forgotten include file to fix build issue on x86
* Checkpatch whitespace fixups

Thoughts and feedback would be greatly appreciated!

thanks
-john

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org


Andrew F. Davis (1):
  dma-buf: Add dma-buf heaps framework

John Stultz (4):
  dma-buf: heaps: Add heap helpers
  dma-buf: heaps: Add system heap to dmabuf heaps
  dma-buf: heaps: Add CMA heap to dmabuf heaps
  kselftests: Add dma-heap test

 MAINTAINERS                                   |  18 ++
 drivers/dma-buf/Kconfig                       |  11 +
 drivers/dma-buf/Makefile                      |   2 +
 drivers/dma-buf/dma-heap.c                    | 250 ++++++++++++++++
 drivers/dma-buf/heaps/Kconfig                 |  14 +
 drivers/dma-buf/heaps/Makefile                |   4 +
 drivers/dma-buf/heaps/cma_heap.c              | 164 +++++++++++
 drivers/dma-buf/heaps/heap-helpers.c          | 269 ++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h          |  55 ++++
 drivers/dma-buf/heaps/system_heap.c           | 122 ++++++++
 include/linux/dma-heap.h                      |  59 ++++
 include/uapi/linux/dma-heap.h                 |  55 ++++
 tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
 .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 +++++++++++++++
 14 files changed, 1262 insertions(+)
 create mode 100644 drivers/dma-buf/dma-heap.c
 create mode 100644 drivers/dma-buf/heaps/Kconfig
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
 create mode 100644 drivers/dma-buf/heaps/system_heap.c
 create mode 100644 include/linux/dma-heap.h
 create mode 100644 include/uapi/linux/dma-heap.h
 create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
 create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c

-- 
2.17.1


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
@ 2019-09-06 18:47 ` John Stultz
  2019-09-23 22:08   ` Brian Starkey
  2019-09-06 18:47 ` [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers John Stultz
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: Andrew F. Davis, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
	Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
	Sudipto Paul, Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, John Stultz

From: "Andrew F. Davis" <afd@ti.com>

This framework allows a unified userspace interface for dma-buf
exporters, allowing userland to allocate specific types of memory
for use in dma-buf sharing.

Each heap is given its own device node, from which a user can
allocate a dma-buf fd using the DMA_HEAP_IOC_ALLOC ioctl.

This code is an evolution of the Android ION implementation,
and a big thanks is due to its authors/maintainers over time
for their effort:
  Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
  Laura Abbott, and many other contributors!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: Andrew F. Davis <afd@ti.com>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Folded down fixes I had previously shared in implementing
  heaps
* Make flags a u64 (Suggested by Laura)
* Add PAGE_ALIGN() fix to the core alloc function
* IOCTL fixups suggested by Brian
* Added fixes suggested by Benjamin
* Removed core stats mgmt, as that should be implemented by
  per-heap code
* Changed alloc to return a dma-buf fd, rather than a buffer
  (as it simplifies error handling)
v3:
* Removed scare-quotes in MAINTAINERS email address
* Get rid of .release function as it didn't do anything (from
  Christoph)
* Renamed filp to file (suggested by Christoph)
* Split out ioctl handling to separate function (suggested by
  Christoph)
* Add comment documenting PAGE_ALIGN usage (suggested by Brian)
* Switch from idr to Xarray (suggested by Brian)
* Fixup cdev creation (suggested by Brian)
* Avoid EXPORT_SYMBOL until we finalize modules (suggested by
  Brian)
* Make struct dma_heap internal only (folded in from Andrew)
* Small cleanups suggested by GregKH
* Provide class->devnode callback to get consistent /dev/
  subdirectory naming (Suggested by Bjorn)
v4:
* Folded down dma-heap.h change that was in a following patch
* Added fd_flags entry to allocation structure and pass it
  through to heap code for use on dma-buf fd creation (suggested
  by Benjamin)
v5:
* Minor cleanups
v6:
* Improved error path handling, minor whitespace fixes, both
  suggested by Brian
v7:
* Longer Kconfig description to quiet checkpatch warnings
* Re-add compat_ioctl bits (Hridya noticed 32bit userland wasn't
  working)
v8:
* Make struct dma_heap_ops consts (Suggested by Christoph)
* Checkpatch whitespace fixups
---
 MAINTAINERS                   |  18 +++
 drivers/dma-buf/Kconfig       |   9 ++
 drivers/dma-buf/Makefile      |   1 +
 drivers/dma-buf/dma-heap.c    | 250 ++++++++++++++++++++++++++++++++++
 include/linux/dma-heap.h      |  59 ++++++++
 include/uapi/linux/dma-heap.h |  55 ++++++++
 6 files changed, 392 insertions(+)
 create mode 100644 drivers/dma-buf/dma-heap.c
 create mode 100644 include/linux/dma-heap.h
 create mode 100644 include/uapi/linux/dma-heap.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e7a47b5210fd..13e564e37161 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4904,6 +4904,24 @@ F:	include/linux/*fence.h
 F:	Documentation/driver-api/dma-buf.rst
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 
+DMA-BUF HEAPS FRAMEWORK
+M:	Sumit Semwal <sumit.semwal@linaro.org>
+R:	Andrew F. Davis <afd@ti.com>
+R:	Benjamin Gaignard <benjamin.gaignard@linaro.org>
+R:	Liam Mark <lmark@codeaurora.org>
+R:	Laura Abbott <labbott@redhat.com>
+R:	Brian Starkey <Brian.Starkey@arm.com>
+R:	John Stultz <john.stultz@linaro.org>
+S:	Maintained
+L:	linux-media@vger.kernel.org
+L:	dri-devel@lists.freedesktop.org
+L:	linaro-mm-sig@lists.linaro.org (moderated for non-subscribers)
+F:	include/uapi/linux/dma-heap.h
+F:	include/linux/dma-heap.h
+F:	drivers/dma-buf/dma-heap.c
+F:	drivers/dma-buf/heaps/*
+T:	git git://anongit.freedesktop.org/drm/drm-misc
+
 DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
 M:	Vinod Koul <vkoul@kernel.org>
 L:	dmaengine@vger.kernel.org
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index b6a9c2f1bc41..162e24e1e429 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,13 @@ config UDMABUF
 	  A driver to let userspace turn memfd regions into dma-bufs.
 	  Qemu can use this to create host dmabufs for guest framebuffers.
 
+menuconfig DMABUF_HEAPS
+	bool "DMA-BUF Userland Memory Heaps"
+	select DMA_SHARED_BUFFER
+	help
+	  Choose this option to enable the DMA-BUF userland memory heaps.
+	  This option creates per-heap chardevs in /dev/dma_heap/ which
+	  userspace can use to allocate dma-bufs that can be shared
+	  between drivers.
+
 endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index e8c7310cb800..1cb3dd104825 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
 	 reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
 obj-$(CONFIG_UDMABUF)		+= udmabuf.o
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
new file mode 100644
index 000000000000..f961dbf6ec72
--- /dev/null
+++ b/drivers/dma-buf/dma-heap.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Framework for userspace DMA-BUF allocations
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/xarray.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/dma-heap.h>
+#include <uapi/linux/dma-heap.h>
+
+#define DEVNAME "dma_heap"
+
+#define NUM_HEAP_MINORS 128
+
+/**
+ * struct dma_heap - represents a dmabuf heap in the system
+ * @name:		used for debugging/device-node name
+ * @ops:		ops struct for this heap
+ * @minor:		minor number of this heap device
+ * @heap_devt:		heap device node
+ * @heap_cdev:		heap char device
+ *
+ * Represents a heap of memory from which buffers can be made.
+ */
+struct dma_heap {
+	const char *name;
+	const struct dma_heap_ops *ops;
+	void *priv;
+	unsigned int minor;
+	dev_t heap_devt;
+	struct cdev heap_cdev;
+};
+
+static dev_t dma_heap_devt;
+static struct class *dma_heap_class;
+static DEFINE_XARRAY_ALLOC(dma_heap_minors);
+
+static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+				 unsigned int fd_flags,
+				 unsigned int heap_flags)
+{
+	/*
+	 * Allocations from all heaps have to begin
+	 * and end on page boundaries.
+	 */
+	len = PAGE_ALIGN(len);
+	if (!len)
+		return -EINVAL;
+
+	return heap->ops->allocate(heap, len, fd_flags, heap_flags);
+}
+
+static int dma_heap_open(struct inode *inode, struct file *file)
+{
+	struct dma_heap *heap;
+
+	heap = xa_load(&dma_heap_minors, iminor(inode));
+	if (!heap) {
+		pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
+		return -ENODEV;
+	}
+
+	/* instance data as context */
+	file->private_data = heap;
+	nonseekable_open(inode, file);
+
+	return 0;
+}
+
+static long dma_heap_ioctl_allocate(struct file *file, unsigned long arg)
+{
+	struct dma_heap_allocation_data heap_allocation;
+	struct dma_heap *heap = file->private_data;
+	int fd;
+
+	if (copy_from_user(&heap_allocation, (void __user *)arg,
+			   sizeof(heap_allocation)))
+		return -EFAULT;
+
+	if (heap_allocation.fd ||
+	    heap_allocation.reserved0 ||
+	    heap_allocation.reserved1) {
+		pr_warn_once("dma_heap: ioctl data not valid\n");
+		return -EINVAL;
+	}
+
+	if (heap_allocation.fd_flags & ~DMA_HEAP_VALID_FD_FLAGS) {
+		pr_warn_once("dma_heap: fd_flags has invalid or unsupported flags set\n");
+		return -EINVAL;
+	}
+
+	if (heap_allocation.heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS) {
+		pr_warn_once("dma_heap: heap flags has invalid or unsupported flags set\n");
+		return -EINVAL;
+	}
+
+	fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
+				   heap_allocation.fd_flags,
+				   heap_allocation.heap_flags);
+	if (fd < 0)
+		return fd;
+
+	heap_allocation.fd = fd;
+
+	if (copy_to_user((void __user *)arg, &heap_allocation,
+			 sizeof(heap_allocation))) {
+		ksys_close(fd);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static long dma_heap_ioctl(struct file *file, unsigned int cmd,
+			   unsigned long arg)
+{
+	int ret = 0;
+
+	switch (cmd) {
+	case DMA_HEAP_IOC_ALLOC:
+		ret = dma_heap_ioctl_allocate(file, arg);
+		break;
+	default:
+		return -ENOTTY;
+	}
+
+	return ret;
+}
+
+static const struct file_operations dma_heap_fops = {
+	.owner          = THIS_MODULE,
+	.open		= dma_heap_open,
+	.unlocked_ioctl = dma_heap_ioctl,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl	= dma_heap_ioctl,
+#endif
+};
+
+/**
+ * dma_heap_get_data() - get per-subdriver data for the heap
+ * @heap: DMA-Heap to retrieve private data for
+ *
+ * Returns:
+ * The per-subdriver data for the heap.
+ */
+void *dma_heap_get_data(struct dma_heap *heap)
+{
+	return heap->priv;
+}
+
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+{
+	struct dma_heap *heap, *err_ret;
+	struct device *dev_ret;
+	int ret;
+
+	if (!exp_info->name || !strcmp(exp_info->name, "")) {
+		pr_err("dma_heap: Cannot add heap without a name\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (!exp_info->ops || !exp_info->ops->allocate) {
+		pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	heap = kzalloc(sizeof(*heap), GFP_KERNEL);
+	if (!heap)
+		return ERR_PTR(-ENOMEM);
+
+	heap->name = exp_info->name;
+	heap->ops = exp_info->ops;
+	heap->priv = exp_info->priv;
+
+	/* Find unused minor number */
+	ret = xa_alloc(&dma_heap_minors, &heap->minor, heap,
+		       XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
+	if (ret < 0) {
+		pr_err("dma_heap: Unable to get minor number for heap\n");
+		err_ret = ERR_PTR(ret);
+		goto err0;
+	}
+
+	/* Create device */
+	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
+
+	cdev_init(&heap->heap_cdev, &dma_heap_fops);
+	ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
+	if (ret < 0) {
+		pr_err("dma_heap: Unable to add char device\n");
+		err_ret = ERR_PTR(ret);
+		goto err1;
+	}
+
+	dev_ret = device_create(dma_heap_class,
+				NULL,
+				heap->heap_devt,
+				NULL,
+				heap->name);
+	if (IS_ERR(dev_ret)) {
+		pr_err("dma_heap: Unable to create device\n");
+		err_ret = (struct dma_heap *)dev_ret;
+		goto err2;
+	}
+
+	return heap;
+
+err2:
+	cdev_del(&heap->heap_cdev);
+err1:
+	xa_erase(&dma_heap_minors, heap->minor);
+err0:
+	kfree(heap);
+	return err_ret;
+}
+
+static char *dma_heap_devnode(struct device *dev, umode_t *mode)
+{
+	return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
+}
+
+static int dma_heap_init(void)
+{
+	int ret;
+
+	ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
+	if (ret)
+		return ret;
+
+	dma_heap_class = class_create(THIS_MODULE, DEVNAME);
+	if (IS_ERR(dma_heap_class)) {
+		unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
+		return PTR_ERR(dma_heap_class);
+	}
+	dma_heap_class->devnode = dma_heap_devnode;
+
+	return 0;
+}
+subsys_initcall(dma_heap_init);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
new file mode 100644
index 000000000000..df43e6d905e7
--- /dev/null
+++ b/include/linux/dma-heap.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Allocation Infrastructure
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _DMA_HEAPS_H
+#define _DMA_HEAPS_H
+
+#include <linux/cdev.h>
+#include <linux/types.h>
+
+struct dma_heap;
+
+/**
+ * struct dma_heap_ops - ops to operate on a given heap
+ * @allocate:		allocate dmabuf and return fd
+ *
+ * allocate returns dmabuf fd on success, -errno on error.
+ */
+struct dma_heap_ops {
+	int (*allocate)(struct dma_heap *heap,
+			unsigned long len,
+			unsigned long fd_flags,
+			unsigned long heap_flags);
+};
+
+/**
+ * struct dma_heap_export_info - information needed to export a new dmabuf heap
+ * @name:	used for debugging/device-node name
+ * @ops:	ops struct for this heap
+ * @priv:	heap exporter private data
+ *
+ * Information needed to export a new dmabuf heap.
+ */
+struct dma_heap_export_info {
+	const char *name;
+	const struct dma_heap_ops *ops;
+	void *priv;
+};
+
+/**
+ * dma_heap_get_data() - get per-heap driver data
+ * @heap: DMA-Heap to retrieve private data for
+ *
+ * Returns:
+ * The per-heap data for the heap.
+ */
+void *dma_heap_get_data(struct dma_heap *heap);
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @exp_info:		information needed to register this heap
+ */
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
+
+#endif /* _DMA_HEAPS_H */
diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
new file mode 100644
index 000000000000..6ce5cc68d238
--- /dev/null
+++ b/include/uapi/linux/dma-heap.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * DMABUF Heaps Userspace API
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _UAPI_LINUX_DMABUF_POOL_H
+#define _UAPI_LINUX_DMABUF_POOL_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+/**
+ * DOC: DMABUF Heaps Userspace API
+ */
+
+/* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
+#define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)
+
+/* Currently no heap flags */
+#define DMA_HEAP_VALID_HEAP_FLAGS (0)
+
+/**
+ * struct dma_heap_allocation_data - metadata passed from userspace for
+ *                                      allocations
+ * @len:		size of the allocation
+ * @fd:			will be populated with a fd which provides the
+ *			handle to the allocated dma-buf
+ * @fd_flags:		file descriptor flags used when allocating
+ * @heap_flags:		flags passed to heap
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dma_heap_allocation_data {
+	__u64 len;
+	__u32 fd;
+	__u32 fd_flags;
+	__u64 heap_flags;
+	__u32 reserved0;
+	__u32 reserved1;
+};
+
+#define DMA_HEAP_IOC_MAGIC		'H'
+
+/**
+ * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from heap
+ *
+ * Takes a dma_heap_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMA_HEAP_IOC_ALLOC	_IOWR(DMA_HEAP_IOC_MAGIC, 0, \
+				      struct dma_heap_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */
-- 
2.17.1



* [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
  2019-09-06 18:47 ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
@ 2019-09-06 18:47 ` John Stultz
  2019-09-23 22:08   ` Brian Starkey
  2019-09-06 18:47 ` [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
	Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
	Sudipto Paul, Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

Add generic helper dmabuf ops for dma heaps, so we can reduce
the amount of duplicative code for the exported dmabufs.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Removed cache management performance hack that I had
  accidentally folded in.
* Removed stats code that was in helpers
* Lots of checkpatch cleanups
v3:
* Uninline INIT_HEAP_HELPER_BUFFER (suggested by Christoph)
* Switch to WARN on buffer destroy failure (suggested by Brian)
* buffer->kmap_cnt decrementing cleanup (suggested by Christoph)
* Extra buffer->vaddr checking in dma_heap_dma_buf_kmap
  (suggested by Brian)
* Switch to_helper_buffer from macro to inline function
  (suggested by Benjamin)
* Rename kmap->vmap (folded in from Andrew)
* Use vmap for vmapping - not begin_cpu_access (folded in from
  Andrew)
* Drop kmap for now, as its optional (folded in from Andrew)
* Fold dma_heap_map_user into the single caller (folded in from
  Andrew)
* Folded in patch from Andrew to track page list per heap not
  sglist, which simplifies the tracking logic
v4:
* Moved dma-heap.h change out to previous patch
v6:
* Minor cleanups and typo fixes suggested by Brian
v7:
* Removed stray ;
* Make init_heap_helper_buffer lowercase, as suggested by Christoph
* Add dmabuf export helper to reduce boilerplate code
v8:
* Remove unused private_flags value
* Condense dma_heap_buffer and heap_helper_buffer (suggested by
  Christoph)
* Fix indentation by using shorter argument names (suggested by
  Christoph)
* Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
  (suggested by Christoph)
* Checkpatch whitespace fixups
---
 drivers/dma-buf/Makefile             |   1 +
 drivers/dma-buf/heaps/Makefile       |   2 +
 drivers/dma-buf/heaps/heap-helpers.c | 269 +++++++++++++++++++++++++++
 drivers/dma-buf/heaps/heap-helpers.h |  55 ++++++
 4 files changed, 327 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Makefile
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
 create mode 100644 drivers/dma-buf/heaps/heap-helpers.h

diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 1cb3dd104825..e3e3dca29e46 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
 	 reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
 obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
 obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
 obj-$(CONFIG_SW_SYNC)		+= sw_sync.o sync_debug.o
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
new file mode 100644
index 000000000000..de49898112db
--- /dev/null
+++ b/drivers/dma-buf/heaps/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y					+= heap-helpers.o
diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
new file mode 100644
index 000000000000..b2500d831fe3
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.c
@@ -0,0 +1,269 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/idr.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <uapi/linux/dma-heap.h>
+#include "heap-helpers.h"
+
+void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
+			     void (*free)(struct heap_helper_buffer *))
+{
+	buffer->priv_virt = NULL;
+	mutex_init(&buffer->lock);
+	buffer->vmap_cnt = 0;
+	buffer->vaddr = NULL;
+	buffer->pagecount = 0;
+	buffer->pages = NULL;
+	INIT_LIST_HEAD(&buffer->attachments);
+	buffer->free = free;
+}
+
+struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
+					  int fd_flags)
+{
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+	exp_info.ops = &heap_helper_ops;
+	exp_info.size = buffer->size;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+
+	return dma_buf_export(&exp_info);
+}
+
+static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
+{
+	void *vaddr;
+
+	vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static void dma_heap_buffer_destroy(struct heap_helper_buffer *buffer)
+{
+	if (buffer->vmap_cnt > 0) {
+		WARN(1, "%s: buffer still mapped in the kernel\n", __func__);
+		vunmap(buffer->vaddr);
+	}
+
+	buffer->free(buffer);
+}
+
+static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer)
+{
+	void *vaddr;
+
+	if (buffer->vmap_cnt) {
+		buffer->vmap_cnt++;
+		return buffer->vaddr;
+	}
+	vaddr = dma_heap_map_kernel(buffer);
+	if (WARN_ONCE(!vaddr,
+		      "dma_heap_map_kernel should return ERR_PTR on error"))
+		return ERR_PTR(-EINVAL);
+	if (IS_ERR(vaddr))
+		return vaddr;
+	buffer->vaddr = vaddr;
+	buffer->vmap_cnt++;
+	return vaddr;
+}
+
+static void dma_heap_buffer_vmap_put(struct heap_helper_buffer *buffer)
+{
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+}
+
+struct dma_heaps_attachment {
+	struct device *dev;
+	struct sg_table table;
+	struct list_head list;
+};
+
+static int dma_heap_attach(struct dma_buf *dmabuf,
+			   struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a;
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+	int ret;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+					buffer->pagecount, 0,
+					buffer->pagecount << PAGE_SHIFT,
+					GFP_KERNEL);
+	if (ret) {
+		kfree(a);
+		return ret;
+	}
+
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void dma_heap_detach(struct dma_buf *dmabuf,
+			    struct dma_buf_attachment *attachment)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(&a->table);
+	kfree(a);
+}
+
+static
+struct sg_table *dma_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+				      enum dma_data_direction direction)
+{
+	struct dma_heaps_attachment *a = attachment->priv;
+	struct sg_table *table;
+
+	table = &a->table;
+
+	if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+			direction))
+		table = ERR_PTR(-ENOMEM);
+	return table;
+}
+
+static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				   struct sg_table *table,
+				   enum dma_data_direction direction)
+{
+	dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+}
+
+static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct heap_helper_buffer *buffer = vma->vm_private_data;
+
+	vmf->page = buffer->pages[vmf->pgoff];
+	get_page(vmf->page);
+
+	return 0;
+}
+
+static const struct vm_operations_struct dma_heap_vm_ops = {
+	.fault = dma_heap_vm_fault,
+};
+
+static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	vma->vm_ops = &dma_heap_vm_ops;
+	vma->vm_private_data = buffer;
+
+	return 0;
+}
+
+static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+
+	dma_heap_buffer_destroy(buffer);
+}
+
+static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+					     enum dma_data_direction direction)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+	struct dma_heaps_attachment *a;
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->size);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
+				    direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return ret;
+}
+
+static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					   enum dma_data_direction direction)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+	struct dma_heaps_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->size);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
+				       direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	vaddr = dma_heap_buffer_vmap_get(buffer);
+	mutex_unlock(&buffer->lock);
+
+	return vaddr;
+}
+
+static void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+	struct heap_helper_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	dma_heap_buffer_vmap_put(buffer);
+	mutex_unlock(&buffer->lock);
+}
+
+const struct dma_buf_ops heap_helper_ops = {
+	.map_dma_buf = dma_heap_map_dma_buf,
+	.unmap_dma_buf = dma_heap_unmap_dma_buf,
+	.mmap = dma_heap_mmap,
+	.release = dma_heap_dma_buf_release,
+	.attach = dma_heap_attach,
+	.detach = dma_heap_detach,
+	.begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = dma_heap_dma_buf_end_cpu_access,
+	.vmap = dma_heap_dma_buf_vmap,
+	.vunmap = dma_heap_dma_buf_vunmap,
+};
diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
new file mode 100644
index 000000000000..ebe1c15f16cf
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps helper code
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _HEAP_HELPERS_H
+#define _HEAP_HELPERS_H
+
+#include <linux/dma-heap.h>
+#include <linux/list.h>
+
+/**
+ * struct heap_helper_buffer - helper buffer metadata
+ * @heap:		back pointer to the heap the buffer came from
+ * @dmabuf:		backing dma-buf for this buffer
+ * @size:		size of the buffer
+ * @flags:		buffer specific flags
+ * @priv_virt:		pointer to heap-specific private value
+ * @lock:		mutex to protect the data in this structure
+ * @vmap_cnt:		count of vmap references on the buffer
+ * @vaddr:		vmap'ed virtual address
+ * @pagecount:		number of pages in the buffer
+ * @pages:		list of page pointers
+ * @attachments:	list of device attachments
+ *
+ * @free:		heap callback to free the buffer
+ */
+struct heap_helper_buffer {
+	struct dma_heap *heap;
+	struct dma_buf *dmabuf;
+	size_t size;
+	unsigned long flags;
+
+	void *priv_virt;
+	struct mutex lock;
+	int vmap_cnt;
+	void *vaddr;
+	pgoff_t pagecount;
+	struct page **pages;
+	struct list_head attachments;
+
+	void (*free)(struct heap_helper_buffer *buffer);
+};
+
+void init_heap_helper_buffer(struct heap_helper_buffer *buffer,
+			     void (*free)(struct heap_helper_buffer *));
+
+struct dma_buf *heap_helper_export_dmabuf(struct heap_helper_buffer *buffer,
+					  int fd_flags);
+
+extern const struct dma_buf_ops heap_helper_ops;
+#endif /* _HEAP_HELPERS_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
  2019-09-06 18:47 ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
  2019-09-06 18:47 ` [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers John Stultz
@ 2019-09-06 18:47 ` John Stultz
  2019-09-23 22:09   ` Brian Starkey
  2019-09-06 18:47 ` [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA " John Stultz
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
	Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
	Sudipto Paul, Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

This patch adds a system heap to the dma-buf heaps framework.

This allows applications to get a page-allocator backed dma-buf
for non-contiguous memory.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
* Dropped dead system-contig code
v3:
* Whitespace fixups from Benjamin
* Make sure we're zeroing the allocated pages (from Liam)
* Use PAGE_ALIGN() consistently (suggested by Brian)
* Fold in new registration style from Andrew
* Avoid needless dynamic allocation of sys_heap (suggested by
  Christoph)
* Minor cleanups
* Folded in changes from Andrew to use simplified page list
  from the heap helpers
v4:
* Optimization to allocate pages in chunks, similar to old
  pagepool code
* Use fd_flags when creating dmabuf fd (Suggested by Benjamin)
v5:
* Back out large order page allocations (was leaking memory,
  as the page array didn't properly track order size)
v6:
* Minor whitespace change suggested by Brian
* Remove unused variable
v7:
* Use newly lower-cased init_heap_helper_buffer helper
* Add system heap DOS avoidance suggested by Laura from ION code
* Use new dmabuf export helper
v8:
* Make struct dma_heap_ops consts (suggested by Christoph)
* Get rid of needless struct system_heap (suggested by Christoph)
* Condense dma_heap_buffer and heap_helper_buffer (suggested by
  Christoph)
* Add forgotten include file to fix build issue on x86
---
 drivers/dma-buf/Kconfig             |   2 +
 drivers/dma-buf/heaps/Kconfig       |   6 ++
 drivers/dma-buf/heaps/Makefile      |   1 +
 drivers/dma-buf/heaps/system_heap.c | 122 ++++++++++++++++++++++++++++
 4 files changed, 131 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/Kconfig
 create mode 100644 drivers/dma-buf/heaps/system_heap.c

diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 162e24e1e429..657ce743abda 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -48,4 +48,6 @@ menuconfig DMABUF_HEAPS
 	  allows userspace to use to allocate dma-bufs that can be shared
 	  between drivers.
 
+source "drivers/dma-buf/heaps/Kconfig"
+
 endmenu
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
new file mode 100644
index 000000000000..205052744169
--- /dev/null
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -0,0 +1,6 @@
+config DMABUF_HEAPS_SYSTEM
+	bool "DMA-BUF System Heap"
+	depends on DMABUF_HEAPS
+	help
+	  Choose this option to enable the system dmabuf heap. The system heap
+	  is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index de49898112db..d1808eca2581 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y					+= heap-helpers.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
new file mode 100644
index 000000000000..5db4ef9b4afc
--- /dev/null
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF System heap exporter
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <asm/page.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/sched/signal.h>
+
+#include "heap-helpers.h"
+
+struct dma_heap *sys_heap;
+
+static void system_heap_free(struct heap_helper_buffer *buffer)
+{
+	pgoff_t pg;
+
+	for (pg = 0; pg < buffer->pagecount; pg++)
+		__free_page(buffer->pages[pg]);
+	kfree(buffer->pages);
+	kfree(buffer);
+}
+
+static int system_heap_allocate(struct dma_heap *heap,
+				unsigned long len,
+				unsigned long fd_flags,
+				unsigned long heap_flags)
+{
+	struct heap_helper_buffer *helper_buffer;
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	init_heap_helper_buffer(helper_buffer, system_heap_free);
+	helper_buffer->flags = heap_flags;
+	helper_buffer->heap = heap;
+	helper_buffer->size = len;
+
+	helper_buffer->pagecount = len / PAGE_SIZE;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto err0;
+	}
+
+	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+		/*
+		 * Avoid trying to allocate memory if the process
+		 * has been killed by SIGKILL
+		 */
+		if (fatal_signal_pending(current))
+			goto err1;
+
+		helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO);
+		if (!helper_buffer->pages[pg])
+			goto err1;
+	}
+
+	/* create the dmabuf */
+	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err1;
+	}
+
+	helper_buffer->dmabuf = dmabuf;
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+err1:
+	while (pg > 0)
+		__free_page(helper_buffer->pages[--pg]);
+	kfree(helper_buffer->pages);
+err0:
+	kfree(helper_buffer);
+
+	return -ENOMEM;
+}
+
+static const struct dma_heap_ops system_heap_ops = {
+	.allocate = system_heap_allocate,
+};
+
+static int system_heap_create(void)
+{
+	struct dma_heap_export_info exp_info;
+	int ret = 0;
+
+	exp_info.name = "system_heap";
+	exp_info.ops = &system_heap_ops;
+	exp_info.priv = NULL;
+
+	sys_heap = dma_heap_add(&exp_info);
+	if (IS_ERR(sys_heap))
+		ret = PTR_ERR(sys_heap);
+
+	return ret;
+}
+device_initcall(system_heap_create);
-- 
2.17.1



* [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (2 preceding siblings ...)
  2019-09-06 18:47 ` [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
@ 2019-09-06 18:47 ` John Stultz
  2019-09-23 22:10   ` Brian Starkey
  2019-09-06 18:47 ` [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test John Stultz
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Laura Abbott, Benjamin Gaignard, Sumit Semwal,
	Liam Mark, Pratik Patel, Brian Starkey, Vincent Donnefort,
	Sudipto Paul, Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

This adds a CMA heap, which allows userspace to allocate
a dma-buf of contiguous memory out of a CMA region.

This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
  Benjamin Gaignard, Laura Abbott, and others!

Cc: Laura Abbott <labbott@redhat.com>
Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
v3:
* Switch to inline function for to_cma_heap()
* Minor cleanups suggested by Brian
* Fold in new registration style from Andrew
* Folded in changes from Andrew to use simplified page list
  from the heap helpers
v4:
* Use the fd_flags when creating dmabuf fd (Suggested by
  Benjamin)
* Use precalculated pagecount (Suggested by Andrew)
v6:
* Changed variable names to improve clarity, as suggested
  by Brian
v7:
* Use newly lower-cased init_heap_helper_buffer helper
* Use new dmabuf export helper
v8:
* Make struct dma_heap_ops const (Suggested by Christoph)
* Condense dma_heap_buffer and heap_helper_buffer (suggested by
  Christoph)
* Checkpatch whitespace fixups
---
 drivers/dma-buf/heaps/Kconfig    |   8 ++
 drivers/dma-buf/heaps/Makefile   |   1 +
 drivers/dma-buf/heaps/cma_heap.c | 164 +++++++++++++++++++++++++++++++
 3 files changed, 173 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/cma_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 205052744169..a5eef06c4226 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
 	help
 	  Choose this option to enable the system dmabuf heap. The system heap
 	  is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+	bool "DMA-BUF CMA Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable the dma-buf CMA heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA). If your system has these
+	  regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808eca2581..6e54cdec3da0 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-y					+= heap-helpers.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 000000000000..b8f67b7c6a5c
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: <benjamin.gaignard@linaro.org> for ST-Ericsson.
+ */
+
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/cma.h>
+#include <linux/scatterlist.h>
+#include <linux/highmem.h>
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+	struct dma_heap *heap;
+	struct cma *cma;
+};
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap);
+	unsigned long nr_pages = buffer->pagecount;
+	struct page *cma_pages = buffer->priv_virt;
+
+	/* free page list */
+	kfree(buffer->pages);
+	/* release memory */
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+	kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+			     unsigned long len,
+			     unsigned long fd_flags,
+			     unsigned long heap_flags)
+{
+	struct cma_heap *cma_heap = dma_heap_get_data(heap);
+	struct heap_helper_buffer *helper_buffer;
+	struct page *cma_pages;
+	size_t size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
+	struct dma_buf *dmabuf;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	if (align > CONFIG_CMA_ALIGNMENT)
+		align = CONFIG_CMA_ALIGNMENT;
+
+	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+	if (!helper_buffer)
+		return -ENOMEM;
+
+	init_heap_helper_buffer(helper_buffer, cma_heap_free);
+	helper_buffer->flags = heap_flags;
+	helper_buffer->heap = heap;
+	helper_buffer->size = len;
+
+	cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+	if (!cma_pages)
+		goto free_buf;
+
+	if (PageHighMem(cma_pages)) {
+		unsigned long nr_clear_pages = nr_pages;
+		struct page *page = cma_pages;
+
+		while (nr_clear_pages > 0) {
+			void *vaddr = kmap_atomic(page);
+
+			memset(vaddr, 0, PAGE_SIZE);
+			kunmap_atomic(vaddr);
+			page++;
+			nr_clear_pages--;
+		}
+	} else {
+		memset(page_address(cma_pages), 0, size);
+	}
+
+	helper_buffer->pagecount = nr_pages;
+	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+					     sizeof(*helper_buffer->pages),
+					     GFP_KERNEL);
+	if (!helper_buffer->pages) {
+		ret = -ENOMEM;
+		goto free_cma;
+	}
+
+	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+		helper_buffer->pages[pg] = &cma_pages[pg];
+		if (!helper_buffer->pages[pg])
+			goto free_pages;
+	}
+
+	/* create the dmabuf */
+	dmabuf = heap_helper_export_dmabuf(helper_buffer, fd_flags);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_pages;
+	}
+
+	helper_buffer->dmabuf = dmabuf;
+	helper_buffer->priv_virt = cma_pages;
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		/* just return, as put will call release and that will free */
+		return ret;
+	}
+
+	return ret;
+
+free_pages:
+	kfree(helper_buffer->pages);
+free_cma:
+	cma_release(cma_heap->cma, cma_pages, nr_pages);
+free_buf:
+	kfree(helper_buffer);
+	return ret;
+}
+
+static const struct dma_heap_ops cma_heap_ops = {
+	.allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heap(struct cma *cma, void *data)
+{
+	struct cma_heap *cma_heap;
+	struct dma_heap_export_info exp_info;
+
+	cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+	if (!cma_heap)
+		return -ENOMEM;
+	cma_heap->cma = cma;
+
+	exp_info.name = cma_get_name(cma);
+	exp_info.ops = &cma_heap_ops;
+	exp_info.priv = cma_heap;
+
+	cma_heap->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(cma_heap->heap)) {
+		int ret = PTR_ERR(cma_heap->heap);
+
+		kfree(cma_heap);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int add_cma_heaps(void)
+{
+	cma_for_each_area(__add_cma_heap, NULL);
+	return 0;
+}
+device_initcall(add_cma_heaps);
-- 
2.17.1



* [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (3 preceding siblings ...)
  2019-09-06 18:47 ` [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA " John Stultz
@ 2019-09-06 18:47 ` John Stultz
  2019-09-23 22:11   ` Brian Starkey
  2019-09-19 16:51 ` [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) Sumit Semwal
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-06 18:47 UTC (permalink / raw)
  To: lkml
  Cc: John Stultz, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
	Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

Add very trivial allocation and import test for dma-heaps,
utilizing the vgem driver as a test importer.

A good chunk of this code taken from:
  tools/testing/selftests/android/ion/ionmap_test.c
  Originally by Laura Abbott <labbott@redhat.com>

Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Pratik Patel <pratikp@codeaurora.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
Cc: Sudipto Paul <Sudipto.Paul@arm.com>
Cc: Andrew F. Davis <afd@ti.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Chenbo Feng <fengc@google.com>
Cc: Alistair Strachan <astrachan@google.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
v2:
* Switched to use reworked dma-heap apis
v3:
* Add simple mmap
* Utilize dma-buf testdev to test importing
v4:
* Rework to use vgem
* Pass in fd_flags to match interface changes
* Skip . and .. dirs
v6:
* Number of style/cleanups suggested by Brian
v7:
* Whitespace fixup for checkpatch
v8:
* More checkpatch whitespace fixups
---
 tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
 .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 ++++++++++++++++++
 2 files changed, 239 insertions(+)
 create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
 create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c

diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
new file mode 100644
index 000000000000..8c4c36e2972d
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
+#LDLIBS += -lrt -lpthread -lm
+
+# these are all "safe" tests that don't modify
+# system time or require escalated privileges
+TEST_GEN_PROGS = dmabuf-heap
+
+include ../lib.mk
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
new file mode 100644
index 000000000000..e439d6cf3d81
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -0,0 +1,230 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include <linux/dma-buf.h>
+#include <drm/drm.h>
+
+#include "../../../../include/uapi/linux/dma-heap.h"
+
+#define DEVPATH "/dev/dma_heap"
+
+static int check_vgem(int fd)
+{
+	drm_version_t version = { 0 };
+	char name[5];
+	int ret;
+
+	version.name_len = 4;
+	version.name = name;
+
+	ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
+	if (ret)
+		return 0;
+
+	return !strcmp(name, "vgem");
+}
+
+static int open_vgem(void)
+{
+	int i, fd;
+	const char *drmstr = "/dev/dri/card";
+
+	fd = -1;
+	for (i = 0; i < 16; i++) {
+		char name[80];
+
+		sprintf(name, "%s%u", drmstr, i);
+
+		fd = open(name, O_RDWR);
+		if (fd < 0)
+			continue;
+
+		if (!check_vgem(fd)) {
+			close(fd);
+			continue;
+		} else {
+			break;
+		}
+	}
+	return fd;
+}
+
+static int import_vgem_fd(int vgem_fd, int dma_buf_fd, uint32_t *handle)
+{
+	struct drm_prime_handle import_handle = {
+		.fd = dma_buf_fd,
+		.flags = 0,
+		.handle = 0,
+	 };
+	int ret;
+
+	ret = ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &import_handle);
+	if (ret == 0)
+		*handle = import_handle.handle;
+	return ret;
+}
+
+static void close_handle(int vgem_fd, uint32_t handle)
+{
+	struct drm_gem_close close = {
+		.handle = handle,
+	};
+
+	ioctl(vgem_fd, DRM_IOCTL_GEM_CLOSE, &close);
+}
+
+static int dmabuf_heap_open(char *name)
+{
+	int ret, fd;
+	char buf[256];
+
+	ret = sprintf(buf, "%s/%s", DEVPATH, name);
+	if (ret < 0) {
+		printf("sprintf failed!\n");
+		return ret;
+	}
+
+	fd = open(buf, O_RDWR);
+	if (fd < 0)
+		printf("open %s failed!\n", buf);
+	return fd;
+}
+
+static int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags,
+			     int *dmabuf_fd)
+{
+	struct dma_heap_allocation_data data = {
+		.len = len,
+		.fd_flags = O_RDWR | O_CLOEXEC,
+		.heap_flags = flags,
+	};
+	int ret;
+
+	if (!dmabuf_fd)
+		return -EINVAL;
+
+	ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
+	if (ret < 0)
+		return ret;
+	*dmabuf_fd = (int)data.fd;
+	return ret;
+}
+
+static void dmabuf_sync(int fd, int start_stop)
+{
+	struct dma_buf_sync sync = {
+		.flags = start_stop | DMA_BUF_SYNC_RW,
+	};
+	int ret;
+
+	ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
+	if (ret)
+		printf("sync failed %d\n", errno);
+}
+
+#define ONE_MEG (1024 * 1024)
+
+static void do_test(char *heap_name)
+{
+	int heap_fd = -1, dmabuf_fd = -1, importer_fd = -1;
+	uint32_t handle = 0;
+	void *p = NULL;
+	int ret;
+
+	printf("Testing heap: %s\n", heap_name);
+
+	heap_fd = dmabuf_heap_open(heap_name);
+	if (heap_fd < 0)
+		return;
+
+	printf("Allocating 1 MEG\n");
+	ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
+	if (ret) {
+		printf("Allocation Failed!\n");
+		goto out;
+	}
+	/* mmap and write a simple pattern */
+	p = mmap(NULL,
+		 ONE_MEG,
+		 PROT_READ | PROT_WRITE,
+		 MAP_SHARED,
+		 dmabuf_fd,
+		 0);
+	if (p == MAP_FAILED) {
+		printf("mmap() failed: %m\n");
+		goto out;
+	}
+	printf("mmap passed\n");
+
+	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
+
+	memset(p, 1, ONE_MEG / 2);
+	memset((char *)p + ONE_MEG / 2, 0, ONE_MEG / 2);
+	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
+
+	importer_fd = open_vgem();
+	if (importer_fd < 0) {
+		ret = importer_fd;
+		printf("Failed to open vgem\n");
+		goto out;
+	}
+
+	ret = import_vgem_fd(importer_fd, dmabuf_fd, &handle);
+	if (ret < 0) {
+		printf("Failed to import buffer\n");
+		goto out;
+	}
+	printf("import passed\n");
+
+	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
+	memset(p, 0xff, ONE_MEG);
+	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
+	printf("syncs passed\n");
+
+	close_handle(importer_fd, handle);
+
+out:
+	if (p)
+		munmap(p, ONE_MEG);
+	if (importer_fd >= 0)
+		close(importer_fd);
+	if (dmabuf_fd >= 0)
+		close(dmabuf_fd);
+	if (heap_fd >= 0)
+		close(heap_fd);
+}
+
+int main(void)
+{
+	DIR *d;
+	struct dirent *dir;
+
+	d = opendir(DEVPATH);
+	if (!d) {
+		printf("No %s directory?\n", DEVPATH);
+		return -1;
+	}
+
+	while ((dir = readdir(d)) != NULL) {
+		if (!strncmp(dir->d_name, ".", 2))
+			continue;
+		if (!strncmp(dir->d_name, "..", 3))
+			continue;
+
+		do_test(dir->d_name);
+	}
+	closedir(d);
+
+	return 0;
+}
-- 
2.17.1



* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (4 preceding siblings ...)
  2019-09-06 18:47 ` [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test John Stultz
@ 2019-09-19 16:51 ` Sumit Semwal
  2019-09-24 16:22   ` Ayan Halder
  2019-09-30 13:40 ` Laura Abbott
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 39+ messages in thread
From: Sumit Semwal @ 2019-09-19 16:51 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Liam Mark, Pratik Patel,
	Brian Starkey, Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, DRI mailing list

Hello Christoph, everyone,

On Sat, 7 Sep 2019 at 00:17, John Stultz <john.stultz@linaro.org> wrote:
>
> Here is yet another pass at the dma-buf heaps patchset Andrew
> and I have been working on which tries to destage a fair chunk
> of ION functionality.
>
> The patchset implements per-heap devices which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> heap.
>
> The interface is similar, but much simpler than ION's, only
> providing an ALLOC ioctl.
>
> Also, I've provided relatively simple system and cma heaps.
>
> I've booted and tested these patches with AOSP on the HiKey960
> using the kernel tree here:
>   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
>
> And the userspace changes here:
>   https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
>
> Compared to ION, this patchset is missing the system-contig,
> carveout and chunk heaps, as I don't have a device that uses
> those, so I'm unable to do much useful validation there.
> Additionally we have no upstream users of chunk or carveout,
> and the system-contig has been deprecated in the common/android-*
> kernels, so this should be ok.
>
> I've also removed the stats accounting, since any such accounting
> should be implemented by dma-buf core or the heaps themselves.
>
> Most of the changes in this revision are addressing the more
> concrete feedback from Christoph (many thanks!). Though I'm not
> sure if some of the less specific feedback was completely resolved
> in discussion last time around. Please let me know!

It looks like most of the feedback has been taken care of. If there's
no more objection to this series, I'd like to merge it in soon.

If there are any more review comments, may I request you to please provide them?

>
> New in v8:
> * Make struct dma_heap_ops consts (Suggested by Christoph)
> * Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
>   (suggested by Christoph)
> * Condense dma_heap_buffer and heap_helper_buffer (suggested by
>   Christoph)
> * Get rid of needless struct system_heap (suggested by Christoph)
> * Fix indentation by using shorter argument names (suggested by
>   Christoph)
> * Remove unused private_flags value
> * Add forgotten include file to fix build issue on x86
> * Checkpatch whitespace fixups
>
> Thoughts and feedback would be greatly appreciated!
>
> thanks
> -john
Best,
Sumit.
>
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
>
>
> Andrew F. Davis (1):
>   dma-buf: Add dma-buf heaps framework
>
> John Stultz (4):
>   dma-buf: heaps: Add heap helpers
>   dma-buf: heaps: Add system heap to dmabuf heaps
>   dma-buf: heaps: Add CMA heap to dmabuf heaps
>   kselftests: Add dma-heap test
>
>  MAINTAINERS                                   |  18 ++
>  drivers/dma-buf/Kconfig                       |  11 +
>  drivers/dma-buf/Makefile                      |   2 +
>  drivers/dma-buf/dma-heap.c                    | 250 ++++++++++++++++
>  drivers/dma-buf/heaps/Kconfig                 |  14 +
>  drivers/dma-buf/heaps/Makefile                |   4 +
>  drivers/dma-buf/heaps/cma_heap.c              | 164 +++++++++++
>  drivers/dma-buf/heaps/heap-helpers.c          | 269 ++++++++++++++++++
>  drivers/dma-buf/heaps/heap-helpers.h          |  55 ++++
>  drivers/dma-buf/heaps/system_heap.c           | 122 ++++++++
>  include/linux/dma-heap.h                      |  59 ++++
>  include/uapi/linux/dma-heap.h                 |  55 ++++
>  tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
>  .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 +++++++++++++++
>  14 files changed, 1262 insertions(+)
>  create mode 100644 drivers/dma-buf/dma-heap.c
>  create mode 100644 drivers/dma-buf/heaps/Kconfig
>  create mode 100644 drivers/dma-buf/heaps/Makefile
>  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>  create mode 100644 drivers/dma-buf/heaps/system_heap.c
>  create mode 100644 include/linux/dma-heap.h
>  create mode 100644 include/uapi/linux/dma-heap.h
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>
> --
> 2.17.1
>


-- 
Thanks and regards,

Sumit Semwal
Linaro Consumer Group - Kernel Team Lead
Linaro.org │ Open source software for ARM SoCs


* Re: [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework
  2019-09-06 18:47 ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
@ 2019-09-23 22:08   ` Brian Starkey
  2019-09-24 17:10     ` John Stultz
  0 siblings, 1 reply; 39+ messages in thread
From: Brian Starkey @ 2019-09-23 22:08 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard,
	Sumit Semwal, Liam Mark, Pratik Patel, Vincent Donnefort,
	Sudipto Paul, Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

Hi John,

On Fri, Sep 06, 2019 at 06:47:08PM +0000, John Stultz wrote:
> From: "Andrew F. Davis" <afd@ti.com>
> 
> This framework allows a unified userspace interface for dma-buf
> exporters, allowing userland to allocate specific types of memory
> for use in dma-buf sharing.
> 
> Each heap is given its own device node, which a user can allocate
> a dma-buf fd from using the DMA_HEAP_IOC_ALLOC.
> 
> This code is an evolution of the Android ION implementation,
> and a big thanks is due to its authors/maintainers over time
> for their effort:
>   Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
>   Laura Abbott, and many other contributors!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Signed-off-by: Andrew F. Davis <afd@ti.com>
> Signed-off-by: John Stultz <john.stultz@linaro.org>

One minuscule nit from me below, but whether you change it or not, you
can add my r-b:

Reviewed-by: Brian Starkey <brian.starkey@arm.com>

Thanks for pushing this through!

-Brian

> ---

...

> +
> +	dev_ret = device_create(dma_heap_class,
> +				NULL,
> +				heap->heap_devt,
> +				NULL,
> +				heap->name);
> +	if (IS_ERR(dev_ret)) {
> +		pr_err("dma_heap: Unable to create device\n");
> +		err_ret = (struct dma_heap *)dev_ret;

Tiny nit: ERR_CAST() would be more obvious for me here.



* Re: [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers
  2019-09-06 18:47 ` [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers John Stultz
@ 2019-09-23 22:08   ` Brian Starkey
  0 siblings, 0 replies; 39+ messages in thread
From: Brian Starkey @ 2019-09-23 22:08 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

Hi John,

On Fri, Sep 06, 2019 at 06:47:09PM +0000, John Stultz wrote:
> Add generic helper dmabuf ops for dma heaps, so we can reduce
> the amount of duplicative code for the exported dmabufs.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Signed-off-by: John Stultz <john.stultz@linaro.org>

Two minor things below.

> ---
> v2:
> * Removed cache management performance hack that I had
>   accidentally folded in.
> * Removed stats code that was in helpers
> * Lots of checkpatch cleanups
> v3:
> * Uninline INIT_HEAP_HELPER_BUFFER (suggested by Christoph)
> * Switch to WARN on buffer destroy failure (suggested by Brian)
> * buffer->kmap_cnt decrementing cleanup (suggested by Christoph)
> * Extra buffer->vaddr checking in dma_heap_dma_buf_kmap
>   (suggested by Brian)
> * Switch to_helper_buffer from macro to inline function
>   (suggested by Benjamin)
> * Rename kmap->vmap (folded in from Andrew)
> * Use vmap for vmapping - not begin_cpu_access (folded in from
>   Andrew)
> * Drop kmap for now, as its optional (folded in from Andrew)
> * Fold dma_heap_map_user into the single caller (folded in from
>   Andrew)
> * Folded in patch from Andrew to track page list per heap not
>   sglist, which simplifies the tracking logic
> v4:
> * Moved dma-heap.h change out to previous patch
> v6:
> * Minor cleanups and typo fixes suggested by Brian
> v7:
> * Removed stray ;
> * Make init_heap_helper_buffer lowercase, as suggested by Christoph
> * Add dmabuf export helper to reduce boilerplate code
> v8:
> * Remove unused private_flags value
> * Condense dma_heap_buffer and heap_helper_buffer (suggested by
>   Christoph)
> * Fix indentation by using shorter argument names (suggested by
>   Christoph)
> * Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
>   (suggested by Christoph)
> * Checkpatch whitespace fixups
> ---

...

> +
> +static void *dma_heap_buffer_vmap_get(struct heap_helper_buffer *buffer)
> +{
> +	void *vaddr;
> +
> +	if (buffer->vmap_cnt) {
> +		buffer->vmap_cnt++;
> +		return buffer->vaddr;
> +	}
> +	vaddr = dma_heap_map_kernel(buffer);
> +	if (WARN_ONCE(!vaddr,
> +		      "heap->ops->map_kernel should return ERR_PTR on error"))

Looks like the message is out-of-date here.

...

> +
> +/**
> + * struct heap_helper_buffer - helper buffer metadata
> + * @heap:		back pointer to the heap the buffer came from
> + * @dmabuf:		backing dma-buf for this buffer
> + * @size:		size of the buffer
> + * @flags:		buffer specific flags
> + * @priv_virt		pointer to heap specific private value
> + * @lock		mutex to protect the data in this structure
> + * @vmap_cnt		count of vmap references on the buffer
> + * @vaddr		vmap'ed virtual address
> + * @pagecount		number of pages in the buffer
> + * @pages		list of page pointers
> + * @attachment		list of device attachments

s/attachment/attachments/

With those fixed, feel free to add:

Reviewed-by: Brian Starkey <brian.starkey@arm.com>

Thanks,
-Brian



* Re: [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps
  2019-09-06 18:47 ` [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
@ 2019-09-23 22:09   ` Brian Starkey
  0 siblings, 0 replies; 39+ messages in thread
From: Brian Starkey @ 2019-09-23 22:09 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

On Fri, Sep 06, 2019 at 06:47:10PM +0000, John Stultz wrote:
> This patch adds system heap to the dma-buf heaps framework.
> 
> This allows applications to get a page-allocator backed dma-buf
> for non-contiguous memory.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Signed-off-by: John Stultz <john.stultz@linaro.org>

LGTM:

Reviewed-by: Brian Starkey <brian.starkey@arm.com>


* Re: [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
  2019-09-06 18:47 ` [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA " John Stultz
@ 2019-09-23 22:10   ` Brian Starkey
  0 siblings, 0 replies; 39+ messages in thread
From: Brian Starkey @ 2019-09-23 22:10 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

Hi John,

I spotted one thing below which might be harmless, but best to check.

On Fri, Sep 06, 2019 at 06:47:11PM +0000, John Stultz wrote:
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.
> 
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
>   Benjamin Gaignard, Laura Abbott, and others!
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> v3:
> * Switch to inline function for to_cma_heap()
> * Minor cleanups suggested by Brian
> * Fold in new registration style from Andrew
> * Folded in changes from Andrew to use simplified page list
>   from the heap helpers
> v4:
> * Use the fd_flags when creating dmabuf fd (Suggested by
>   Benjamin)
> * Use precalculated pagecount (Suggested by Andrew)
> v6:
> * Changed variable names to improve clarity, as suggested
>   by Brian
> v7:
> * Use newly lower-cased init_heap_helper_buffer helper
> * Use new dmabuf export helper
> v8:
> * Make struct dma_heap_ops const (Suggested by Christoph)
> * Condense dma_heap_buffer and heap_helper_buffer (suggested by
>   Christoph)
> * Checkpatch whitespace fixups
> ---

...

> +
> +/* dmabuf heap CMA operations functions */
> +static int cma_heap_allocate(struct dma_heap *heap,
> +			     unsigned long len,
> +			     unsigned long fd_flags,
> +			     unsigned long heap_flags)
> +{
> +	struct cma_heap *cma_heap = dma_heap_get_data(heap);
> +	struct heap_helper_buffer *helper_buffer;
> +	struct page *cma_pages;
> +	size_t size = PAGE_ALIGN(len);
> +	unsigned long nr_pages = size >> PAGE_SHIFT;
> +	unsigned long align = get_order(size);
> +	struct dma_buf *dmabuf;
> +	int ret = -ENOMEM;
> +	pgoff_t pg;
> +
> +	if (align > CONFIG_CMA_ALIGNMENT)
> +		align = CONFIG_CMA_ALIGNMENT;
> +
> +	helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> +	if (!helper_buffer)
> +		return -ENOMEM;
> +
> +	init_heap_helper_buffer(helper_buffer, cma_heap_free);
> +	helper_buffer->flags = heap_flags;
> +	helper_buffer->heap = heap;
> +	helper_buffer->size = len;
> +
> +	cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> +	if (!cma_pages)
> +		goto free_buf;
> +
> +	if (PageHighMem(cma_pages)) {
> +		unsigned long nr_clear_pages = nr_pages;
> +		struct page *page = cma_pages;
> +
> +		while (nr_clear_pages > 0) {
> +			void *vaddr = kmap_atomic(page);
> +
> +			memset(vaddr, 0, PAGE_SIZE);
> +			kunmap_atomic(vaddr);
> +			page++;
> +			nr_clear_pages--;
> +		}
> +	} else {
> +		memset(page_address(cma_pages), 0, size);
> +	}
> +
> +	helper_buffer->pagecount = nr_pages;
> +	helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
> +					     sizeof(*helper_buffer->pages),
> +					     GFP_KERNEL);
> +	if (!helper_buffer->pages) {
> +		ret = -ENOMEM;
> +		goto free_cma;
> +	}
> +
> +	for (pg = 0; pg < helper_buffer->pagecount; pg++) {
> +		helper_buffer->pages[pg] = &cma_pages[pg];
> +		if (!helper_buffer->pages[pg])

Is this ever really possible? If cma_pages is non-NULL (which you
check earlier), then this can only be NULL if the pointer arithmetic
overflows, right?

If it's just redundant, then you could remove it (and in that case add
my r-b). But maybe you meant to check something else?

Cheers,
-Brian


* Re: [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test
  2019-09-06 18:47 ` [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test John Stultz
@ 2019-09-23 22:11   ` Brian Starkey
  2019-09-26 21:36     ` John Stultz
  0 siblings, 1 reply; 39+ messages in thread
From: Brian Starkey @ 2019-09-23 22:11 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Benjamin Gaignard, Sumit Semwal, Liam Mark, Pratik Patel,
	Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

Hi John,

I didn't see any response about using the test harness. Did you decide
against it?

On Fri, Sep 06, 2019 at 06:47:12PM +0000, John Stultz wrote:
> Add very trivial allocation and import test for dma-heaps,
> utilizing the vgem driver as a test importer.
> 
> A good chunk of this code taken from:
>   tools/testing/selftests/android/ion/ionmap_test.c
>   Originally by Laura Abbott <labbott@redhat.com>
> 
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Signed-off-by: John Stultz <john.stultz@linaro.org>
> ---
> v2:
> * Switched to use reworked dma-heap apis
> v3:
> * Add simple mmap
> * Utilize dma-buf testdev to test importing
> v4:
> * Rework to use vgem
> * Pass in fd_flags to match interface changes
> * Skip . and .. dirs
> v6:
> * Number of style/cleanups suggested by Brian
> v7:
> * Whitespace fixup for checkpatch
> v8:
> * More checkpatch whitespace fixups
> ---
>  tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
>  .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 ++++++++++++++++++
>  2 files changed, 239 insertions(+)
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> 
> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
> new file mode 100644
> index 000000000000..8c4c36e2972d
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
> @@ -0,0 +1,9 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
> +#LDLIBS += -lrt -lpthread -lm
> +
> +# these are all "safe" tests that don't modify
> +# system time or require escalated privileges
> +TEST_GEN_PROGS = dmabuf-heap
> +
> +include ../lib.mk
> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> new file mode 100644
> index 000000000000..e439d6cf3d81
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> @@ -0,0 +1,230 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <dirent.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/types.h>
> +
> +#include <linux/dma-buf.h>
> +#include <drm/drm.h>
> +
> +#include "../../../../include/uapi/linux/dma-heap.h"
> +
> +#define DEVPATH "/dev/dma_heap"
> +
> +static int check_vgem(int fd)
> +{
> +	drm_version_t version = { 0 };
> +	char name[5];
> +	int ret;
> +
> +	version.name_len = 4;
> +	version.name = name;
> +
> +	ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
> +	if (ret)
> +		return 0;
> +
> +	return !strcmp(name, "vgem");
> +}
> +
> +static int open_vgem(void)
> +{
> +	int i, fd;
> +	const char *drmstr = "/dev/dri/card";
> +
> +	fd = -1;
> +	for (i = 0; i < 16; i++) {
> +		char name[80];
> +
> +		sprintf(name, "%s%u", drmstr, i);
> +
> +		fd = open(name, O_RDWR);
> +		if (fd < 0)
> +			continue;
> +
> +		if (!check_vgem(fd)) {
> +			close(fd);

I didn't spot this last time, but there's an (unlikely) error scenario
here if there's >= 16 DRM devices and none of them are vgem, then
you'll return a stale fd.

> +			continue;
> +		} else {
> +			break;
> +		}
> +	}
> +	return fd;
> +}
> +
> +static int import_vgem_fd(int vgem_fd, int dma_buf_fd, uint32_t *handle)
> +{
> +	struct drm_prime_handle import_handle = {
> +		.fd = dma_buf_fd,
> +		.flags = 0,
> +		.handle = 0,
> +	 };
> +	int ret;
> +
> +	ret = ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &import_handle);
> +	if (ret == 0)
> +		*handle = import_handle.handle;
> +	return ret;
> +}
> +
> +static void close_handle(int vgem_fd, uint32_t handle)
> +{
> +	struct drm_gem_close close = {
> +		.handle = handle,
> +	};
> +
> +	ioctl(vgem_fd, DRM_IOCTL_GEM_CLOSE, &close);
> +}
> +
> +static int dmabuf_heap_open(char *name)
> +{
> +	int ret, fd;
> +	char buf[256];
> +
> +	ret = sprintf(buf, "%s/%s", DEVPATH, name);

snprintf(), just because why not?

> +	if (ret < 0) {
> +		printf("sprintf failed!\n");
> +		return ret;
> +	}
> +
> +	fd = open(buf, O_RDWR);
> +	if (fd < 0)
> +		printf("open %s failed!\n", buf);
> +	return fd;
> +}
> +
> +static int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags,
> +			     int *dmabuf_fd)
> +{
> +	struct dma_heap_allocation_data data = {
> +		.len = len,
> +		.fd_flags = O_RDWR | O_CLOEXEC,
> +		.heap_flags = flags,
> +	};
> +	int ret;
> +
> +	if (!dmabuf_fd)
> +		return -EINVAL;
> +
> +	ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
> +	if (ret < 0)
> +		return ret;
> +	*dmabuf_fd = (int)data.fd;
> +	return ret;
> +}
> +
> +static void dmabuf_sync(int fd, int start_stop)
> +{
> +	struct dma_buf_sync sync = {
> +		.flags = start_stop | DMA_BUF_SYNC_RW,
> +	};
> +	int ret;
> +
> +	ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
> +	if (ret)
> +		printf("sync failed %d\n", errno);
> +}
> +
> +#define ONE_MEG (1024 * 1024)
> +
> +static void do_test(char *heap_name)
> +{
> +	int heap_fd = -1, dmabuf_fd = -1, importer_fd = -1;
> +	uint32_t handle = 0;
> +	void *p = NULL;
> +	int ret;
> +
> +	printf("Testing heap: %s\n", heap_name);
> +
> +	heap_fd = dmabuf_heap_open(heap_name);
> +	if (heap_fd < 0)
> +		return;
> +
> +	printf("Allocating 1 MEG\n");
> +	ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> +	if (ret) {
> +		printf("Allocation Failed!\n");
> +		goto out;
> +	}
> +	/* mmap and write a simple pattern */
> +	p = mmap(NULL,
> +		 ONE_MEG,
> +		 PROT_READ | PROT_WRITE,
> +		 MAP_SHARED,
> +		 dmabuf_fd,
> +		 0);
> +	if (p == MAP_FAILED) {
> +		printf("mmap() failed: %m\n");
> +		goto out;
> +	}
> +	printf("mmap passed\n");
> +
> +	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
> +
> +	memset(p, 1, ONE_MEG / 2);
> +	memset((char *)p + ONE_MEG / 2, 0, ONE_MEG / 2);
> +	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
> +
> +	importer_fd = open_vgem();
> +	if (importer_fd < 0) {
> +		ret = importer_fd;
> +		printf("Failed to open vgem\n");
> +		goto out;
> +	}
> +
> +	ret = import_vgem_fd(importer_fd, dmabuf_fd, &handle);
> +	if (ret < 0) {
> +		printf("Failed to import buffer\n");
> +		goto out;
> +	}
> +	printf("import passed\n");
> +
> +	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
> +	memset(p, 0xff, ONE_MEG);
> +	dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
> +	printf("syncs passed\n");
> +
> +	close_handle(importer_fd, handle);
> +
> +out:
> +	if (p)
> +		munmap(p, ONE_MEG);
> +	if (importer_fd >= 0)
> +		close(importer_fd);
> +	if (dmabuf_fd >= 0)
> +		close(dmabuf_fd);
> +	if (heap_fd >= 0)
> +		close(heap_fd);
> +}
> +
> +int main(void)
> +{
> +	DIR *d;
> +	struct dirent *dir;
> +
> +	d = opendir(DEVPATH);
> +	if (!d) {
> +		printf("No %s directory?\n", DEVPATH);
> +		return -1;
> +	}
> +
> +	while ((dir = readdir(d)) != NULL) {
> +		if (!strncmp(dir->d_name, ".", 2))
> +			continue;
> +		if (!strncmp(dir->d_name, "..", 3))
> +			continue;
> +
> +		do_test(dir->d_name);

As far as I understand it, if main() always returns zero, this test will
always be indicated as a "pass" - shouldn't there be at least some
failure scenarios?

Cheers,
-Brian

> +	}
> +	closedir(d);
> +
> +	return 0;
> +}
> -- 
> 2.17.1
> 


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-09-19 16:51 ` [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) Sumit Semwal
@ 2019-09-24 16:22   ` Ayan Halder
  2019-09-24 16:28     ` John Stultz
  2019-10-09 17:37     ` Ayan Halder
  0 siblings, 2 replies; 39+ messages in thread
From: Ayan Halder @ 2019-09-24 16:22 UTC (permalink / raw)
  To: Sumit Semwal
  Cc: John Stultz, Sudipto Paul, Vincent Donnefort, Chenbo Feng, lkml,
	Liam Mark, Andrew F . Davis, Christoph Hellwig,
	Alistair Strachan, DRI mailing list, Hridya Valsaraju,
	Pratik Patel, nd

On Thu, Sep 19, 2019 at 10:21:52PM +0530, Sumit Semwal wrote:
> Hello Christoph, everyone,
> 
> On Sat, 7 Sep 2019 at 00:17, John Stultz <john.stultz@linaro.org> wrote:
> >
> > Here is yet another pass at the dma-buf heaps patchset Andrew
> > and I have been working on which tries to destage a fair chunk
> > of ION functionality.
> >
> > The patchset implements per-heap devices which can be opened
> > directly and then an ioctl is used to allocate a dmabuf from the
> > heap.
> >
> > The interface is similar, but much simpler than ION's, only
> > providing an ALLOC ioctl.
> >
> > Also, I've provided relatively simple system and cma heaps.
> >
> > I've booted and tested these patches with AOSP on the HiKey960
> > using the kernel tree here:
> >   https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> >
> > And the userspace changes here:
> >   https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
> >
> > Compared to ION, this patchset is missing the system-contig,
> > carveout and chunk heaps, as I don't have a device that uses
> > those, so I'm unable to do much useful validation there.
> > Additionally we have no upstream users of chunk or carveout,
> > and the system-contig has been deprecated in the common/android-*
> > kernels, so this should be ok.
> >
> > I've also removed the stats accounting, since any such accounting
> > should be implemented by dma-buf core or the heaps themselves.
> >
> > Most of the changes in this revision are addressing the more
> > concrete feedback from Christoph (many thanks!). Though I'm not
> > sure if some of the less specific feedback was completely resolved
> > in discussion last time around. Please let me know!
> 
> It looks like most of the feedback has been taken care of. If there's
> no more objection to this series, I'd like to merge it in soon.
> 
> If there are any more review comments, may I request you to please provide them?

I tested these patches using our internal test suite with Arm,komeda
driver and the following node in dts

        reserved-memory {
                #address-cells = <0x2>;
                #size-cells = <0x2>;
                ranges;

                framebuffer@60000000 {
                        compatible = "shared-dma-pool";
                        linux,cma-default;
                        reg = <0x0 0x60000000 0x0 0x8000000>;
                };
        }

The tests went fine. Our tests allocate framebuffers of different
sizes, post them on screen, and the driver writes back to one of the
framebuffers. I have not tested any performance, latency, or
cache management related behavior. So, if that looks appropriate, feel
free to add:
Tested-by: Ayan Kumar Halder <ayan.halder@arm.com>

Are you planning to write some igt tests for it?
> 
> >
> > New in v8:
> > * Make struct dma_heap_ops consts (Suggested by Christoph)
> > * Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
> >   (suggested by Christoph)
> > * Condense dma_heap_buffer and heap_helper_buffer (suggested by
> >   Christoph)
> > * Get rid of needless struct system_heap (suggested by Christoph)
> > * Fix indentation by using shorter argument names (suggested by
> >   Christoph)
> > * Remove unused private_flags value
> > * Add forgotten include file to fix build issue on x86
> > * Checkpatch whitespace fixups
> >
> > Thoughts and feedback would be greatly appreciated!
> >
> > thanks
> > -john
> Best,
> Sumit.
> >
> > Cc: Laura Abbott <labbott@redhat.com>
> > Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>
> > Cc: Liam Mark <lmark@codeaurora.org>
> > Cc: Pratik Patel <pratikp@codeaurora.org>
> > Cc: Brian Starkey <Brian.Starkey@arm.com>
> > Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> > Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> > Cc: Andrew F. Davis <afd@ti.com>
> > Cc: Christoph Hellwig <hch@infradead.org>
> > Cc: Chenbo Feng <fengc@google.com>
> > Cc: Alistair Strachan <astrachan@google.com>
> > Cc: Hridya Valsaraju <hridya@google.com>
> > Cc: dri-devel@lists.freedesktop.org
> >
> >
> > Andrew F. Davis (1):
> >   dma-buf: Add dma-buf heaps framework
> >
> > John Stultz (4):
> >   dma-buf: heaps: Add heap helpers
> >   dma-buf: heaps: Add system heap to dmabuf heaps
> >   dma-buf: heaps: Add CMA heap to dmabuf heaps
> >   kselftests: Add dma-heap test
> >
> >  MAINTAINERS                                   |  18 ++
> >  drivers/dma-buf/Kconfig                       |  11 +
> >  drivers/dma-buf/Makefile                      |   2 +
> >  drivers/dma-buf/dma-heap.c                    | 250 ++++++++++++++++
> >  drivers/dma-buf/heaps/Kconfig                 |  14 +
> >  drivers/dma-buf/heaps/Makefile                |   4 +
> >  drivers/dma-buf/heaps/cma_heap.c              | 164 +++++++++++
> >  drivers/dma-buf/heaps/heap-helpers.c          | 269 ++++++++++++++++++
> >  drivers/dma-buf/heaps/heap-helpers.h          |  55 ++++
> >  drivers/dma-buf/heaps/system_heap.c           | 122 ++++++++
> >  include/linux/dma-heap.h                      |  59 ++++
> >  include/uapi/linux/dma-heap.h                 |  55 ++++
> >  tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
> >  .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 +++++++++++++++
> >  14 files changed, 1262 insertions(+)
> >  create mode 100644 drivers/dma-buf/dma-heap.c
> >  create mode 100644 drivers/dma-buf/heaps/Kconfig
> >  create mode 100644 drivers/dma-buf/heaps/Makefile
> >  create mode 100644 drivers/dma-buf/heaps/cma_heap.c
> >  create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
> >  create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
> >  create mode 100644 drivers/dma-buf/heaps/system_heap.c
> >  create mode 100644 include/linux/dma-heap.h
> >  create mode 100644 include/uapi/linux/dma-heap.h
> >  create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
> >  create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> >
> > --
> > 2.17.1
> >
> 
> 
> -- 
> Thanks and regards,
> 
> Sumit Semwal
> Linaro Consumer Group - Kernel Team Lead
> Linaro.org │ Open source software for ARM SoCs
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-09-24 16:22   ` Ayan Halder
@ 2019-09-24 16:28     ` John Stultz
  2019-10-09 17:37     ` Ayan Halder
  1 sibling, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-09-24 16:28 UTC (permalink / raw)
  To: Ayan Halder
  Cc: Sumit Semwal, Sudipto Paul, Vincent Donnefort, Chenbo Feng, lkml,
	Liam Mark, Andrew F . Davis, Christoph Hellwig,
	Alistair Strachan, DRI mailing list, Hridya Valsaraju,
	Pratik Patel, nd

On Tue, Sep 24, 2019 at 9:22 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> I tested these patches using our internal test suite with Arm,komeda
> driver and the following node in dts
>
>         reserved-memory {
>                 #address-cells = <0x2>;
>                 #size-cells = <0x2>;
>                 ranges;
>
>                 framebuffer@60000000 {
>                         compatible = "shared-dma-pool";
>                         linux,cma-default;
>                         reg = <0x0 0x60000000 0x0 0x8000000>;
>                 };
>         }
>
> The tests went fine. Our tests allocate framebuffers of different
> sizes, post them on screen, and the driver writes back to one of the
> framebuffers. I have not tested any performance, latency, or
> cache management related behavior. So, if that looks appropriate, feel
> free to add:
> Tested-by: Ayan Kumar Halder <ayan.halder@arm.com>

Thanks so much for testing! I really appreciate it!

> Are you planning to write some igt tests for it ?

I'm not personally as familiar with igt yet, which is why I started
with kselftest, but it's a good idea. I'll take a look and try to take
a swing at it after Sumit queues the patchset (I need to resubmit to
address Brian's feedback).

thanks
-john


* Re: [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework
  2019-09-23 22:08   ` Brian Starkey
@ 2019-09-24 17:10     ` John Stultz
  0 siblings, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-09-24 17:10 UTC (permalink / raw)
  To: Brian Starkey
  Cc: lkml, Andrew F. Davis, Laura Abbott, Benjamin Gaignard,
	Sumit Semwal, Liam Mark, Pratik Patel, Vincent Donnefort,
	Sudipto Paul, Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

On Mon, Sep 23, 2019 at 3:08 PM Brian Starkey <Brian.Starkey@arm.com> wrote:
> One miniscule nit from me below, but whether you change it or not, you
> can add my r-b:
>
> Reviewed-by: Brian Starkey <brian.starkey@arm.com>
>
> Thanks for pushing this through!

Thanks again for the review! I'll address your issues and resubmit.

thanks
-john


* Re: [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test
  2019-09-23 22:11   ` Brian Starkey
@ 2019-09-26 21:36     ` John Stultz
  2019-09-27  9:20       ` Brian Starkey
  0 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-09-26 21:36 UTC (permalink / raw)
  To: Brian Starkey
  Cc: lkml, Benjamin Gaignard, Sumit Semwal, Liam Mark, Pratik Patel,
	Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

On Mon, Sep 23, 2019 at 3:12 PM Brian Starkey <Brian.Starkey@arm.com> wrote:
>
> I didn't see any response about using the test harness. Did you decide
> against it?

Hey! Spent a little time looking at this bit and just wanted to reply
to this point.  So first, apologies, I think I missed the suggestion
earlier. That said, now that I've looked a little at the test
harness, at least at this point it feels like it makes things harder
to reason about than standard C code.  Maybe I need to spend a bit more
time on it, but I'm a little hesitant to swap over just yet.

I'm not particularly passionate on this point, but are you?  Or was
this just a recommendation to check it out and consider it?

thanks
-john


* Re: [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test
  2019-09-26 21:36     ` John Stultz
@ 2019-09-27  9:20       ` Brian Starkey
  0 siblings, 0 replies; 39+ messages in thread
From: Brian Starkey @ 2019-09-27  9:20 UTC (permalink / raw)
  To: John Stultz
  Cc: lkml, Benjamin Gaignard, Sumit Semwal, Liam Mark, Pratik Patel,
	Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel, nd

Hi John,

On Thu, Sep 26, 2019 at 02:36:33PM -0700, John Stultz wrote:
> On Mon, Sep 23, 2019 at 3:12 PM Brian Starkey <Brian.Starkey@arm.com> wrote:
> >
> > I didn't see any response about using the test harness. Did you decide
> > against it?
> 
> Hey! I spent a little time looking at this bit and just wanted to reply
> to this point.  So first, apologies, I think I missed the suggestion
> earlier. That said, now that I've looked a little bit at the test
> harness, at this point it feels like it makes the tests harder
> to reason about than standard C code.  Maybe I need to spend a bit more
> time on it, but I'm a little hesitant to swap over just yet.
> 
> I'm not particularly passionate on this point, but are you?  Or was
> this just a recommendation to check it out and consider it?

No particularly strong feelings. I was just poking around the kernel
docs for testing, and the only info there is about the test harness, so
I wanted to check if you'd seen/considered it.

A quick grep of tools/testing shows that it's obviously not that
popular... so if you don't fancy it I won't complain.
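For readers following along, a minimal sketch of what a harness-based
dma-heap test might look like, assuming it lives under
tools/testing/selftests/ (so it can pick up kselftest_harness.h from the
kernel tree) and that /dev/dma_heap/system exists on the target:

```c
/* Sketch only: builds against the in-tree kselftest harness, not standalone. */
#include <fcntl.h>
#include <unistd.h>

#include "../kselftest_harness.h"

/* Each TEST() becomes a forked test case with TAP-style reporting. */
TEST(heap_open)
{
	int fd = open("/dev/dma_heap/system", O_RDONLY);

	ASSERT_GE(fd, 0);
	close(fd);
}

TEST_HARNESS_MAIN
```

The trade-off John describes is visible here: the macros hide the main
loop and process handling, which is convenient but less transparent than
plain C.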

Thanks,
-Brian

> 
> thanks
> -john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
                   ` (5 preceding siblings ...)
  2019-09-19 16:51 ` [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) Sumit Semwal
@ 2019-09-30 13:40 ` Laura Abbott
       [not found] ` <20190930074335.6636-1-hdanton@sina.com>
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 39+ messages in thread
From: Laura Abbott @ 2019-09-30 13:40 UTC (permalink / raw)
  To: John Stultz, lkml
  Cc: Benjamin Gaignard, Sumit Semwal, Liam Mark, Pratik Patel,
	Brian Starkey, Vincent Donnefort, Sudipto Paul, Andrew F . Davis,
	Christoph Hellwig, Chenbo Feng, Alistair Strachan,
	Hridya Valsaraju, dri-devel

On 9/6/19 2:47 PM, John Stultz wrote:
> Here is yet another pass at the dma-buf heaps patchset Andrew
> and I have been working on which tries to destage a fair chunk
> of ION functionality.
> 
> The patchset implements per-heap devices which can be opened
> directly and then an ioctl is used to allocate a dmabuf from the
> heap.
> 
> The interface is similar to, but much simpler than, ION's,
> providing only an ALLOC ioctl.
> 
> Also, I've provided relatively simple system and cma heaps.
> 
> I've booted and tested these patches with AOSP on the HiKey960
> using the kernel tree here:
>    https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
> 
> And the userspace changes here:
>    https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
> 
> Compared to ION, this patchset is missing the system-contig,
> carveout and chunk heaps, as I don't have a device that uses
> those, so I'm unable to do much useful validation there.
> Additionally we have no upstream users of chunk or carveout,
> and the system-contig has been deprecated in the common/android-*
> kernels, so this should be ok.
> 
> I've also removed the stats accounting, since any such accounting
> should be implemented by dma-buf core or the heaps themselves.
> 
> Most of the changes in this revision are addressing the more
> concrete feedback from Christoph (many thanks!). Though I'm not
> sure if some of the less specific feedback was completely resolved
> in discussion last time around. Please let me know!
> 
> New in v8:
> * Make struct dma_heap_ops consts (Suggested by Christoph)
> * Add flush_kernel_vmap_range/invalidate_kernel_vmap_range calls
>    (suggested by Christoph)
> * Condense dma_heap_buffer and heap_helper_buffer (suggested by
>    Christoph)
> * Get rid of needless struct system_heap (suggested by Christoph)
> * Fix indentation by using shorter argument names (suggested by
>    Christoph)
> * Remove unused private_flags value
> * Add forgotten include file to fix build issue on x86
> * Checkpatch whitespace fixups
> 
> Thoughts and feedback would be greatly appreciated!
> 
> thanks
> -john
> 
> Cc: Laura Abbott <labbott@redhat.com>
> Cc: Benjamin Gaignard <benjamin.gaignard@linaro.org>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>
> Cc: Liam Mark <lmark@codeaurora.org>
> Cc: Pratik Patel <pratikp@codeaurora.org>
> Cc: Brian Starkey <Brian.Starkey@arm.com>
> Cc: Vincent Donnefort <Vincent.Donnefort@arm.com>
> Cc: Sudipto Paul <Sudipto.Paul@arm.com>
> Cc: Andrew F. Davis <afd@ti.com>
> Cc: Christoph Hellwig <hch@infradead.org>
> Cc: Chenbo Feng <fengc@google.com>
> Cc: Alistair Strachan <astrachan@google.com>
> Cc: Hridya Valsaraju <hridya@google.com>
> Cc: dri-devel@lists.freedesktop.org
> 
> 
> Andrew F. Davis (1):
>    dma-buf: Add dma-buf heaps framework
> 
> John Stultz (4):
>    dma-buf: heaps: Add heap helpers
>    dma-buf: heaps: Add system heap to dmabuf heaps
>    dma-buf: heaps: Add CMA heap to dmabuf heaps
>    kselftests: Add dma-heap test
> 
>   MAINTAINERS                                   |  18 ++
>   drivers/dma-buf/Kconfig                       |  11 +
>   drivers/dma-buf/Makefile                      |   2 +
>   drivers/dma-buf/dma-heap.c                    | 250 ++++++++++++++++
>   drivers/dma-buf/heaps/Kconfig                 |  14 +
>   drivers/dma-buf/heaps/Makefile                |   4 +
>   drivers/dma-buf/heaps/cma_heap.c              | 164 +++++++++++
>   drivers/dma-buf/heaps/heap-helpers.c          | 269 ++++++++++++++++++
>   drivers/dma-buf/heaps/heap-helpers.h          |  55 ++++
>   drivers/dma-buf/heaps/system_heap.c           | 122 ++++++++
>   include/linux/dma-heap.h                      |  59 ++++
>   include/uapi/linux/dma-heap.h                 |  55 ++++
>   tools/testing/selftests/dmabuf-heaps/Makefile |   9 +
>   .../selftests/dmabuf-heaps/dmabuf-heap.c      | 230 +++++++++++++++
>   14 files changed, 1262 insertions(+)
>   create mode 100644 drivers/dma-buf/dma-heap.c
>   create mode 100644 drivers/dma-buf/heaps/Kconfig
>   create mode 100644 drivers/dma-buf/heaps/Makefile
>   create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>   create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
>   create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>   create mode 100644 drivers/dma-buf/heaps/system_heap.c
>   create mode 100644 include/linux/dma-heap.h
>   create mode 100644 include/uapi/linux/dma-heap.h
>   create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
>   create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> 

I've seen a couple of details that need to be fixed and can be
fixed fairly easily, but as far as the overall design goes it looks
good. Once those are fixed up, you can add

Acked-by: Laura Abbott <labbott@redhat.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps
       [not found] ` <20190930074335.6636-1-hdanton@sina.com>
@ 2019-10-01 20:50   ` John Stultz
  0 siblings, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-01 20:50 UTC (permalink / raw)
  To: Hillf Danton
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
	Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

On Mon, Sep 30, 2019 at 12:43 AM Hillf Danton <hdanton@sina.com> wrote:
>
>
> On Fri,  6 Sep 2019 18:47:09 +0000 John Stultz wrote:
> >
> > +static int system_heap_allocate(struct dma_heap *heap,
> > +                             unsigned long len,
> > +                             unsigned long fd_flags,
> > +                             unsigned long heap_flags)
> > +{
> > +     struct heap_helper_buffer *helper_buffer;
> > +     struct dma_buf *dmabuf;
> > +     int ret = -ENOMEM;
> > +     pgoff_t pg;
> > +
> > +     helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> > +     if (!helper_buffer)
> > +             return -ENOMEM;
> > +
> > +     init_heap_helper_buffer(helper_buffer, system_heap_free);
> > +     helper_buffer->flags = heap_flags;
> > +     helper_buffer->heap = heap;
> > +     helper_buffer->size = len;
> > +
> A couple of lines look to be needed to handle len if it is not
> PAGE_SIZE-aligned.

Hey! Thanks so much for the review!

dma_heap_buffer_alloc() sets "len = PAGE_ALIGN(len);" before calling
into the heap allocation hook.
So hopefully this isn't a concern, or am I missing something?

thanks
-john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework
       [not found] ` <20190930032651.8264-1-hdanton@sina.com>
@ 2019-10-02 16:14   ` John Stultz
  0 siblings, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-02 16:14 UTC (permalink / raw)
  To: Hillf Danton
  Cc: lkml, Andrew F . Davis, Laura Abbott, Benjamin Gaignard,
	Sumit Semwal, Liam Mark, Pratik Patel, Brian Starkey,
	Vincent Donnefort, Sudipto Paul, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

On Sun, Sep 29, 2019 at 8:27 PM Hillf Danton <hdanton@sina.com> wrote:
> On Fri,  6 Sep 2019 18:47:08 +0000 John Stultz wrote:
> > +/**
> > + * dma_heap_get_data() - get per-heap driver data
> > + * @heap: DMA-Heap to retrieve private data for
> > + *
> > + * Returns:
> > + * The per-heap data for the heap.
> > + */
> > +void *dma_heap_get_data(struct dma_heap *heap);
> > +
>
> It will help readers understand this framework more than you'd think
> if s/get_data/get_drvdata/

Sounds good!

Thanks for the review and suggestion!
-john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps
       [not found] ` <20190930081434.248-1-hdanton@sina.com>
@ 2019-10-02 16:15   ` John Stultz
  0 siblings, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-02 16:15 UTC (permalink / raw)
  To: Hillf Danton
  Cc: lkml, Laura Abbott, Benjamin Gaignard, Sumit Semwal, Liam Mark,
	Pratik Patel, Brian Starkey, Vincent Donnefort, Sudipto Paul,
	Andrew F . Davis, Christoph Hellwig, Chenbo Feng,
	Alistair Strachan, Hridya Valsaraju, dri-devel

On Mon, Sep 30, 2019 at 1:14 AM Hillf Danton <hdanton@sina.com> wrote:
> On Fri,  6 Sep 2019 18:47:09 +0000 John Stultz wrote:
> >
> > +     cma_pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> > +     if (!cma_pages)
> > +             goto free_buf;
> > +
> > +     if (PageHighMem(cma_pages)) {
> > +             unsigned long nr_clear_pages = nr_pages;
> > +             struct page *page = cma_pages;
> > +
> > +             while (nr_clear_pages > 0) {
> > +                     void *vaddr = kmap_atomic(page);
> > +
> > +                     memset(vaddr, 0, PAGE_SIZE);
> > +                     kunmap_atomic(vaddr);
> > +                     page++;
> > +                     nr_clear_pages--;
> > +             }
> > +     } else {
> > +             memset(page_address(cma_pages), 0, size);
> > +     }
>
> Take a breath after zeroing each page, and take a peep at pending
> signals.

Ok. Took a swing at this. It will be in the next revision.
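A rough sketch of what that swing might look like, folding a
fatal-signal check into the quoted clearing loop (kernel-internal code;
the free_cma label is hypothetical and the exact form in the next
revision may differ):

```c
/* Sketch: bail out of page zeroing if the task has been killed,
 * so a SIGKILL'd process doesn't keep clearing a large CMA buffer. */
while (nr_clear_pages > 0) {
	void *vaddr = kmap_atomic(page);

	memset(vaddr, 0, PAGE_SIZE);
	kunmap_atomic(vaddr);
	/* Avoid wasting time zeroing memory if the process was killed. */
	if (fatal_signal_pending(current))
		goto free_cma;	/* hypothetical cleanup label */
	page++;
	nr_clear_pages--;
}
```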

Thanks again for the review!
-john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-09-24 16:22   ` Ayan Halder
  2019-09-24 16:28     ` John Stultz
@ 2019-10-09 17:37     ` Ayan Halder
  2019-10-09 18:27       ` Andrew F. Davis
  2019-10-16 17:34       ` John Stultz
  1 sibling, 2 replies; 39+ messages in thread
From: Ayan Halder @ 2019-10-09 17:37 UTC (permalink / raw)
  To: Sumit Semwal
  Cc: nd, Alistair Strachan, Vincent Donnefort, Chenbo Feng, lkml,
	Liam Mark, Andrew F . Davis, Christoph Hellwig, DRI mailing list,
	Hridya Valsaraju, Sudipto Paul, Pratik Patel

On Tue, Sep 24, 2019 at 04:22:18PM +0000, Ayan Halder wrote:
> On Thu, Sep 19, 2019 at 10:21:52PM +0530, Sumit Semwal wrote:
> > Hello Christoph, everyone,
> > 
> > It looks like most of the feedback has been taken care of. If there's
> > no more objection to this series, I'd like to merge it in soon.
> > 
> > If there are any more review comments, may I request you to please provide them?
> 
> I tested these patches using our internal test suite with Arm,komeda
> driver and the following node in dts
> 
>         reserved-memory {
>                 #address-cells = <0x2>;
>                 #size-cells = <0x2>;
>                 ranges;
> 
>                 framebuffer@60000000 {
>                         compatible = "shared-dma-pool";
>                         linux,cma-default;
>                         reg = <0x0 0x60000000 0x0 0x8000000>;
>                 };
>         }
Apologies for the confusion, this dts node is irrelevant as our tests were using
the cma heap (via /dev/dma_heap/reserved).

That raises a question: how do we represent reserved-memory nodes
(as shown above) via the dma-buf heaps framework?
> 
> The tests went fine. Our tests allocate framebuffers of different
> sizes, post them on screen, and the driver writes back to one of the
> framebuffers. I have not tested any performance, latency or
> cache-management related behaviour. So, if that looks appropriate,
> feel free to add:
> Tested-by: Ayan Kumar Halder <ayan.halder@arm.com>
> 
> Are you planning to write some igt tests for it?

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-09 17:37     ` Ayan Halder
@ 2019-10-09 18:27       ` Andrew F. Davis
  2019-10-14  9:07         ` Brian Starkey
  2019-10-16 17:34       ` John Stultz
  1 sibling, 1 reply; 39+ messages in thread
From: Andrew F. Davis @ 2019-10-09 18:27 UTC (permalink / raw)
  To: Ayan Halder, Sumit Semwal
  Cc: nd, Alistair Strachan, Vincent Donnefort, Chenbo Feng, lkml,
	Liam Mark, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Sudipto Paul, Pratik Patel

On 10/9/19 1:37 PM, Ayan Halder wrote:
> On Tue, Sep 24, 2019 at 04:22:18PM +0000, Ayan Halder wrote:
>> I tested these patches using our internal test suite with Arm,komeda
>> driver and the following node in dts
>>
>>         reserved-memory {
>>                 #address-cells = <0x2>;
>>                 #size-cells = <0x2>;
>>                 ranges;
>>
>>                 framebuffer@60000000 {
>>                         compatible = "shared-dma-pool";
>>                         linux,cma-default;
>>                         reg = <0x0 0x60000000 0x0 0x8000000>;
>>                 };
>>         }
> Apologies for the confusion, this dts node is irrelevant as our tests were using
> the cma heap (via /dev/dma_heap/reserved).
> 
> That raises a question: how do we represent reserved-memory nodes
> (as shown above) via the dma-buf heaps framework?


The CMA driver that registers these nodes will have to be expanded to
export them using this framework as needed. We do something similar to
export SRAM nodes:

https://lkml.org/lkml/2019/3/21/575

Unlike the system/default-cma driver which can be centralized in the
tree, these extra exporters will probably live out in other subsystems
and so are added in later steps.
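As a rough illustration of what such an exporter might look like against
the v8 API (only dma_heap_export_info and dma_heap_add() come from the
series; the carveout_* names and the reserved_mem hookup are
hypothetical):

```c
/* Hypothetical sketch: register one reserved-memory region as a heap. */
static const struct dma_heap_ops carveout_heap_ops = {
	.allocate = carveout_heap_allocate,	/* exporter-provided hook */
};

static int carveout_heap_create(struct reserved_mem *rmem)
{
	struct dma_heap_export_info exp_info;
	struct dma_heap *heap;

	exp_info.name = "framebuffer";	/* becomes /dev/dma_heap/framebuffer */
	exp_info.ops = &carveout_heap_ops;
	exp_info.priv = rmem;		/* heap ops allocate from this region */

	heap = dma_heap_add(&exp_info);
	return PTR_ERR_OR_ZERO(heap);
}
```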

Andrew



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-09 18:27       ` Andrew F. Davis
@ 2019-10-14  9:07         ` Brian Starkey
  2019-10-16 17:40           ` Andrew F. Davis
  0 siblings, 1 reply; 39+ messages in thread
From: Brian Starkey @ 2019-10-14  9:07 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Ayan Halder, Sumit Semwal, Sudipto Paul, Vincent Donnefort,
	Chenbo Feng, Alistair Strachan, Liam Mark, lkml,
	Christoph Hellwig, DRI mailing list, Hridya Valsaraju, nd,
	Pratik Patel

Hi Andrew,

On Wed, Oct 09, 2019 at 02:27:15PM -0400, Andrew F. Davis wrote:
> The CMA driver that registers these nodes will have to be expanded to
> export them using this framework as needed. We do something similar to
> export SRAM nodes:
> 
> https://lkml.org/lkml/2019/3/21/575
> 
> Unlike the system/default-cma driver which can be centralized in the
> tree, these extra exporters will probably live out in other subsystems
> and so are added in later steps.
> 
> Andrew

I was under the impression that the "cma_for_each_area" loop in patch
4 would do that (add_cma_heaps). Is that not the case?

Thanks,
-Brian


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-09 17:37     ` Ayan Halder
  2019-10-09 18:27       ` Andrew F. Davis
@ 2019-10-16 17:34       ` John Stultz
  1 sibling, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-16 17:34 UTC (permalink / raw)
  To: Ayan Halder
  Cc: Sumit Semwal, Sudipto Paul, Vincent Donnefort, Chenbo Feng,
	Alistair Strachan, Liam Mark, lkml, Christoph Hellwig,
	DRI mailing list, Andrew F . Davis, Hridya Valsaraju, nd,
	Pratik Patel

On Wed, Oct 9, 2019 at 10:38 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
>
> On Tue, Sep 24, 2019 at 04:22:18PM +0000, Ayan Halder wrote:
> > I tested these patches using our internal test suite with Arm,komeda
> > driver and the following node in dts
> >
> >         reserved-memory {
> >                 #address-cells = <0x2>;
> >                 #size-cells = <0x2>;
> >                 ranges;
> >
> >                 framebuffer@60000000 {
> >                         compatible = "shared-dma-pool";
> >                         linux,cma-default;
> >                         reg = <0x0 0x60000000 0x0 0x8000000>;
> >                 };
> >         }
> Apologies for the confusion, this dts node is irrelevant as our tests were using
> the cma heap (via /dev/dma_heap/reserved).
>
> That raises a question: how do we represent reserved-memory nodes
> (as shown above) via the dma-buf heaps framework?

(Apologies, I didn't initially see this, as I was somehow left off the
reply list.)

So yeah, as Brian mentioned, we generate a heap for each cma area,
so the dts node above should generate a heap named "framebuffer".

For example, on HiKey960 the following patch adds the cma heap
"linux,cma" (it was originally for ION, but the same dt node works for
dmabuf heaps):
  https://git.linaro.org/people/john.stultz/android-dev.git/commit/?h=dev/dma-buf-heap&id=00538fe70e17acf07fdcbc441816d91cdd227207

thanks
-john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-14  9:07         ` Brian Starkey
@ 2019-10-16 17:40           ` Andrew F. Davis
  2019-10-17 19:14             ` John Stultz
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew F. Davis @ 2019-10-16 17:40 UTC (permalink / raw)
  To: Brian Starkey
  Cc: Ayan Halder, Sumit Semwal, Sudipto Paul, Vincent Donnefort,
	Chenbo Feng, Alistair Strachan, Liam Mark, lkml,
	Christoph Hellwig, DRI mailing list, Hridya Valsaraju, nd,
	Pratik Patel

On 10/14/19 5:07 AM, Brian Starkey wrote:
> Hi Andrew,
> 
> On Wed, Oct 09, 2019 at 02:27:15PM -0400, Andrew F. Davis wrote:
>> The CMA driver that registers these nodes will have to be expanded to
>> export them using this framework as needed. We do something similar to
>> export SRAM nodes:
>>
>> https://lkml.org/lkml/2019/3/21/575
>>
>> Unlike the system/default-cma driver which can be centralized in the
>> tree, these extra exporters will probably live out in other subsystems
>> and so are added in later steps.
>>
>> Andrew
> 
> I was under the impression that the "cma_for_each_area" loop in patch
> 4 would do that (add_cma_heaps). Is it not the case?
> 

For these cma nodes yes, I thought you meant reserved memory areas in
general.

Just as a side note, I'm not a huge fan of the cma_for_each_area() to
begin with, it seems a bit out of place when they could be selectively
added as heaps as needed. Not sure how that will work with cma nodes
specifically assigned to devices, seems like we could just steal their
memory space from userspace with this..

Andrew

> Thanks,
> -Brian
> 


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-16 17:40           ` Andrew F. Davis
@ 2019-10-17 19:14             ` John Stultz
  2019-10-17 19:29               ` Andrew F. Davis
  0 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-10-17 19:14 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Brian Starkey, nd, Sudipto Paul, Vincent Donnefort, Chenbo Feng,
	Alistair Strachan, Liam Mark, lkml, Christoph Hellwig,
	DRI mailing list, Hridya Valsaraju, Ayan Halder, Pratik Patel

On Wed, Oct 16, 2019 at 10:41 AM Andrew F. Davis <afd@ti.com> wrote:
> On 10/14/19 5:07 AM, Brian Starkey wrote:
> > Hi Andrew,
> >
> > On Wed, Oct 09, 2019 at 02:27:15PM -0400, Andrew F. Davis wrote:
> >> The CMA driver that registers these nodes will have to be expanded to
> >> export them using this framework as needed. We do something similar to
> >> export SRAM nodes:
> >>
> >> https://lkml.org/lkml/2019/3/21/575
> >>
> >> Unlike the system/default-cma driver which can be centralized in the
> >> tree, these extra exporters will probably live out in other subsystems
> >> and so are added in later steps.
> >>
> >> Andrew
> >
> > I was under the impression that the "cma_for_each_area" loop in patch
> > 4 would do that (add_cma_heaps). Is it not the case?
> >
>
> For these cma nodes yes, I thought you meant reserved memory areas in
> general.

Ok, sorry I didn't see this earlier, not only was I still dropped from
the To list, but the copy I got from dri-devel ended up marked as
spam.

> Just as a side note, I'm not a huge fan of the cma_for_each_area() to
> begin with, it seems a bit out of place when they could be selectively
> added as heaps as needed. Not sure how that will work with cma nodes
> specifically assigned to devices, seems like we could just steal their
> memory space from userspace with this..

So this would be a concern with ION as well, since it does the same
thing, because being able to allocate from multiple CMA heaps for
device-specific purposes is really useful.
And at least with dmabuf heaps each heap can be given its own
permissions, so there's less likelihood of the abuse you describe.

And it also allows various device cma nodes to still be allocated from
using the same interface (rather than having to use a custom driver
ioctl for each device).

But if the objection stands, do you have a proposal for an alternative
way to enumerate a subset of CMA heaps?

thanks
-john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-17 19:14             ` John Stultz
@ 2019-10-17 19:29               ` Andrew F. Davis
  2019-10-17 20:57                 ` John Stultz
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew F. Davis @ 2019-10-17 19:29 UTC (permalink / raw)
  To: John Stultz
  Cc: Brian Starkey, nd, Sudipto Paul, Vincent Donnefort, Chenbo Feng,
	Alistair Strachan, Liam Mark, lkml, Christoph Hellwig,
	DRI mailing list, Hridya Valsaraju, Ayan Halder, Pratik Patel

On 10/17/19 3:14 PM, John Stultz wrote:
> On Wed, Oct 16, 2019 at 10:41 AM Andrew F. Davis <afd@ti.com> wrote:
>> On 10/14/19 5:07 AM, Brian Starkey wrote:
>>> Hi Andrew,
>>>
>>> On Wed, Oct 09, 2019 at 02:27:15PM -0400, Andrew F. Davis wrote:
>>>> The CMA driver that registers these nodes will have to be expanded to
>>>> export them using this framework as needed. We do something similar to
>>>> export SRAM nodes:
>>>>
>>>> https://lkml.org/lkml/2019/3/21/575
>>>>
>>>> Unlike the system/default-cma driver which can be centralized in the
>>>> tree, these extra exporters will probably live out in other subsystems
>>>> and so are added in later steps.
>>>>
>>>> Andrew
>>>
>>> I was under the impression that the "cma_for_each_area" loop in patch
>>> 4 would do that (add_cma_heaps). Is it not the case?
>>>
>>
>> For these cma nodes yes, I thought you meant reserved memory areas in
>> general.
> 
> Ok, sorry I didn't see this earlier, not only was I still dropped from
> the To list, but the copy I got from dri-devel ended up marked as
> spam.
> 
>> Just as a side note, I'm not a huge fan of the cma_for_each_area() to
>> begin with, it seems a bit out of place when they could be selectively
>> added as heaps as needed. Not sure how that will work with cma nodes
>> specifically assigned to devices, seems like we could just steal their
>> memory space from userspace with this..
> 
> So this would be a concern with ION as well, since it does the same
> thing because being able to allocate from multiple CMA heaps for
> device specific purpose is really useful.
> And at least with dmabuf heaps each heap can be given its own
> permissions so there's less likelihood for any abuse as you describe.
> 


Yes, it was a problem with ION also. Having individual files per heap
does help with permissions, but my issue is: what if I don't want my
CMA exported at all? cma_for_each_area() just grabs them all anyway.


> And it also allows various device cma nodes to still be allocated from
> using the same interface (rather then having to use a custom driver
> ioctl for each device).
> 


This is definitely the way to go; it's the implementation of how we get
the CMAs exported in the first place that is a bit odd.


> But if the objection stands, do you have a proposal for an alternative
> way to enumerate a subset of CMA heaps?
> 


When it was in staging, ION had to reach into the CMA framework, as the
other direction would not be allowed, so cma_for_each_area() was added. If
DMA-BUF heaps is not in staging then we can do the opposite, and have
the CMA framework register heaps itself using our framework. That way
the CMA system could decide what areas to export or not (maybe based on
a DT property or similar).
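
A rough sketch of that direction, assuming the dma_heap_add() /
struct dma_heap_export_info interface from patch 1 of this series; the
DT-property gating and the cma_area_wants_heap() predicate are purely
hypothetical, not an existing binding:

```c
/* Hypothetical: the CMA code registering its own heaps, rather than
 * the dma-buf heaps core iterating all areas with cma_for_each_area().
 */
static int cma_register_heap(struct cma *cma)
{
	struct dma_heap_export_info exp_info;

	/* Only export areas explicitly marked for it, e.g. via a
	 * hypothetical "linux,dma-heap" property on the reserved
	 * memory node, recorded when the area was set up.
	 */
	if (!cma_area_wants_heap(cma))	/* hypothetical predicate */
		return 0;

	exp_info.name = cma_get_name(cma);
	exp_info.ops = &cma_heap_ops;	/* from the cma heap patch */
	exp_info.priv = cma;

	return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
}
```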

The end result is the same so we can make this change later (it has to
come after DMA-BUF heaps is in anyway).

Andrew


> thanks
> -john
> 


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-17 19:29               ` Andrew F. Davis
@ 2019-10-17 20:57                 ` John Stultz
  2019-10-18  9:55                   ` Brian Starkey
  0 siblings, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-10-17 20:57 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Brian Starkey, nd, Sudipto Paul, Vincent Donnefort, Chenbo Feng,
	Alistair Strachan, Liam Mark, lkml, Christoph Hellwig,
	DRI mailing list, Hridya Valsaraju, Ayan Halder, Pratik Patel

On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> On 10/17/19 3:14 PM, John Stultz wrote:
> > But if the objection stands, do you have a proposal for an alternative
> > way to enumerate a subset of CMA heaps?
> >
> When in staging ION had to reach into the CMA framework as the other
> direction would not be allowed, so cma_for_each_area() was added. If
> DMA-BUF heaps is not in staging then we can do the opposite, and have
> the CMA framework register heaps itself using our framework. That way
> the CMA system could decide what areas to export or not (maybe based on
> a DT property or similar).

Ok. Though the CMA core doesn't have much sense of DT details either,
so it would probably have to be done in the reserved_mem logic, which
doesn't feel right to me.

I'd probably guess we should have some sort of dt binding to describe
a dmabuf cma heap and from that node link to a CMA node via a
memory-region phandle. Along with maybe the default heap as well? Not
eager to get into another binding review cycle, and I'm not sure what
non-DT systems will do yet, but I'll take a shot at it and iterate.

> The end result is the same so we can make this change later (it has to
> come after DMA-BUF heaps is in anyway).

Well, I'm hesitant to merge code that exposes all the CMA heaps and
then add patches that become more selective, should anyone depend on
the initial behavior. :/

So, <sigh>, I'll start on the rework for the CMA bits.

That said, I'm definitely wanting to make some progress on this patch
series, so maybe we can still merge the core/helpers/system heap and
just hold the cma heap for a rework on the enumeration bits. That way
we can at least get other folks working on switching their vendor
heaps from ION.

Sumit: Does that sound ok? Assuming no other objections, can you take
the v11 set minus the CMA heap patch?

thanks
-john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-17 20:57                 ` John Stultz
@ 2019-10-18  9:55                   ` Brian Starkey
  2019-10-18 18:33                     ` John Stultz
  2019-10-18 18:41                     ` Ayan Halder
  0 siblings, 2 replies; 39+ messages in thread
From: Brian Starkey @ 2019-10-18  9:55 UTC (permalink / raw)
  To: John Stultz
  Cc: Andrew F. Davis, nd, Sudipto Paul, Vincent Donnefort,
	Chenbo Feng, Alistair Strachan, Liam Mark, lkml,
	Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Ayan Halder, Pratik Patel

On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > On 10/17/19 3:14 PM, John Stultz wrote:
> > > But if the objection stands, do you have a proposal for an alternative
> > > way to enumerate a subset of CMA heaps?
> > >
> > When in staging ION had to reach into the CMA framework as the other
> > direction would not be allowed, so cma_for_each_area() was added. If
> > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > the CMA framework register heaps itself using our framework. That way
> > the CMA system could decide what areas to export or not (maybe based on
> > a DT property or similar).
> 
> Ok. Though the CMA core doesn't have much sense of DT details either,
> so it would probably have to be done in the reserved_mem logic, which
> doesn't feel right to me.
> 
> I'd probably guess we should have some sort of dt binding to describe
> a dmabuf cma heap and from that node link to a CMA node via a
> memory-region phandle. Along with maybe the default heap as well? Not
> eager to get into another binding review cycle, and I'm not sure what
> non-DT systems will do yet, but I'll take a shot at it and iterate.
> 
> > The end result is the same so we can make this change later (it has to
> > come after DMA-BUF heaps is in anyway).
> 
> Well, I'm hesitant to merge code that exposes all the CMA heaps and
> then add patches that becomes more selective, should anyone depend on
> the initial behavior. :/

How about only auto-adding the system default CMA region (cma->name ==
"reserved")?
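
If only the default region is auto-added, the registration could shrink
to something like the sketch below, assuming the per-area helper from
patch 4 (called __add_cma_heap() here; the exact name may differ).
dev_get_cma_area(NULL) returns the system default CMA area:

```c
/* Sketch: register only the default CMA area as a heap, instead of
 * iterating every area with cma_for_each_area().
 */
static int add_default_cma_heap(void)
{
	struct cma *default_cma = dev_get_cma_area(NULL);
	int ret = 0;

	if (default_cma)
		ret = __add_cma_heap(default_cma, NULL);

	return ret;
}
module_init(add_default_cma_heap);
```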

And/or the CMA auto-add could be behind a config option? It seems a
shame to further delay this, and the CMA heap itself really is useful.

Cheers,
-Brian

> 
> So, <sigh>, I'll start on the rework for the CMA bits.
> 
> That said, I'm definitely wanting to make some progress on this patch
> series, so maybe we can still merge the core/helpers/system heap and
> just hold the cma heap for a rework on the enumeration bits. That way
> we can at least get other folks working on switching their vendor
> heaps from ION.
> 
> Sumit: Does that sound ok? Assuming no other objections, can you take
> the v11 set minus the CMA heap patch?
> 
> thanks
> -john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18  9:55                   ` Brian Starkey
@ 2019-10-18 18:33                     ` John Stultz
  2019-10-18 18:41                     ` Ayan Halder
  1 sibling, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-18 18:33 UTC (permalink / raw)
  To: Brian Starkey
  Cc: Andrew F. Davis, nd, Sudipto Paul, Vincent Donnefort,
	Chenbo Feng, Alistair Strachan, Liam Mark, lkml,
	Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Ayan Halder, Pratik Patel

On Fri, Oct 18, 2019 at 2:55 AM Brian Starkey <Brian.Starkey@arm.com> wrote:
> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > But if the objection stands, do you have a proposal for an alternative
> > > > way to enumerate a subset of CMA heaps?
> > > >
> > > When in staging ION had to reach into the CMA framework as the other
> > > direction would not be allowed, so cma_for_each_area() was added. If
> > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > the CMA framework register heaps itself using our framework. That way
> > > the CMA system could decide what areas to export or not (maybe based on
> > > a DT property or similar).
> >
> > Ok. Though the CMA core doesn't have much sense of DT details either,
> > so it would probably have to be done in the reserved_mem logic, which
> > doesn't feel right to me.
> >
> > I'd probably guess we should have some sort of dt binding to describe
> > a dmabuf cma heap and from that node link to a CMA node via a
> > memory-region phandle. Along with maybe the default heap as well? Not
> > eager to get into another binding review cycle, and I'm not sure what
> > non-DT systems will do yet, but I'll take a shot at it and iterate.
> >
> > > The end result is the same so we can make this change later (it has to
> > > come after DMA-BUF heaps is in anyway).
> >
> > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > then add patches that becomes more selective, should anyone depend on
> > the initial behavior. :/
>
> How about only auto-adding the system default CMA region (cma->name ==
> "reserved")?

Great minds... :)

> And/or the CMA auto-add could be behind a config option? It seems a
> shame to further delay this, and the CMA heap itself really is useful.

thanks
-john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18  9:55                   ` Brian Starkey
  2019-10-18 18:33                     ` John Stultz
@ 2019-10-18 18:41                     ` Ayan Halder
  2019-10-18 18:49                       ` John Stultz
  2019-10-18 18:51                       ` Ayan Halder
  1 sibling, 2 replies; 39+ messages in thread
From: Ayan Halder @ 2019-10-18 18:41 UTC (permalink / raw)
  To: Brian Starkey
  Cc: John Stultz, Andrew F. Davis, nd, Sudipto Paul,
	Vincent Donnefort, Chenbo Feng, Alistair Strachan, Liam Mark,
	lkml, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > But if the objection stands, do you have a proposal for an alternative
> > > > way to enumerate a subset of CMA heaps?
> > > >
> > > When in staging ION had to reach into the CMA framework as the other
> > > direction would not be allowed, so cma_for_each_area() was added. If
> > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > the CMA framework register heaps itself using our framework. That way
> > > the CMA system could decide what areas to export or not (maybe based on
> > > a DT property or similar).
> > 
> > Ok. Though the CMA core doesn't have much sense of DT details either,
> > so it would probably have to be done in the reserved_mem logic, which
> > doesn't feel right to me.
> > 
> > I'd probably guess we should have some sort of dt binding to describe
> > a dmabuf cma heap and from that node link to a CMA node via a
> > memory-region phandle. Along with maybe the default heap as well? Not
> > eager to get into another binding review cycle, and I'm not sure what
> > non-DT systems will do yet, but I'll take a shot at it and iterate.
> > 
> > > The end result is the same so we can make this change later (it has to
> > > come after DMA-BUF heaps is in anyway).
> > 
> > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > then add patches that becomes more selective, should anyone depend on
> > the initial behavior. :/
> 
> How about only auto-adding the system default CMA region (cma->name ==
> "reserved")?
> 
> And/or the CMA auto-add could be behind a config option? It seems a
> shame to further delay this, and the CMA heap itself really is useful.
>
A bit of a detour, coming back to the issue of why the following node
was not getting detected by the dma-buf heaps framework.

        reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;

                display_reserved: framebuffer@60000000 {
                        compatible = "shared-dma-pool";
                        linux,cma-default;
                        reusable; <<<<<<<<<<<<-----------This was missing in our
earlier node
                        reg = <0 0x60000000 0 0x08000000>;
                };
        };

Quoting reserved-memory.txt :-
"The operating system can use the memory in this region with the limitation that
 the device driver(s) owning the region need to be able to reclaim it back"

Thus as per my observation, without 'reusable', rmem_cma_setup()
returns -EINVAL and the reserved-memory is not added as a cma region.

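For reference, the -EINVAL comes from the property check at the top of
rmem_cma_setup() in kernel/dma/contiguous.c, which (modulo kernel
version) looks roughly like:

```c
static int __init rmem_cma_setup(struct reserved_mem *rmem)
{
	unsigned long node = rmem->fdt_node;

	/* CMA-backed reserved memory must be "reusable" and mappable */
	if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
	    of_get_flat_dt_prop(node, "no-map", NULL))
		return -EINVAL;
	/* ... alignment checks and cma_init_reserved_mem() follow ... */
}
```
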
With 'reusable', rmem_cma_setup() succeeds , but the kernel crashes as follows :-

[    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
[    0.458415] Modules linked in:                                                                                                             
[    0.461470] CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc4-01377-g51dbcf03884c-dirty #15                                              
[    0.470017] Hardware name: ARM Juno development board (r0) (DT)                                                                            
[    0.475953] pstate: 80000005 (Nzcv daif -PAN -UAO)                                                                                         
[    0.480755] pc : cma_init_reserved_areas+0xec/0x22c  
[    0.485643] lr : cma_init_reserved_areas+0xe8/0x22c 
<----snip register dump --->

[    0.600646] Unable to handle kernel paging request at virtual address ffff7dffff800000
[    0.608591] Mem abort info:
[    0.611386]   ESR = 0x96000006
<---snip uninteresting bits --->
[    0.681069] pc : cma_init_reserved_areas+0x114/0x22c
[    0.686043] lr : cma_init_reserved_areas+0xe8/0x22c


I am looking into this now. My final objective is to get "/dev/dma_heap/framebuffer"
(as a cma heap).
Any leads?

> Cheers,
> -Brian
> 
> > 
> > So, <sigh>, I'll start on the rework for the CMA bits.
> > 
> > That said, I'm definitely wanting to make some progress on this patch
> > series, so maybe we can still merge the core/helpers/system heap and
> > just hold the cma heap for a rework on the enumeration bits. That way
> > we can at least get other folks working on switching their vendor
> > heaps from ION.
> > 
> > Sumit: Does that sound ok? Assuming no other objections, can you take
> > the v11 set minus the CMA heap patch?
> > 
> > thanks
> > -john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18 18:41                     ` Ayan Halder
@ 2019-10-18 18:49                       ` John Stultz
  2019-10-18 18:57                         ` Ayan Halder
  2019-10-18 18:51                       ` Ayan Halder
  1 sibling, 1 reply; 39+ messages in thread
From: John Stultz @ 2019-10-18 18:49 UTC (permalink / raw)
  To: Ayan Halder
  Cc: Brian Starkey, Andrew F. Davis, nd, Sudipto Paul,
	Vincent Donnefort, Chenbo Feng, Alistair Strachan, Liam Mark,
	lkml, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > > But if the objection stands, do you have a proposal for an alternative
> > > > > way to enumerate a subset of CMA heaps?
> > > > >
> > > > When in staging ION had to reach into the CMA framework as the other
> > > > direction would not be allowed, so cma_for_each_area() was added. If
> > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > > the CMA framework register heaps itself using our framework. That way
> > > > the CMA system could decide what areas to export or not (maybe based on
> > > > a DT property or similar).
> > >
> > > Ok. Though the CMA core doesn't have much sense of DT details either,
> > > so it would probably have to be done in the reserved_mem logic, which
> > > doesn't feel right to me.
> > >
> > > I'd probably guess we should have some sort of dt binding to describe
> > > a dmabuf cma heap and from that node link to a CMA node via a
> > > memory-region phandle. Along with maybe the default heap as well? Not
> > > eager to get into another binding review cycle, and I'm not sure what
> > > non-DT systems will do yet, but I'll take a shot at it and iterate.
> > >
> > > > The end result is the same so we can make this change later (it has to
> > > > come after DMA-BUF heaps is in anyway).
> > >
> > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > > then add patches that becomes more selective, should anyone depend on
> > > the initial behavior. :/
> >
> > How about only auto-adding the system default CMA region (cma->name ==
> > "reserved")?
> >
> > And/or the CMA auto-add could be behind a config option? It seems a
> > shame to further delay this, and the CMA heap itself really is useful.
> >
> A bit of a detour, comming back to the issue why the following node
> was not getting detected by the dma-buf heaps framework.
>
>         reserved-memory {
>                 #address-cells = <2>;
>                 #size-cells = <2>;
>                 ranges;
>
>                 display_reserved: framebuffer@60000000 {
>                         compatible = "shared-dma-pool";
>                         linux,cma-default;
>                         reusable; <<<<<<<<<<<<-----------This was missing in our
> earlier node
>                         reg = <0 0x60000000 0 0x08000000>;
>                 };

Right. It has to be a CMA region for us to expose it from the cma heap.


> With 'reusable', rmem_cma_setup() succeeds , but the kernel crashes as follows :-
>
> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c

Is the value 0x60000000 you're using something you just guessed at? It
seems like the warning here is saying the pfn calculated from the base
address isn't valid.

thanks
-john


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18 18:41                     ` Ayan Halder
  2019-10-18 18:49                       ` John Stultz
@ 2019-10-18 18:51                       ` Ayan Halder
  1 sibling, 0 replies; 39+ messages in thread
From: Ayan Halder @ 2019-10-18 18:51 UTC (permalink / raw)
  To: Brian Starkey
  Cc: Sudipto Paul, Vincent Donnefort, lkml, Chenbo Feng,
	Alistair Strachan, Liam Mark, Andrew F. Davis, Christoph Hellwig,
	DRI mailing list, Hridya Valsaraju, nd, Pratik Patel,
	john.stultz

++ john.stultz@linaro.org (Sorry, somehow I keep dropping your email when
sending. :( )
On Fri, Oct 18, 2019 at 06:41:24PM +0000, Ayan Halder wrote:
> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > > But if the objection stands, do you have a proposal for an alternative
> > > > > way to enumerate a subset of CMA heaps?
> > > > >
> > > > When in staging ION had to reach into the CMA framework as the other
> > > > direction would not be allowed, so cma_for_each_area() was added. If
> > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > > the CMA framework register heaps itself using our framework. That way
> > > > the CMA system could decide what areas to export or not (maybe based on
> > > > a DT property or similar).
> > > 
> > > Ok. Though the CMA core doesn't have much sense of DT details either,
> > > so it would probably have to be done in the reserved_mem logic, which
> > > doesn't feel right to me.
> > > 
> > > I'd probably guess we should have some sort of dt binding to describe
> > > a dmabuf cma heap and from that node link to a CMA node via a
> > > memory-region phandle. Along with maybe the default heap as well? Not
> > > eager to get into another binding review cycle, and I'm not sure what
> > > non-DT systems will do yet, but I'll take a shot at it and iterate.
> > > 
> > > > The end result is the same so we can make this change later (it has to
> > > > come after DMA-BUF heaps is in anyway).
> > > 
> > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > > then add patches that becomes more selective, should anyone depend on
> > > the initial behavior. :/
> > 
> > How about only auto-adding the system default CMA region (cma->name ==
> > "reserved")?
> > 
> > And/or the CMA auto-add could be behind a config option? It seems a
> > shame to further delay this, and the CMA heap itself really is useful.
> >
> A bit of a detour, comming back to the issue why the following node
> was not getting detected by the dma-buf heaps framework.
> 
>         reserved-memory {
>                 #address-cells = <2>;
>                 #size-cells = <2>;
>                 ranges;
> 
>                 display_reserved: framebuffer@60000000 {
>                         compatible = "shared-dma-pool";
>                         linux,cma-default;
>                         reusable; <<<<<<<<<<<<-----------This was missing in our
> earlier node
>                         reg = <0 0x60000000 0 0x08000000>;
>                 };
>  
> Quoting reserved-memory.txt :-
> "The operating system can use the memory in this region with the limitation that
>  the device driver(s) owning the region need to be able to reclaim it back"
> 
> Thus as per my observation, without 'reusable', rmem_cma_setup()
> returns -EINVAL and the reserved-memory is not added as a cma region.
> 
> With 'reusable', rmem_cma_setup() succeeds , but the kernel crashes as follows :-
> 
> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> [    0.458415] Modules linked in:                                                                                                             
> [    0.461470] CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc4-01377-g51dbcf03884c-dirty #15                                              
> [    0.470017] Hardware name: ARM Juno development board (r0) (DT)                                                                            
> [    0.475953] pstate: 80000005 (Nzcv daif -PAN -UAO)                                                                                         
> [    0.480755] pc : cma_init_reserved_areas+0xec/0x22c  
> [    0.485643] lr : cma_init_reserved_areas+0xe8/0x22c 
> <----snip register dump --->
> 
> [    0.600646] Unable to handle kernel paging request at virtual address ffff7dffff800000
> [    0.608591] Mem abort info:
> [    0.611386]   ESR = 0x96000006
> <---snip uninteresting bits --->
> [    0.681069] pc : cma_init_reserved_areas+0x114/0x22c
> [    0.686043] lr : cma_init_reserved_areas+0xe8/0x22c
> 
> 
> I am looking into this now. My final objective is to get "/dev/dma_heap/framebuffer"
> (as a cma heap).
> Any leads?
> 
> > Cheers,
> > -Brian
> > 
> > > 
> > > So, <sigh>, I'll start on the rework for the CMA bits.
> > > 
> > > That said, I'm definitely wanting to make some progress on this patch
> > > series, so maybe we can still merge the core/helpers/system heap and
> > > just hold the cma heap for a rework on the enumeration bits. That way
> > > we can at least get other folks working on switching their vendor
> > > heaps from ION.
> > > 
> > > Sumit: Does that sound ok? Assuming no other objections, can you take
> > > the v11 set minus the CMA heap patch?
> > > 
> > > thanks
> > > -john
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel


* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18 18:49                       ` John Stultz
@ 2019-10-18 18:57                         ` Ayan Halder
  2019-10-18 19:04                           ` John Stultz
  2019-10-19 13:41                           ` Andrew F. Davis
  0 siblings, 2 replies; 39+ messages in thread
From: Ayan Halder @ 2019-10-18 18:57 UTC (permalink / raw)
  To: John Stultz
  Cc: Brian Starkey, Andrew F. Davis, nd, Sudipto Paul,
	Vincent Donnefort, Chenbo Feng, Alistair Strachan, Liam Mark,
	lkml, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> > On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > > On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > > > On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > > > > On 10/17/19 3:14 PM, John Stultz wrote:
> > > > > > But if the objection stands, do you have a proposal for an alternative
> > > > > > way to enumerate a subset of CMA heaps?
> > > > > >
> > > > > When in staging ION had to reach into the CMA framework as the other
> > > > > direction would not be allowed, so cma_for_each_area() was added. If
> > > > > DMA-BUF heaps is not in staging then we can do the opposite, and have
> > > > > the CMA framework register heaps itself using our framework. That way
> > > > > the CMA system could decide what areas to export or not (maybe based on
> > > > > a DT property or similar).
> > > >
> > > > Ok. Though the CMA core doesn't have much sense of DT details either,
> > > > so it would probably have to be done in the reserved_mem logic, which
> > > > doesn't feel right to me.
> > > >
> > > > I'd probably guess we should have some sort of dt binding to describe
> > > > a dmabuf cma heap and from that node link to a CMA node via a
> > > > memory-region phandle. Along with maybe the default heap as well? Not
> > > > eager to get into another binding review cycle, and I'm not sure what
> > > > non-DT systems will do yet, but I'll take a shot at it and iterate.
> > > >
> > > > > The end result is the same so we can make this change later (it has to
> > > > > come after DMA-BUF heaps is in anyway).
> > > >
> > > > Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > > > then add patches that become more selective, should anyone depend on
> > > > the initial behavior. :/
> > >
> > > How about only auto-adding the system default CMA region (cma->name ==
> > > "reserved")?
> > >
> > > And/or the CMA auto-add could be behind a config option? It seems a
> > > shame to further delay this, and the CMA heap itself really is useful.
> > >
> > A bit of a detour, coming back to the issue of why the following node
> > was not getting detected by the dma-buf heaps framework.
> >
> >         reserved-memory {
> >                 #address-cells = <2>;
> >                 #size-cells = <2>;
> >                 ranges;
> >
> >                 display_reserved: framebuffer@60000000 {
> >                         compatible = "shared-dma-pool";
> >                         linux,cma-default;
> >                         reusable; <<<<<<<<<<<<-----------This was missing in our
> > earlier node
> >                         reg = <0 0x60000000 0 0x08000000>;
> >                 };
> 
> Right. It has to be a CMA region for us to expose it from the cma heap.
> 
> 
> > With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> >
> > [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> 
> Is the value 0x60000000 you're using something you just guessed at? It
> seems like the warning here is saying the pfn calculated from the base
> address isn't valid.
It is a valid memory region we use to allocate framebuffers.
> 
> thanks
> -john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18 18:57                         ` Ayan Halder
@ 2019-10-18 19:04                           ` John Stultz
  2019-10-19 13:41                           ` Andrew F. Davis
  1 sibling, 0 replies; 39+ messages in thread
From: John Stultz @ 2019-10-18 19:04 UTC (permalink / raw)
  To: Ayan Halder
  Cc: Brian Starkey, Andrew F. Davis, nd, Sudipto Paul,
	Vincent Donnefort, Chenbo Feng, Alistair Strachan, Liam Mark,
	lkml, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Fri, Oct 18, 2019 at 11:57 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
> > On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> > > With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> > >
> > > [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> >
> > Is the value 0x60000000 you're using something you just guessed at? It
> > seems like the warning here is saying the pfn calculated from the base
> > address isn't valid.
> It is a valid memory region we use to allocate framebuffers.

Hrm. I guess I'd suggest digging to figure out why the kernel doesn't
see it as such.

Does this only happen with my patches applied? I'm sort of assuming
you can trip this even without them, but maybe I'm wrong?

thanks
-john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-18 18:57                         ` Ayan Halder
  2019-10-18 19:04                           ` John Stultz
@ 2019-10-19 13:41                           ` Andrew F. Davis
  2019-10-21  9:18                             ` Brian Starkey
  1 sibling, 1 reply; 39+ messages in thread
From: Andrew F. Davis @ 2019-10-19 13:41 UTC (permalink / raw)
  To: Ayan Halder, John Stultz
  Cc: Brian Starkey, nd, Sudipto Paul, Vincent Donnefort, Chenbo Feng,
	Alistair Strachan, Liam Mark, lkml, Christoph Hellwig,
	DRI mailing list, Hridya Valsaraju, Pratik Patel

On 10/18/19 2:57 PM, Ayan Halder wrote:
> On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
>> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
>>> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
>>>> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
>>>>> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
>>>>>> On 10/17/19 3:14 PM, John Stultz wrote:
>>>>>>> But if the objection stands, do you have a proposal for an alternative
>>>>>>> way to enumerate a subset of CMA heaps?
>>>>>>>
>>>>>> When in staging, ION had to reach into the CMA framework, as the other
>>>>>> direction would not be allowed, so cma_for_each_area() was added. If
>>>>>> DMA-BUF heaps is not in staging then we can do the opposite, and have
>>>>>> the CMA framework register heaps itself using our framework. That way
>>>>>> the CMA system could decide what areas to export or not (maybe based on
>>>>>> a DT property or similar).
>>>>>
>>>>> Ok. Though the CMA core doesn't have much sense of DT details either,
>>>>> so it would probably have to be done in the reserved_mem logic, which
>>>>> doesn't feel right to me.
>>>>>
>>>>> I'd probably guess we should have some sort of dt binding to describe
>>>>> a dmabuf cma heap and from that node link to a CMA node via a
>>>>> memory-region phandle. Along with maybe the default heap as well? Not
>>>>> eager to get into another binding review cycle, and I'm not sure what
>>>>> non-DT systems will do yet, but I'll take a shot at it and iterate.
>>>>>
>>>>>> The end result is the same so we can make this change later (it has to
>>>>>> come after DMA-BUF heaps is in anyway).
>>>>>
>>>>> Well, I'm hesitant to merge code that exposes all the CMA heaps and
>>>>> then add patches that become more selective, should anyone depend on
>>>>> the initial behavior. :/
>>>>
>>>> How about only auto-adding the system default CMA region (cma->name ==
>>>> "reserved")?
>>>>
>>>> And/or the CMA auto-add could be behind a config option? It seems a
>>>> shame to further delay this, and the CMA heap itself really is useful.
>>>>
>>> A bit of a detour, coming back to the issue of why the following node
>>> was not getting detected by the dma-buf heaps framework.
>>>
>>>         reserved-memory {
>>>                 #address-cells = <2>;
>>>                 #size-cells = <2>;
>>>                 ranges;
>>>
>>>                 display_reserved: framebuffer@60000000 {
>>>                         compatible = "shared-dma-pool";
>>>                         linux,cma-default;
>>>                         reusable; <<<<<<<<<<<<-----------This was missing in our
>>> earlier node
>>>                         reg = <0 0x60000000 0 0x08000000>;
>>>                 };
>>
>> Right. It has to be a CMA region for us to expose it from the cma heap.
>>
>>
>>> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
>>>
>>> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
>>
>> Is the value 0x60000000 you're using something you just guessed at? It
>> seems like the warning here is saying the pfn calculated from the base
>> address isn't valid.
> It is a valid memory region we use to allocate framebuffers.


But does it have a valid kernel virtual mapping? Most ARM systems (just
assuming you are working on ARM :)) that I'm familiar with have the DRAM
space starting at 0x80000000 and so don't start having valid pfns until
that point. Is this address you are reserving an SRAM?

Andrew


>>
>> thanks
>> -john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-19 13:41                           ` Andrew F. Davis
@ 2019-10-21  9:18                             ` Brian Starkey
  2019-10-22 13:51                               ` Ayan Halder
  0 siblings, 1 reply; 39+ messages in thread
From: Brian Starkey @ 2019-10-21  9:18 UTC (permalink / raw)
  To: Andrew F. Davis
  Cc: Ayan Halder, John Stultz, nd, Sudipto Paul, Vincent Donnefort,
	Chenbo Feng, Alistair Strachan, Liam Mark, lkml,
	Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Sat, Oct 19, 2019 at 09:41:27AM -0400, Andrew F. Davis wrote:
> On 10/18/19 2:57 PM, Ayan Halder wrote:
> > On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
> >> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> >>> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> >>>> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> >>>>> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> >>>>>> On 10/17/19 3:14 PM, John Stultz wrote:
> >>>>>>> But if the objection stands, do you have a proposal for an alternative
> >>>>>>> way to enumerate a subset of CMA heaps?
> >>>>>>>
> >>>>>> When in staging, ION had to reach into the CMA framework, as the other
> >>>>>> direction would not be allowed, so cma_for_each_area() was added. If
> >>>>>> DMA-BUF heaps is not in staging then we can do the opposite, and have
> >>>>>> the CMA framework register heaps itself using our framework. That way
> >>>>>> the CMA system could decide what areas to export or not (maybe based on
> >>>>>> a DT property or similar).
> >>>>>
> >>>>> Ok. Though the CMA core doesn't have much sense of DT details either,
> >>>>> so it would probably have to be done in the reserved_mem logic, which
> >>>>> doesn't feel right to me.
> >>>>>
> >>>>> I'd probably guess we should have some sort of dt binding to describe
> >>>>> a dmabuf cma heap and from that node link to a CMA node via a
> >>>>> memory-region phandle. Along with maybe the default heap as well? Not
> >>>>> eager to get into another binding review cycle, and I'm not sure what
> >>>>> non-DT systems will do yet, but I'll take a shot at it and iterate.
> >>>>>
> >>>>>> The end result is the same so we can make this change later (it has to
> >>>>>> come after DMA-BUF heaps is in anyway).
> >>>>>
> >>>>> Well, I'm hesitant to merge code that exposes all the CMA heaps and
> >>>>> then add patches that become more selective, should anyone depend on
> >>>>> the initial behavior. :/
> >>>>
> >>>> How about only auto-adding the system default CMA region (cma->name ==
> >>>> "reserved")?
> >>>>
> >>>> And/or the CMA auto-add could be behind a config option? It seems a
> >>>> shame to further delay this, and the CMA heap itself really is useful.
> >>>>
> >>> A bit of a detour, coming back to the issue of why the following node
> >>> was not getting detected by the dma-buf heaps framework.
> >>>
> >>>         reserved-memory {
> >>>                 #address-cells = <2>;
> >>>                 #size-cells = <2>;
> >>>                 ranges;
> >>>
> >>>                 display_reserved: framebuffer@60000000 {
> >>>                         compatible = "shared-dma-pool";
> >>>                         linux,cma-default;
> >>>                         reusable; <<<<<<<<<<<<-----------This was missing in our
> >>> earlier node
> >>>                         reg = <0 0x60000000 0 0x08000000>;
> >>>                 };
> >>
> >> Right. It has to be a CMA region for us to expose it from the cma heap.
> >>
> >>
> >>> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> >>>
> >>> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> >>
> >> Is the value 0x60000000 you're using something you just guessed at? It
> >> seems like the warning here is saying the pfn calculated from the base
> >> address isn't valid.
> > It is a valid memory region we use to allocate framebuffers.
> 
> 
> But does it have a valid kernel virtual mapping? Most ARM systems (just
> assuming you are working on ARM :)) that I'm familiar with have the DRAM
> space starting at 0x80000000 and so don't start having valid pfns until
> that point. Is this address you are reserving an SRAM?
> 

Yeah, I think you've got it.

This region is DRAM on an FPGA expansion tile, but as you have noticed
it's "below" the start of main RAM, and I expect it's not in any of the
declared /memory/ nodes.

When "reusable" isn't there, I think we'll end up going the coherent.c
route, with dma_init_coherent_memory() setting up some pages.

If "reusable" is there, then I think we'll end up in contiguous.c and
that expects us to already have pages.

So, @Ayan, you could perhaps try adding this region as a /memory/ node
as-well, which should mean the kernel sets up some pages for it as
normal memory. But, I have some ancient recollection that the arm64
kernel couldn't handle system RAM at addresses below 0x80000000 or
something. That might be different now, I'm talking about several
years ago.

Thanks,
-Brian

> Andrew
> 
> 
> >>
> >> thanks
> >> -john

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION)
  2019-10-21  9:18                             ` Brian Starkey
@ 2019-10-22 13:51                               ` Ayan Halder
  0 siblings, 0 replies; 39+ messages in thread
From: Ayan Halder @ 2019-10-22 13:51 UTC (permalink / raw)
  To: Brian Starkey
  Cc: Andrew F. Davis, John Stultz, nd, Sudipto Paul,
	Vincent Donnefort, Chenbo Feng, Alistair Strachan, Liam Mark,
	lkml, Christoph Hellwig, DRI mailing list, Hridya Valsaraju,
	Pratik Patel

On Mon, Oct 21, 2019 at 09:18:07AM +0000, Brian Starkey wrote:
> On Sat, Oct 19, 2019 at 09:41:27AM -0400, Andrew F. Davis wrote:
> > On 10/18/19 2:57 PM, Ayan Halder wrote:
> > > On Fri, Oct 18, 2019 at 11:49:22AM -0700, John Stultz wrote:
> > >> On Fri, Oct 18, 2019 at 11:41 AM Ayan Halder <Ayan.Halder@arm.com> wrote:
> > >>> On Fri, Oct 18, 2019 at 09:55:17AM +0000, Brian Starkey wrote:
> > >>>> On Thu, Oct 17, 2019 at 01:57:45PM -0700, John Stultz wrote:
> > >>>>> On Thu, Oct 17, 2019 at 12:29 PM Andrew F. Davis <afd@ti.com> wrote:
> > >>>>>> On 10/17/19 3:14 PM, John Stultz wrote:
> > >>>>>>> But if the objection stands, do you have a proposal for an alternative
> > >>>>>>> way to enumerate a subset of CMA heaps?
> > >>>>>>>
> > >>>>>> When in staging, ION had to reach into the CMA framework, as the other
> > >>>>>> direction would not be allowed, so cma_for_each_area() was added. If
> > >>>>>> DMA-BUF heaps is not in staging then we can do the opposite, and have
> > >>>>>> the CMA framework register heaps itself using our framework. That way
> > >>>>>> the CMA system could decide what areas to export or not (maybe based on
> > >>>>>> a DT property or similar).
> > >>>>>
> > >>>>> Ok. Though the CMA core doesn't have much sense of DT details either,
> > >>>>> so it would probably have to be done in the reserved_mem logic, which
> > >>>>> doesn't feel right to me.
> > >>>>>
> > >>>>> I'd probably guess we should have some sort of dt binding to describe
> > >>>>> a dmabuf cma heap and from that node link to a CMA node via a
> > >>>>> memory-region phandle. Along with maybe the default heap as well? Not
> > >>>>> eager to get into another binding review cycle, and I'm not sure what
> > >>>>> non-DT systems will do yet, but I'll take a shot at it and iterate.
> > >>>>>
> > >>>>>> The end result is the same so we can make this change later (it has to
> > >>>>>> come after DMA-BUF heaps is in anyway).
> > >>>>>
> > >>>>> Well, I'm hesitant to merge code that exposes all the CMA heaps and
> > >>>>> then add patches that become more selective, should anyone depend on
> > >>>>> the initial behavior. :/
> > >>>>
> > >>>> How about only auto-adding the system default CMA region (cma->name ==
> > >>>> "reserved")?
> > >>>>
> > >>>> And/or the CMA auto-add could be behind a config option? It seems a
> > >>>> shame to further delay this, and the CMA heap itself really is useful.
> > >>>>
> > >>> A bit of a detour, coming back to the issue of why the following node
> > >>> was not getting detected by the dma-buf heaps framework.
> > >>>
> > >>>         reserved-memory {
> > >>>                 #address-cells = <2>;
> > >>>                 #size-cells = <2>;
> > >>>                 ranges;
> > >>>
> > >>>                 display_reserved: framebuffer@60000000 {
> > >>>                         compatible = "shared-dma-pool";
> > >>>                         linux,cma-default;
> > >>>                         reusable; <<<<<<<<<<<<-----------This was missing in our
> > >>> earlier node
> > >>>                         reg = <0 0x60000000 0 0x08000000>;
> > >>>                 };
> > >>
> > >> Right. It has to be a CMA region for us to expose it from the cma heap.
> > >>
> > >>
> > >>> With 'reusable', rmem_cma_setup() succeeds, but the kernel crashes as follows:
> > >>>
> > >>> [    0.450562] WARNING: CPU: 2 PID: 1 at mm/cma.c:110 cma_init_reserved_areas+0xec/0x22c
> > >>
> > >> Is the value 0x60000000 you're using something you just guessed at? It
> > >> seems like the warning here is saying the pfn calculated from the base
> > >> address isn't valid.
> > > It is a valid memory region we use to allocate framebuffers.
> > 
> > 
> > But does it have a valid kernel virtual mapping? Most ARM systems (just
> > assuming you are working on ARM :)) that I'm familiar with have the DRAM
> > space starting at 0x80000000 and so don't start having valid pfns until
> > that point. Is this address you are reserving an SRAM?
> > 
> 
> Yeah, I think you've got it.
> 
> This region is DRAM on an FPGA expansion tile, but as you have noticed
> it's "below" the start of main RAM, and I expect it's not in any of the
> declared /memory/ nodes.
> 
> When "reusable" isn't there, I think we'll end up going the coherent.c
> route, with dma_init_coherent_memory() setting up some pages.
> 
> If "reusable" is there, then I think we'll end up in contiguous.c and
> that expects us to already have pages.
> 
> So, @Ayan, you could perhaps try adding this region as a /memory/ node
> as-well, which should mean the kernel sets up some pages for it as
> normal memory. But, I have some ancient recollection that the arm64
> kernel couldn't handle system RAM at addresses below 0x80000000 or
> something. That might be different now, I'm talking about several
> years ago.
>
Thanks a lot for your suggestions.

I added the following node in the dts.

       memory@60000000 {
               device_type = "memory";
               reg = <0 0x60000000 0 0x08000000>;
       };

And kept the 'reusable' property in
        display_reserved:framebuffer@60000000 {...};

Now the kernel boots fine. I am able to get
/dev/dma_heap/framebuffer\@60000000. :)

> Thanks,
> -Brian
> 
> > Andrew
> > 
> > 
> > >>
> > >> thanks
> > >> -john

^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2019-10-22 13:51 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-06 18:47 [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) John Stultz
2019-09-06 18:47 ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
2019-09-23 22:08   ` Brian Starkey
2019-09-24 17:10     ` John Stultz
2019-09-06 18:47 ` [RESEND][PATCH v8 2/5] dma-buf: heaps: Add heap helpers John Stultz
2019-09-23 22:08   ` Brian Starkey
2019-09-06 18:47 ` [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
2019-09-23 22:09   ` Brian Starkey
2019-09-06 18:47 ` [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA " John Stultz
2019-09-23 22:10   ` Brian Starkey
2019-09-06 18:47 ` [RESEND][PATCH v8 5/5] kselftests: Add dma-heap test John Stultz
2019-09-23 22:11   ` Brian Starkey
2019-09-26 21:36     ` John Stultz
2019-09-27  9:20       ` Brian Starkey
2019-09-19 16:51 ` [RESEND][PATCH v8 0/5] DMA-BUF Heaps (destaging ION) Sumit Semwal
2019-09-24 16:22   ` Ayan Halder
2019-09-24 16:28     ` John Stultz
2019-10-09 17:37     ` Ayan Halder
2019-10-09 18:27       ` Andrew F. Davis
2019-10-14  9:07         ` Brian Starkey
2019-10-16 17:40           ` Andrew F. Davis
2019-10-17 19:14             ` John Stultz
2019-10-17 19:29               ` Andrew F. Davis
2019-10-17 20:57                 ` John Stultz
2019-10-18  9:55                   ` Brian Starkey
2019-10-18 18:33                     ` John Stultz
2019-10-18 18:41                     ` Ayan Halder
2019-10-18 18:49                       ` John Stultz
2019-10-18 18:57                         ` Ayan Halder
2019-10-18 19:04                           ` John Stultz
2019-10-19 13:41                           ` Andrew F. Davis
2019-10-21  9:18                             ` Brian Starkey
2019-10-22 13:51                               ` Ayan Halder
2019-10-18 18:51                       ` Ayan Halder
2019-10-16 17:34       ` John Stultz
2019-09-30 13:40 ` Laura Abbott
     [not found] ` <20190930074335.6636-1-hdanton@sina.com>
2019-10-01 20:50   ` [RESEND][PATCH v8 3/5] dma-buf: heaps: Add system heap to dmabuf heaps John Stultz
     [not found] ` <20190930032651.8264-1-hdanton@sina.com>
2019-10-02 16:14   ` [RESEND][PATCH v8 1/5] dma-buf: Add dma-buf heaps framework John Stultz
     [not found] ` <20190930081434.248-1-hdanton@sina.com>
2019-10-02 16:15   ` [RESEND][PATCH v8 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps John Stultz
