linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/5] Add dma-buf secure-heap
@ 2022-08-02  9:58 Olivier Masse
  2022-08-02  9:58 ` [PATCH 1/5] ANDROID: dma-buf: heaps: Add deferred-free-helper library code Olivier Masse
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

The purpose of these patches is to add a new dma-buf heap: linaro,secure-heap.
The Linaro OP-TEE OS Secure Data Path feature relies on a reserved memory
region defined both at the Linux kernel level and at the OP-TEE OS level.
On the Linux kernel side, the heap is managed through the dma-buf heaps
interface.
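
For reference, a minimal userspace sketch of allocating a buffer from such
a heap through the standard dma-heap ioctl could look as follows. The heap
name "sdp" is only an assumption taken from the example device-tree node in
patch 4, and error handling is kept to a minimum:

/* sketch: allocate a dma-buf fd from /dev/dma_heap/<heap-name> */
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/dma-heap.h>

static int alloc_secure_buf(size_t len)
{
	struct dma_heap_allocation_data data = {
		.len = len,			/* rounded up to PAGE_SIZE by the heap */
		.fd_flags = O_RDWR | O_CLOEXEC,
	};
	/* "sdp" is assumed from the example reserved-memory node name */
	int heap_fd = open("/dev/dma_heap/sdp", O_RDONLY | O_CLOEXEC);
	int ret;

	if (heap_fd < 0)
		return -1;
	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	return ret < 0 ? -1 : (int)data.fd;	/* dma-buf fd to share with devices */
}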

John Stultz (2):
  ANDROID: dma-buf: heaps: Add deferred-free-helper library code
  ANDROID: dma-buf: heaps: Add a shrinker controlled page pool

Olivier Masse (3):
  dma-buf: heaps: add Linaro secure dmabuf heap support
  dt-bindings: reserved-memory: add linaro,secure-heap
  plat-hikey: Add linaro,secure-heap compatible

 .../reserved-memory/linaro,secure-heap.yaml   |  56 ++
 .../arm64/boot/dts/hisilicon/hi6220-hikey.dts |  11 +
 arch/arm64/configs/defconfig                  |   4 +
 drivers/dma-buf/heaps/Kconfig                 |  19 +
 drivers/dma-buf/heaps/Makefile                |   3 +
 drivers/dma-buf/heaps/deferred-free-helper.c  | 136 ++++
 drivers/dma-buf/heaps/deferred-free-helper.h  |  63 ++
 drivers/dma-buf/heaps/page_pool.c             | 246 ++++++++
 drivers/dma-buf/heaps/page_pool.h             |  55 ++
 drivers/dma-buf/heaps/secure_heap.c           | 588 ++++++++++++++++++
 10 files changed, 1181 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h
 create mode 100644 drivers/dma-buf/heaps/page_pool.c
 create mode 100644 drivers/dma-buf/heaps/page_pool.h
 create mode 100644 drivers/dma-buf/heaps/secure_heap.c

-- 
2.25.0



* [PATCH 1/5] ANDROID: dma-buf: heaps: Add deferred-free-helper library code
  2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
@ 2022-08-02  9:58 ` Olivier Masse
  2022-08-02  9:58 ` [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool Olivier Masse
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

From: John Stultz <john.stultz@linaro.org>

This patch provides infrastructure for deferring buffer frees.

This is a feature ION provided which, when used with some form
of page pool, gives a nice performance boost in an allocation
microbenchmark. It helps because it allows the page zeroing to be
moved out of the normal allocation/free path and pushed off to a
kthread.

As not all heaps will find this useful, it is implemented as an
optional helper library that heaps can utilize.
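
As an illustration (not taken from an in-tree user), a heap embeds a
deferred_freelist_item in its buffer structure and queues it from its
dma-buf release path; everything named my_heap_* below is hypothetical:

struct my_heap_buffer {
	struct deferred_freelist_item deferred_free;
	/* pages, sg_table, ... */
};

static void my_heap_buf_free(struct deferred_freelist_item *item,
			     enum df_reason reason)
{
	struct my_heap_buffer *buf =
		container_of(item, struct my_heap_buffer, deferred_free);

	/* skip optional work, such as zeroing into a pool, under pressure */
	if (reason == DF_NORMAL)
		my_heap_zero_and_pool(buf);	/* hypothetical helper */
	else
		my_heap_release_now(buf);	/* hypothetical helper */
	kfree(buf);
}

/* in the dma-buf release handler: */
deferred_free(&buf->deferred_free, my_heap_buf_free, nr_pages);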

Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: Laura Abbott <labbott@kernel.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Daniel Mentz <danielmentz@google.com>
Cc: Ørjan Eide <orjan.eide@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Ezequiel Garcia <ezequiel@collabora.com>
Cc: Simon Ser <contact@emersion.fr>
Cc: James Jones <jajones@nvidia.com>
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
Bug: 168742043
---
 drivers/dma-buf/heaps/Kconfig                |   3 +
 drivers/dma-buf/heaps/Makefile               |   1 +
 drivers/dma-buf/heaps/deferred-free-helper.c | 136 +++++++++++++++++++
 drivers/dma-buf/heaps/deferred-free-helper.h |  63 +++++++++
 4 files changed, 203 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.c
 create mode 100644 drivers/dma-buf/heaps/deferred-free-helper.h

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 3782eeeb91c0..8ee64277a5d2 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,3 +1,6 @@
+config DMABUF_HEAPS_DEFERRED_FREE
+	tristate
+
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 29733f84c354..5de95b77e169 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_DMABUF_HEAPS_DEFERRED_FREE) += deferred-free-helper.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.c b/drivers/dma-buf/heaps/deferred-free-helper.c
new file mode 100644
index 000000000000..478faa319908
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Deferred dmabuf freeing helper
+ *
+ * Copyright (C) 2020 Linaro, Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/freezer.h>
+#include <linux/list.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+
+#include "deferred-free-helper.h"
+
+static LIST_HEAD(free_list);
+static size_t list_nr_pages;
+static wait_queue_head_t freelist_waitqueue;
+static struct task_struct *freelist_task;
+static DEFINE_SPINLOCK(free_list_lock);
+
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item*,
+				enum df_reason),
+		   size_t nr_pages)
+{
+	unsigned long flags;
+
+	if (!nr_pages)
+		return;
+
+	INIT_LIST_HEAD(&item->list);
+	item->nr_pages = nr_pages;
+	item->free = free;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	list_add(&item->list, &free_list);
+	list_nr_pages += nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	wake_up(&freelist_waitqueue);
+}
+EXPORT_SYMBOL_GPL(deferred_free);
+
+static size_t free_one_item(enum df_reason reason)
+{
+	unsigned long flags;
+	size_t nr_pages;
+	struct deferred_freelist_item *item;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	if (list_empty(&free_list)) {
+		spin_unlock_irqrestore(&free_list_lock, flags);
+		return 0;
+	}
+	item = list_first_entry(&free_list, struct deferred_freelist_item, list);
+	list_del(&item->list);
+	nr_pages = item->nr_pages;
+	list_nr_pages -= nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+
+	item->free(item, reason);
+	return nr_pages;
+}
+
+static unsigned long get_freelist_nr_pages(void)
+{
+	unsigned long nr_pages;
+	unsigned long flags;
+
+	spin_lock_irqsave(&free_list_lock, flags);
+	nr_pages = list_nr_pages;
+	spin_unlock_irqrestore(&free_list_lock, flags);
+	return nr_pages;
+}
+
+static unsigned long freelist_shrink_count(struct shrinker *shrinker,
+					   struct shrink_control *sc)
+{
+	return get_freelist_nr_pages();
+}
+
+static unsigned long freelist_shrink_scan(struct shrinker *shrinker,
+					  struct shrink_control *sc)
+{
+	unsigned long total_freed = 0;
+
+	while (total_freed < sc->nr_to_scan) {
+		size_t pages_freed = free_one_item(DF_UNDER_PRESSURE);
+
+		if (!pages_freed)
+			break;
+
+		total_freed += pages_freed;
+	}
+
+	return total_freed;
+}
+
+static struct shrinker freelist_shrinker = {
+	.count_objects = freelist_shrink_count,
+	.scan_objects = freelist_shrink_scan,
+	.seeks = DEFAULT_SEEKS,
+	.batch = 0,
+};
+
+static int deferred_free_thread(void *data)
+{
+	while (true) {
+		wait_event_freezable(freelist_waitqueue,
+				     get_freelist_nr_pages() > 0);
+
+		free_one_item(DF_NORMAL);
+	}
+
+	return 0;
+}
+
+static int deferred_freelist_init(void)
+{
+	init_waitqueue_head(&freelist_waitqueue);
+	freelist_task = kthread_run(deferred_free_thread, NULL,
+				    "%s", "dmabuf-deferred-free-worker");
+	if (IS_ERR(freelist_task)) {
+		pr_err("Creating thread for deferred free failed\n");
+		return PTR_ERR(freelist_task);
+	}
+	sched_set_normal(freelist_task, 19);
+
+	return register_shrinker(&freelist_shrinker);
+}
+module_init(deferred_freelist_init);
+MODULE_LICENSE("GPL v2");
+
diff --git a/drivers/dma-buf/heaps/deferred-free-helper.h b/drivers/dma-buf/heaps/deferred-free-helper.h
new file mode 100644
index 000000000000..4ed5893fdf3a
--- /dev/null
+++ b/drivers/dma-buf/heaps/deferred-free-helper.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Deferred dmabuf freeing helper
+ *
+ * Copyright (C) 2020 Linaro, Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#ifndef DEFERRED_FREE_HELPER_H
+#define DEFERRED_FREE_HELPER_H
+
+/**
+ * df_reason - enum for reason why item was freed
+ *
+ * This provides a reason for why the free function was called
+ * on the item. This is useful when deferred_free is used in
+ * combination with a pagepool, so under pressure the page can
+ * be immediately freed.
+ *
+ * DF_NORMAL:         Normal deferred free
+ *
+ * DF_UNDER_PRESSURE: Free was called because the system
+ *                    is under memory pressure. Usually
+ *                    from a shrinker. Avoid allocating
+ *                    memory in the free call, as it may
+ *                    fail.
+ */
+enum df_reason {
+	DF_NORMAL,
+	DF_UNDER_PRESSURE,
+};
+
+/**
+ * deferred_freelist_item - item structure for deferred freelist
+ *
+ * This is to be added to the structure for whatever you want to
+ * defer freeing on.
+ *
+ * @nr_pages: number of pages used by item to be freed
+ * @free: function pointer to be called when freeing the item
+ * @list: list entry for the deferred list
+ */
+struct deferred_freelist_item {
+	size_t nr_pages;
+	void (*free)(struct deferred_freelist_item *i,
+		     enum df_reason reason);
+	struct list_head list;
+};
+
+/**
+ * deferred_free - add item to the deferred free list
+ *
+ * @item: Pointer to deferred_freelist_item field of a structure
+ * @free: Function pointer to the free call
+ * @nr_pages: number of pages to be freed
+ */
+void deferred_free(struct deferred_freelist_item *item,
+		   void (*free)(struct deferred_freelist_item *i,
+				enum df_reason reason),
+		   size_t nr_pages);
+#endif
-- 
2.25.0



* [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool
  2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
  2022-08-02  9:58 ` [PATCH 1/5] ANDROID: dma-buf: heaps: Add deferred-free-helper library code Olivier Masse
@ 2022-08-02  9:58 ` Olivier Masse
  2022-08-03 12:40   ` Brian Starkey
  2022-08-02  9:58 ` [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support Olivier Masse
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

From: John Stultz <john.stultz@linaro.org>

This patch adds a simple shrinker-controlled page pool to the
dmabuf heaps subsystem.

This replaces the use of the networking page_pool, over concerns
that the lack of a shrinker in that implementation may cause
additional low-memory kills.

TODO: Take another pass at trying to unify this w/ the ttm pool

Thoughts and feedback would be greatly appreciated!
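
For context, the heap-side usage is expected to look roughly like the
sketch below; only the dmabuf_page_pool_* calls are from this patch, the
my_heap_* wrappers are made up for illustration:

static struct dmabuf_page_pool *my_pool;

static int my_heap_pool_init(void)
{
	/* order-0 pool; freed pages are cached here instead of released */
	my_pool = dmabuf_page_pool_create(GFP_KERNEL, 0);
	return my_pool ? 0 : -ENOMEM;
}

static struct page *my_heap_get_page(void)
{
	/* reuses a pooled page if available, else falls back to alloc_pages() */
	return dmabuf_page_pool_alloc(my_pool);
}

static void my_heap_put_page(struct page *page)
{
	/* the page stays in the pool until the shrinker reclaims it */
	dmabuf_page_pool_free(my_pool, page);
}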

Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: Liam Mark <lmark@codeaurora.org>
Cc: Laura Abbott <labbott@kernel.org>
Cc: Brian Starkey <Brian.Starkey@arm.com>
Cc: Hridya Valsaraju <hridya@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Sandeep Patil <sspatil@google.com>
Cc: Daniel Mentz <danielmentz@google.com>
Cc: Chris Goldsworthy <cgoldswo@codeaurora.org>
Cc: Ørjan Eide <orjan.eide@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Ezequiel Garcia <ezequiel@collabora.com>
Cc: Simon Ser <contact@emersion.fr>
Cc: James Jones <jajones@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: linux-mm@kvack.org
Cc: linux-media@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
Bug: 168742043
---
 drivers/dma-buf/heaps/Kconfig     |   3 +
 drivers/dma-buf/heaps/Makefile    |   1 +
 drivers/dma-buf/heaps/page_pool.c | 246 ++++++++++++++++++++++++++++++
 drivers/dma-buf/heaps/page_pool.h |  55 +++++++
 4 files changed, 305 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/page_pool.c
 create mode 100644 drivers/dma-buf/heaps/page_pool.h

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 8ee64277a5d2..6a33193a7b3e 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,6 +1,9 @@
 config DMABUF_HEAPS_DEFERRED_FREE
 	tristate
 
+config DMABUF_HEAPS_PAGE_POOL
+	tristate
+
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
 	depends on DMABUF_HEAPS
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 5de95b77e169..e70722ea615e 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_DEFERRED_FREE) += deferred-free-helper.o
+obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)	+= page_pool.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
diff --git a/drivers/dma-buf/heaps/page_pool.c b/drivers/dma-buf/heaps/page_pool.c
new file mode 100644
index 000000000000..3dd4c3862dca
--- /dev/null
+++ b/drivers/dma-buf/heaps/page_pool.c
@@ -0,0 +1,246 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA BUF page pool system
+ *
+ * Copyright (C) 2020 Linaro Ltd.
+ *
+ * Based on the ION page pool code
+ * Copyright (C) 2011 Google, Inc.
+ */
+
+#include <linux/freezer.h>
+#include <linux/list.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
+#include <linux/swap.h>
+#include "page_pool.h"
+
+static LIST_HEAD(pool_list);
+static DEFINE_MUTEX(pool_list_lock);
+
+static struct page *dmabuf_page_pool_alloc_pages(struct dmabuf_page_pool *pool)
+{
+	if (fatal_signal_pending(current))
+		return NULL;
+	return alloc_pages(pool->gfp_mask, pool->order);
+}
+
+static void dmabuf_page_pool_free_pages(struct dmabuf_page_pool *pool,
+					       struct page *page)
+{
+	__free_pages(page, pool->order);
+}
+
+static void dmabuf_page_pool_add(struct dmabuf_page_pool *pool, struct page *page)
+{
+	int index;
+
+	if (PageHighMem(page))
+		index = POOL_HIGHPAGE;
+	else
+		index = POOL_LOWPAGE;
+
+	mutex_lock(&pool->mutex);
+	list_add_tail(&page->lru, &pool->items[index]);
+	pool->count[index]++;
+	mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+			    1 << pool->order);
+	mutex_unlock(&pool->mutex);
+}
+
+static struct page *dmabuf_page_pool_remove(struct dmabuf_page_pool *pool, int index)
+{
+	struct page *page;
+
+	mutex_lock(&pool->mutex);
+	page = list_first_entry_or_null(&pool->items[index], struct page, lru);
+	if (page) {
+		pool->count[index]--;
+		list_del(&page->lru);
+		mod_node_page_state(page_pgdat(page), NR_KERNEL_MISC_RECLAIMABLE,
+				    -(1 << pool->order));
+	}
+	mutex_unlock(&pool->mutex);
+
+	return page;
+}
+
+static struct page *dmabuf_page_pool_fetch(struct dmabuf_page_pool *pool)
+{
+	struct page *page = NULL;
+
+	page = dmabuf_page_pool_remove(pool, POOL_HIGHPAGE);
+	if (!page)
+		page = dmabuf_page_pool_remove(pool, POOL_LOWPAGE);
+
+	return page;
+}
+
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool)
+{
+	struct page *page = NULL;
+
+	if (WARN_ON(!pool))
+		return NULL;
+
+	page = dmabuf_page_pool_fetch(pool);
+	if (!page)
+		page = dmabuf_page_pool_alloc_pages(pool);
+
+	return page;
+}
+EXPORT_SYMBOL_GPL(dmabuf_page_pool_alloc);
+
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page)
+{
+	if (WARN_ON(pool->order != compound_order(page)))
+		return;
+
+	dmabuf_page_pool_add(pool, page);
+}
+EXPORT_SYMBOL_GPL(dmabuf_page_pool_free);
+
+static int dmabuf_page_pool_total(struct dmabuf_page_pool *pool, bool high)
+{
+	int count = pool->count[POOL_LOWPAGE];
+
+	if (high)
+		count += pool->count[POOL_HIGHPAGE];
+
+	return count << pool->order;
+}
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask, unsigned int order)
+{
+	struct dmabuf_page_pool *pool = kmalloc(sizeof(*pool), GFP_KERNEL);
+	int i;
+
+	if (!pool)
+		return NULL;
+
+	for (i = 0; i < POOL_TYPE_SIZE; i++) {
+		pool->count[i] = 0;
+		INIT_LIST_HEAD(&pool->items[i]);
+	}
+	pool->gfp_mask = gfp_mask | __GFP_COMP;
+	pool->order = order;
+	mutex_init(&pool->mutex);
+
+	mutex_lock(&pool_list_lock);
+	list_add(&pool->list, &pool_list);
+	mutex_unlock(&pool_list_lock);
+
+	return pool;
+}
+EXPORT_SYMBOL_GPL(dmabuf_page_pool_create);
+
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool)
+{
+	struct page *page;
+	int i;
+
+	/* Remove us from the pool list */
+	mutex_lock(&pool_list_lock);
+	list_del(&pool->list);
+	mutex_unlock(&pool_list_lock);
+
+	/* Free any remaining pages in the pool */
+	for (i = 0; i < POOL_TYPE_SIZE; i++) {
+		while ((page = dmabuf_page_pool_remove(pool, i)))
+			dmabuf_page_pool_free_pages(pool, page);
+	}
+
+	kfree(pool);
+}
+EXPORT_SYMBOL_GPL(dmabuf_page_pool_destroy);
+
+static int dmabuf_page_pool_do_shrink(struct dmabuf_page_pool *pool, gfp_t gfp_mask,
+				      int nr_to_scan)
+{
+	int freed = 0;
+	bool high;
+
+	if (current_is_kswapd())
+		high = true;
+	else
+		high = !!(gfp_mask & __GFP_HIGHMEM);
+
+	if (nr_to_scan == 0)
+		return dmabuf_page_pool_total(pool, high);
+
+	while (freed < nr_to_scan) {
+		struct page *page;
+
+		/* Try to free low pages first */
+		page = dmabuf_page_pool_remove(pool, POOL_LOWPAGE);
+		if (!page)
+			page = dmabuf_page_pool_remove(pool, POOL_HIGHPAGE);
+
+		if (!page)
+			break;
+
+		dmabuf_page_pool_free_pages(pool, page);
+		freed += (1 << pool->order);
+	}
+
+	return freed;
+}
+
+static int dmabuf_page_pool_shrink(gfp_t gfp_mask, int nr_to_scan)
+{
+	struct dmabuf_page_pool *pool;
+	int nr_total = 0;
+	int nr_freed;
+	bool only_scan = false;
+
+	if (!nr_to_scan)
+		only_scan = true;
+
+	mutex_lock(&pool_list_lock);
+	list_for_each_entry(pool, &pool_list, list) {
+		if (only_scan) {
+			nr_total += dmabuf_page_pool_do_shrink(pool,
+							       gfp_mask,
+							       nr_to_scan);
+		} else {
+			nr_freed = dmabuf_page_pool_do_shrink(pool,
+							      gfp_mask,
+							      nr_to_scan);
+			nr_to_scan -= nr_freed;
+			nr_total += nr_freed;
+			if (nr_to_scan <= 0)
+				break;
+		}
+	}
+	mutex_unlock(&pool_list_lock);
+
+	return nr_total;
+}
+
+static unsigned long dmabuf_page_pool_shrink_count(struct shrinker *shrinker,
+						   struct shrink_control *sc)
+{
+	return dmabuf_page_pool_shrink(sc->gfp_mask, 0);
+}
+
+static unsigned long dmabuf_page_pool_shrink_scan(struct shrinker *shrinker,
+						  struct shrink_control *sc)
+{
+	if (sc->nr_to_scan == 0)
+		return 0;
+	return dmabuf_page_pool_shrink(sc->gfp_mask, sc->nr_to_scan);
+}
+
+struct shrinker pool_shrinker = {
+	.count_objects = dmabuf_page_pool_shrink_count,
+	.scan_objects = dmabuf_page_pool_shrink_scan,
+	.seeks = DEFAULT_SEEKS,
+	.batch = 0,
+};
+
+static int dmabuf_page_pool_init_shrinker(void)
+{
+	return register_shrinker(&pool_shrinker);
+}
+module_init(dmabuf_page_pool_init_shrinker);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma-buf/heaps/page_pool.h b/drivers/dma-buf/heaps/page_pool.h
new file mode 100644
index 000000000000..e3ec9eaacbc2
--- /dev/null
+++ b/drivers/dma-buf/heaps/page_pool.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMA BUF PagePool implementation
+ * Based on earlier ION code by Google
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2020 Linaro Ltd.
+ */
+
+#ifndef _DMABUF_PAGE_POOL_H
+#define _DMABUF_PAGE_POOL_H
+
+#include <linux/device.h>
+#include <linux/kref.h>
+#include <linux/mm_types.h>
+#include <linux/mutex.h>
+#include <linux/shrinker.h>
+#include <linux/types.h>
+
+/* page types we track in the pool */
+enum {
+	POOL_LOWPAGE,      /* Clean lowmem pages */
+	POOL_HIGHPAGE,     /* Clean highmem pages */
+
+	POOL_TYPE_SIZE
+};
+
+/**
+ * struct dmabuf_page_pool - pagepool struct
+ * @count[]:		array of number of pages of that type in the pool
+ * @items[]:		array of list of pages of the specific type
+ * @mutex:		lock protecting this struct and especially the count
+ *			item list
+ * @gfp_mask:		gfp_mask to use from alloc
+ * @order:		order of pages in the pool
+ * @list:		list node for list of pools
+ *
+ * Allows you to keep a pool of pre allocated pages to use
+ */
+struct dmabuf_page_pool {
+	int count[POOL_TYPE_SIZE];
+	struct list_head items[POOL_TYPE_SIZE];
+	struct mutex mutex;
+	gfp_t gfp_mask;
+	unsigned int order;
+	struct list_head list;
+};
+
+struct dmabuf_page_pool *dmabuf_page_pool_create(gfp_t gfp_mask,
+						 unsigned int order);
+void dmabuf_page_pool_destroy(struct dmabuf_page_pool *pool);
+struct page *dmabuf_page_pool_alloc(struct dmabuf_page_pool *pool);
+void dmabuf_page_pool_free(struct dmabuf_page_pool *pool, struct page *page);
+
+#endif /* _DMABUF_PAGE_POOL_H */
-- 
2.25.0



* [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
  2022-08-02  9:58 ` [PATCH 1/5] ANDROID: dma-buf: heaps: Add deferred-free-helper library code Olivier Masse
  2022-08-02  9:58 ` [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool Olivier Masse
@ 2022-08-02  9:58 ` Olivier Masse
  2022-08-02 14:39   ` Brian Starkey
  2022-08-16 13:31   ` Nicolas Dufresne
  2022-08-02  9:58 ` [PATCH 4/5] dt-bindings: reserved-memory: add linaro,secure-heap Olivier Masse
  2022-08-02  9:58 ` [PATCH 5/5] plat-hikey: Add linaro,secure-heap compatible Olivier Masse
  4 siblings, 2 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

Add the Linaro secure dmabuf heap, using the "linaro,secure-heap"
reserved-memory compatible.
Use genalloc to allocate and free buffers from a pool whose base address
and size come from the device tree.
The allocated memory is described by an sg_table with a single entry.
Implement secure_heap_buf_ops so that a buffer can be shared between
different devices:
1. Userspace passes the dma-buf fd to all drivers it wants the buffer
shared with: the file descriptor is first converted to a &dma_buf using
dma_buf_get(), then the buffer is attached to the device using
dma_buf_attach().
2. Once the buffer is attached to all devices, userspace can initiate
DMA access to the shared buffer. In the kernel this is done by calling
dma_buf_map_attachment().
3. Each importing device then retrieves the sg_table via
dma_buf_map_attachment().
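
A minimal importer-side sketch of this flow, using only existing dma-buf
core calls (cleanup paths abbreviated, "dev" being the importing device):

	struct dma_buf *dmabuf = dma_buf_get(fd);  /* fd from userspace */
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}
	/* sgt->sgl now describes the (single-entry) secure buffer for DMA */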

Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
---
 drivers/dma-buf/heaps/Kconfig       |  21 +-
 drivers/dma-buf/heaps/Makefile      |   1 +
 drivers/dma-buf/heaps/secure_heap.c | 588 ++++++++++++++++++++++++++++
 3 files changed, 606 insertions(+), 4 deletions(-)
 create mode 100644 drivers/dma-buf/heaps/secure_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 6a33193a7b3e..b2406932192e 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -1,8 +1,12 @@
-config DMABUF_HEAPS_DEFERRED_FREE
-	tristate
+menuconfig DMABUF_HEAPS_DEFERRED_FREE
+	bool "DMA-BUF heaps deferred-free library"
+	help
+	  Choose this option to enable the DMA-BUF heaps deferred-free library.
 
-config DMABUF_HEAPS_PAGE_POOL
-	tristate
+menuconfig DMABUF_HEAPS_PAGE_POOL
+	bool "DMA-BUF heaps page-pool library"
+	help
+	  Choose this option to enable the DMA-BUF heaps page-pool library.
 
 config DMABUF_HEAPS_SYSTEM
 	bool "DMA-BUF System Heap"
@@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
           Choose this option to enable the dsp dmabuf heap. The dsp heap
           is allocated by gen allocater. it's allocated according the dts.
           If in doubt, say Y.
+
+config DMABUF_HEAPS_SECURE
+	tristate "DMA-BUF Secure Heap"
+	depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
+	help
+	  Choose this option to enable the secure dmabuf heap. The secure heap
+	  pools are defined according to the DT. Heaps are allocated
+	  pools are defined according to the DT. Buffers are allocated from
+	  the pools using the gen allocator.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index e70722ea615e..08f6aa5919d1 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)	+= page_pool.o
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_SECURE)	+= secure_heap.o
diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-buf/heaps/secure_heap.c
new file mode 100644
index 000000000000..31aac5d050b4
--- /dev/null
+++ b/drivers/dma-buf/heaps/secure_heap.c
@@ -0,0 +1,588 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF secure heap exporter
+ *
+ * Copyright 2021 NXP.
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/genalloc.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+
+#include "deferred-free-helper.h"
+#include "page_pool.h"
+
+#define MAX_SECURE_HEAP 2
+#define MAX_HEAP_NAME_LEN 32
+
+struct secure_heap_buffer {
+	struct dma_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	unsigned long len;
+	struct sg_table sg_table;
+	int vmap_cnt;
+	struct deferred_freelist_item deferred_free;
+	void *vaddr;
+	bool uncached;
+};
+
+struct dma_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	bool no_map;
+	bool mapped;
+	bool uncached;
+};
+
+struct secure_heap_info {
+	struct gen_pool *pool;
+
+	bool no_map;
+};
+
+struct rmem_secure {
+	phys_addr_t base;
+	phys_addr_t size;
+
+	char name[MAX_HEAP_NAME_LEN];
+
+	bool no_map;
+};
+
+static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
+static unsigned int secure_data_count;
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg->dma_address = sg->dma_address;
+#ifdef CONFIG_NEED_SG_DMA_LENGTH
+		new_sg->dma_length = sg->dma_length;
+#endif
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int secure_heap_attach(struct dma_buf *dmabuf,
+			      struct dma_buf_attachment *attachment)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	struct secure_heap_info *info = dma_heap_get_drvdata(buffer->heap);
+	struct dma_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+	INIT_LIST_HEAD(&a->list);
+	a->no_map = info->no_map;
+	a->mapped = false;
+	a->uncached = buffer->uncached;
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void secure_heap_detach(struct dma_buf *dmabuf,
+			       struct dma_buf_attachment *attachment)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *secure_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+						enum dma_data_direction direction)
+{
+	struct dma_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int attr = 0;
+	int ret;
+
+	if (!a->no_map) {
+		if (a->uncached)
+			attr = DMA_ATTR_SKIP_CPU_SYNC;
+
+		ret = dma_map_sgtable(attachment->dev, table, direction, attr);
+		if (ret)
+			return ERR_PTR(ret);
+
+		a->mapped = true;
+	}
+
+	return table;
+}
+
+static void secure_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				      struct sg_table *table,
+				      enum dma_data_direction direction)
+{
+	struct dma_heap_attachment *a = attachment->priv;
+	int attr = 0;
+
+	if (!a->no_map)	{
+		if (a->uncached)
+			attr = DMA_ATTR_SKIP_CPU_SYNC;
+
+		a->mapped = false;
+		dma_unmap_sgtable(attachment->dev, table, direction, attr);
+	}
+}
+
+static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+						enum dma_data_direction direction)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	if (!buffer->uncached) {
+		list_for_each_entry(a, &buffer->attachments, list) {
+			if (!a->mapped)
+				continue;
+			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+		}
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int secure_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					      enum dma_data_direction direction)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	struct dma_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	if (!buffer->uncached) {
+		list_for_each_entry(a, &buffer->attachments, list) {
+			if (!a->mapped)
+				continue;
+			dma_sync_sgtable_for_device(a->dev, a->table, direction);
+		}
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int secure_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	if (buffer->uncached)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+	}
+	return 0;
+}
+
+static void *secure_heap_do_vmap(struct secure_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	if (buffer->uncached)
+		pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, pgprot);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int secure_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+	int ret = 0;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		buffer->vmap_cnt++;
+		goto out;
+	}
+
+	vaddr = secure_heap_do_vmap(buffer);
+	if (IS_ERR(vaddr)) {
+		ret = PTR_ERR(vaddr);
+		goto out;
+	}
+
+	buffer->vaddr = vaddr;
+	buffer->vmap_cnt++;
+	dma_buf_map_set_vaddr(map, buffer->vaddr);
+out:
+	mutex_unlock(&buffer->lock);
+
+	return ret;
+}
+
+static void secure_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+	dma_buf_map_clear(map);
+}
+
+static void secure_heap_zero_buffer(struct secure_heap_buffer *buffer)
+{
+	struct sg_table *sgt = &buffer->sg_table;
+	struct sg_page_iter piter;
+	struct page *p;
+	void *vaddr;
+
+	for_each_sgtable_page(sgt, &piter, 0) {
+		p = sg_page_iter_page(&piter);
+		vaddr = kmap_atomic(p);
+		memset(vaddr, 0, PAGE_SIZE);
+		kunmap_atomic(vaddr);
+	}
+}
+
+static void secure_heap_buf_free(struct deferred_freelist_item *item,
+				 enum df_reason reason)
+{
+	struct secure_heap_buffer *buffer;
+	struct secure_heap_info *info;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	buffer = container_of(item, struct secure_heap_buffer, deferred_free);
+	info = dma_heap_get_drvdata(buffer->heap);
+
+	if (!info->no_map) {
+		// Zero the buffer pages before adding back to the pool
+		if (reason == DF_NORMAL)
+			secure_heap_zero_buffer(buffer);
+	}
+
+	table = &buffer->sg_table;
+	for_each_sg(table->sgl, sg, table->nents, i)
+		gen_pool_free(info->pool, sg_dma_address(sg), sg_dma_len(sg));
+
+	sg_free_table(table);
+	kfree(buffer);
+}
+
+static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct secure_heap_buffer *buffer = dmabuf->priv;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+
+	deferred_free(&buffer->deferred_free, secure_heap_buf_free, npages);
+}
+
+static const struct dma_buf_ops secure_heap_buf_ops = {
+	.attach = secure_heap_attach,
+	.detach = secure_heap_detach,
+	.map_dma_buf = secure_heap_map_dma_buf,
+	.unmap_dma_buf = secure_heap_unmap_dma_buf,
+	.begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = secure_heap_dma_buf_end_cpu_access,
+	.mmap = secure_heap_mmap,
+	.vmap = secure_heap_vmap,
+	.vunmap = secure_heap_vunmap,
+	.release = secure_heap_dma_buf_release,
+};
+
+static struct dma_buf *secure_heap_do_allocate(struct dma_heap *heap,
+					       unsigned long len,
+					       unsigned long fd_flags,
+					       unsigned long heap_flags,
+					       bool uncached)
+{
+	struct secure_heap_buffer *buffer;
+	struct secure_heap_info *info = dma_heap_get_drvdata(heap);
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	unsigned long size = roundup(len, PAGE_SIZE);
+	struct dma_buf *dmabuf;
+	struct sg_table *table;
+	int ret = -ENOMEM;
+	unsigned long phy_addr;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = heap;
+	buffer->len = size;
+	buffer->uncached = uncached;
+
+	phy_addr = gen_pool_alloc(info->pool, size);
+	if (!phy_addr)
+		goto free_buffer;
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, 1, GFP_KERNEL))
+		goto free_pool;
+
+	sg_set_page(table->sgl, phys_to_page(phy_addr), size, 0);
+	sg_dma_address(table->sgl) = phy_addr;
+	sg_dma_len(table->sgl) = size;
+
+	/* create the dmabuf */
+	exp_info.exp_name = dma_heap_get_name(heap);
+	exp_info.ops = &secure_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto free_pages;
+	}
+
+	return dmabuf;
+
+free_pages:
+	sg_free_table(table);
+
+free_pool:
+	gen_pool_free(info->pool, phy_addr, size);
+
+free_buffer:
+	mutex_destroy(&buffer->lock);
+	kfree(buffer);
+
+	return ERR_PTR(ret);
+}
+
+static struct dma_buf *secure_heap_allocate(struct dma_heap *heap,
+					    unsigned long len,
+					    unsigned long fd_flags,
+					    unsigned long heap_flags)
+{
+	// use uncache buffer here by default
+	return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, true);
+	// use cache buffer
+	// return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, false);
+}
+
+static const struct dma_heap_ops secure_heap_ops = {
+	.allocate = secure_heap_allocate,
+};
+
+static int secure_heap_add(struct rmem_secure *rmem)
+{
+	struct dma_heap *secure_heap;
+	struct dma_heap_export_info exp_info;
+	struct secure_heap_info *info = NULL;
+	struct gen_pool *pool = NULL;
+	int ret = -EINVAL;
+
+	if (rmem->base == 0 || rmem->size == 0) {
+		pr_err("secure_data base or size is not correct\n");
+		goto error;
+	}
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info) {
+		pr_err("dmabuf info allocation failed\n");
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	pool = gen_pool_create(PAGE_SHIFT, -1);
+	if (!pool) {
+		pr_err("can't create gen pool\n");
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
+		pr_err("failed to add memory into pool\n");
+		ret = -ENOMEM;
+		goto error;
+	}
+
+	info->pool = pool;
+	info->no_map = rmem->no_map;
+
+	exp_info.name = rmem->name;
+	exp_info.ops = &secure_heap_ops;
+	exp_info.priv = info;
+
+	secure_heap = dma_heap_add(&exp_info);
+	if (IS_ERR(secure_heap)) {
+		pr_err("dmabuf secure heap allocation failed\n");
+		ret = PTR_ERR(secure_heap);
+		goto error;
+	}
+
+	return 0;
+
+error:
+	kfree(info);
+	if (pool)
+		gen_pool_destroy(pool);
+
+	return ret;
+}
+
+static int secure_heap_create(void)
+{
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < secure_data_count; i++) {
+		ret = secure_heap_add(&secure_data[i]);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+static int rmem_secure_heap_device_init(struct reserved_mem *rmem,
+					 struct device *dev)
+{
+	dev_set_drvdata(dev, rmem);
+	return 0;
+}
+
+static void rmem_secure_heap_device_release(struct reserved_mem *rmem,
+					 struct device *dev)
+{
+	dev_set_drvdata(dev, NULL);
+}
+
+static const struct reserved_mem_ops rmem_dma_ops = {
+	.device_init    = rmem_secure_heap_device_init,
+	.device_release = rmem_secure_heap_device_release,
+};
+
+static int __init rmem_secure_heap_setup(struct reserved_mem *rmem)
+{
+	if (secure_data_count < MAX_SECURE_HEAP) {
+		int name_len = 0;
+		char *s = rmem->name;
+
+		secure_data[secure_data_count].base = rmem->base;
+		secure_data[secure_data_count].size = rmem->size;
+		secure_data[secure_data_count].no_map =
+			(of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL) != NULL);
+
+		while (name_len < MAX_HEAP_NAME_LEN) {
+			if ((*s == '@') || (*s == '\0'))
+				break;
+			name_len++;
+			s++;
+		}
+		if (name_len == MAX_HEAP_NAME_LEN)
+			name_len--;
+
+		strncpy(secure_data[secure_data_count].name, rmem->name, name_len);
+
+		rmem->ops = &rmem_dma_ops;
+		pr_info("Reserved memory: DMA buf secure pool %s at %pa, size %ld MiB\n",
+			secure_data[secure_data_count].name,
+			&rmem->base, (unsigned long)rmem->size / SZ_1M);
+
+		secure_data_count++;
+		return 0;
+	}
+	WARN_ONCE(1, "Cannot handle more than %u secure heaps\n", MAX_SECURE_HEAP);
+	return -EINVAL;
+}
+
+RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap", rmem_secure_heap_setup);
+
+module_init(secure_heap_create);
+MODULE_LICENSE("GPL v2");
-- 
2.25.0



* [PATCH 4/5] dt-bindings: reserved-memory: add linaro,secure-heap
  2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
                   ` (2 preceding siblings ...)
  2022-08-02  9:58 ` [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support Olivier Masse
@ 2022-08-02  9:58 ` Olivier Masse
  2022-08-02  9:58 ` [PATCH 5/5] plat-hikey: Add linaro,secure-heap compatible Olivier Masse
  4 siblings, 0 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

DMA-BUF reserved memory definition for the OP-TEE SDP feature.

Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
---
 .../reserved-memory/linaro,secure-heap.yaml   | 56 +++++++++++++++++++
 1 file changed, 56 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml b/Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml
new file mode 100644
index 000000000000..80522a4e2989
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/linaro,secure-heap.yaml
@@ -0,0 +1,56 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/linaro,secure-heap.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Linaro Secure DMABUF Heap
+
+maintainers:
+  - Olivier Masse <olivier.masse@nxp.com>
+
+description:
+  Linaro OP-TEE firmware needs a reserved memory region for the
+  Secure Data Path feature (aka SDP).
+  The purpose is to provide a secure memory heap which allows the
+  non-secure OS to allocate/free secure buffers.
+  The TEE is responsible for protecting the SDP memory buffers.
+  A TEE Trusted Application can access secure memory references
+  provided as parameters (DMABUF file descriptors).
+
+allOf:
+  - $ref: "reserved-memory.yaml"
+
+properties:
+  compatible:
+    const: linaro,secure-heap
+
+  reg:
+    description:
+      Region of memory reserved for OP-TEE SDP feature
+
+  no-map:
+    $ref: /schemas/types.yaml#/definitions/flag
+    description:
+      Avoid creating a virtual mapping of the region as part of the OS'
+      standard mapping of system memory.
+
+unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - no-map
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <2>;
+
+        sdp@3e800000 {
+            compatible = "linaro,secure-heap";
+            no-map;
+            reg = <0 0x3E800000 0 0x00400000>;
+        };
+    };
-- 
2.25.0



* [PATCH 5/5] plat-hikey: Add linaro,secure-heap compatible
  2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
                   ` (3 preceding siblings ...)
  2022-08-02  9:58 ` [PATCH 4/5] dt-bindings: reserved-memory: add linaro,secure-heap Olivier Masse
@ 2022-08-02  9:58 ` Olivier Masse
  4 siblings, 0 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-02  9:58 UTC (permalink / raw)
  To: sumit.semwal, benjamin.gaignard, Brian.Starkey, christian.koenig,
	linux-media, dri-devel, linaro-mm-sig, linux-kernel
  Cc: clement.faure, olivier.masse

Add a "linaro,secure-heap" reserved-memory node to the HiKey device tree
and enable DMABUF_HEAPS_SECURE in the arm64 defconfig.

Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
---
 arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts | 11 +++++++++++
 arch/arm64/configs/defconfig                   |  4 ++++
 2 files changed, 15 insertions(+)

diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
index 3df2afb2f637..e4af0d914279 100644
--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
@@ -258,6 +258,17 @@ optee {
 		};
 	};
 
+	reserved-memory {
+		#address-cells = <2>;
+		#size-cells = <2>;
+
+		sdp@3e800000 {
+			compatible = "linaro,secure-heap";
+			no-map;
+			reg = <0 0x3E800000 0 0x00400000>;
+		};
+	};
+
 	sound_card {
 		compatible = "audio-graph-card";
 		dais = <&i2s0_port0>;
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index c09b07c22d57..4b625043313d 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -1465,6 +1465,10 @@ CONFIG_CRYPTO_DEV_HISI_SEC2=m
 CONFIG_CRYPTO_DEV_HISI_ZIP=m
 CONFIG_CRYPTO_DEV_HISI_HPRE=m
 CONFIG_CRYPTO_DEV_HISI_TRNG=m
+CONFIG_DMABUF_HEAPS=y
+CONFIG_DMABUF_HEAPS_DEFERRED_FREE=y
+CONFIG_DMABUF_HEAPS_PAGE_POOL=y
+CONFIG_DMABUF_HEAPS_SECURE=y
 CONFIG_CMA_SIZE_MBYTES=32
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_KERNEL=y
-- 
2.25.0



* Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-02  9:58 ` [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support Olivier Masse
@ 2022-08-02 14:39   ` Brian Starkey
  2022-08-03 11:07     ` [EXT] " Olivier Masse
  2022-08-16 13:31   ` Nicolas Dufresne
  1 sibling, 1 reply; 13+ messages in thread
From: Brian Starkey @ 2022-08-02 14:39 UTC (permalink / raw)
  To: Olivier Masse
  Cc: sumit.semwal, benjamin.gaignard, christian.koenig, linux-media,
	dri-devel, linaro-mm-sig, linux-kernel, clement.faure, nd

Hi Olivier,

Some comments below as I mentioned off-list.

One additional point: some devices need to know if a buffer is
protected, so that they can configure registers appropriately to make
use of that protected buffer. There was previously a discussion about
adding a flag to a dma_buf to indicate that it is allocated from
protected memory[1].

[1] https://lists.freedesktop.org/archives/dri-devel/2019-September/238059.html

On Tue, Aug 02, 2022 at 11:58:41AM +0200, Olivier Masse wrote:
> Add the Linaro secure dmabuf heap, using the "linaro,secure-heap"
> reserved-memory compatible.
> Use genalloc to allocate and free buffers from a pool whose base address
> and size come from the device tree.
> The allocated memory is described by an sg_table with a single entry.
> Implement secure_heap_buf_ops so that a buffer can be shared between
> different devices:
> 1. Userspace passes the dma-buf fd to all drivers it wants the buffer
> shared with: the file descriptor is first converted to a &dma_buf using
> dma_buf_get(), then the buffer is attached to the device using
> dma_buf_attach().
> 2. Once the buffer is attached to all devices, userspace can initiate
> DMA access to the shared buffer. In the kernel this is done by calling
> dma_buf_map_attachment().
> 3. Each importing device then retrieves the sg_table via
> dma_buf_map_attachment().
> 
> Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
> ---
>  drivers/dma-buf/heaps/Kconfig       |  21 +-
>  drivers/dma-buf/heaps/Makefile      |   1 +
>  drivers/dma-buf/heaps/secure_heap.c | 588 ++++++++++++++++++++++++++++
>  3 files changed, 606 insertions(+), 4 deletions(-)
>  create mode 100644 drivers/dma-buf/heaps/secure_heap.c
> 
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 6a33193a7b3e..b2406932192e 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -1,8 +1,12 @@
> -config DMABUF_HEAPS_DEFERRED_FREE
> -	tristate
> +menuconfig DMABUF_HEAPS_DEFERRED_FREE
> +	bool "DMA-BUF heaps deferred-free library"
> +	help
> +	  Choose this option to enable the DMA-BUF heaps deferred-free library.
>  
> -config DMABUF_HEAPS_PAGE_POOL
> -	tristate
> +menuconfig DMABUF_HEAPS_PAGE_POOL
> +	bool "DMA-BUF heaps page-pool library"
> +	help
> +	  Choose this option to enable the DMA-BUF heaps page-pool library.
>  
>  config DMABUF_HEAPS_SYSTEM
>  	bool "DMA-BUF System Heap"
> @@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
>            Choose this option to enable the dsp dmabuf heap. The dsp heap
>            is allocated by gen allocater. it's allocated according the dts.
>            If in doubt, say Y.
> +
> +config DMABUF_HEAPS_SECURE
> +	tristate "DMA-BUF Secure Heap"
> +	depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
> +	help
> +	  Choose this option to enable the secure dmabuf heap. The secure heap
> +	  pools are defined according to the DT. Heaps are allocated
> +	  pools are defined according to the DT. Buffers are allocated from
> +	  the pools using the gen allocator.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index e70722ea615e..08f6aa5919d1 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)	+= page_pool.o
>  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
>  obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
>  obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_SECURE)	+= secure_heap.o
> diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-buf/heaps/secure_heap.c
> new file mode 100644
> index 000000000000..31aac5d050b4
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/secure_heap.c
> @@ -0,0 +1,588 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF secure heap exporter
> + *
> + * Copyright 2021 NXP.
> + */
> +
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/genalloc.h>
> +#include <linux/highmem.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/scatterlist.h>
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>
> +
> +#include "deferred-free-helper.h"
> +#include "page_pool.h"
> +
> +#define MAX_SECURE_HEAP 2
> +#define MAX_HEAP_NAME_LEN 32
> +
> +struct secure_heap_buffer {
> +	struct dma_heap *heap;
> +	struct list_head attachments;
> +	struct mutex lock;
> +	unsigned long len;
> +	struct sg_table sg_table;
> +	int vmap_cnt;
> +	struct deferred_freelist_item deferred_free;
> +	void *vaddr;
> +	bool uncached;
> +};
> +
> +struct dma_heap_attachment {
> +	struct device *dev;
> +	struct sg_table *table;
> +	struct list_head list;
> +	bool no_map;
> +	bool mapped;
> +	bool uncached;
> +};

I think dma_heap_attachment should have a more specific name,
otherwise it looks like some generic part of the dma_heap framework.

> +
> +struct secure_heap_info {
> +	struct gen_pool *pool;
> +
> +	bool no_map;
> +};
> +
> +struct rmem_secure {
> +	phys_addr_t base;
> +	phys_addr_t size;
> +
> +	char name[MAX_HEAP_NAME_LEN];
> +
> +	bool no_map;
> +};
> +
> +static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
> +static unsigned int secure_data_count;
> +
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> +	struct sg_table *new_table;
> +	int ret, i;
> +	struct scatterlist *sg, *new_sg;
> +
> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> +	if (!new_table)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(new_table);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	new_sg = new_table->sgl;
> +	for_each_sgtable_sg(table, sg, i) {
> +		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
> +		new_sg->dma_address = sg->dma_address;
> +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> +		new_sg->dma_length = sg->dma_length;
> +#endif
> +		new_sg = sg_next(new_sg);
> +	}
> +
> +	return new_table;
> +}
> +
> +static int secure_heap_attach(struct dma_buf *dmabuf,
> +			      struct dma_buf_attachment *attachment)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct secure_heap_info *info = dma_heap_get_drvdata(buffer->heap);
> +	struct dma_heap_attachment *a;
> +	struct sg_table *table;
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	table = dup_sg_table(&buffer->sg_table);
> +	if (IS_ERR(table)) {
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	a->table = table;
> +	a->dev = attachment->dev;
> +	INIT_LIST_HEAD(&a->list);
> +	a->no_map = info->no_map;
> +	a->mapped = false;
> +	a->uncached = buffer->uncached;
> +	attachment->priv = a;
> +
> +	mutex_lock(&buffer->lock);
> +	list_add(&a->list, &buffer->attachments);
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static void secure_heap_detach(struct dma_buf *dmabuf,
> +			       struct dma_buf_attachment *attachment)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a = attachment->priv;
> +
> +	mutex_lock(&buffer->lock);
> +	list_del(&a->list);
> +	mutex_unlock(&buffer->lock);
> +
> +	sg_free_table(a->table);
> +	kfree(a->table);
> +	kfree(a);
> +}
> +
> +static struct sg_table *secure_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> +						enum dma_data_direction direction)
> +{
> +	struct dma_heap_attachment *a = attachment->priv;
> +	struct sg_table *table = a->table;
> +	int attr = 0;
> +	int ret;
> +
> +	if (!a->no_map) {

This looks strange - you're expecting this driver to be used on
regions with no-map set, but if no-map _is_ set, you don't allow the
dma_buf to get mapped to any devices. Doesn't that mean that these
buffers can never actually be used?

> +		if (a->uncached)
> +			attr = DMA_ATTR_SKIP_CPU_SYNC;
> +

If the CPU can never touch these buffers, is cached vs uncached
meaningful?

If the TEE touches the buffers from the CPU then perhaps the TEE would
need to do cache maintenance, but I'd expect that to be managed in the
TEE.

> +		ret = dma_map_sgtable(attachment->dev, table, direction, attr);
> +		if (ret)
> +			return ERR_PTR(ret);
> +
> +		a->mapped = true;
> +	}
> +
> +	return table;
> +}
> +
> +static void secure_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> +				      struct sg_table *table,
> +				      enum dma_data_direction direction)
> +{
> +	struct dma_heap_attachment *a = attachment->priv;
> +	int attr = 0;
> +
> +	if (!a->no_map)	{
> +		if (a->uncached)
> +			attr = DMA_ATTR_SKIP_CPU_SYNC;
> +
> +		a->mapped = false;
> +		dma_unmap_sgtable(attachment->dev, table, direction, attr);
> +	}
> +}
> +
> +static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +						enum dma_data_direction direction)

If the firewall is preventing CPU accesses, then shouldn't
begin_cpu_access and end_cpu_access either fail or be a no-op?

> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +
> +	if (buffer->vmap_cnt)
> +		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
> +
> +	if (!buffer->uncached) {
> +		list_for_each_entry(a, &buffer->attachments, list) {
> +			if (!a->mapped)
> +				continue;
> +			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> +		}
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static int secure_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> +					      enum dma_data_direction direction)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +
> +	if (buffer->vmap_cnt)
> +		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
> +
> +	if (!buffer->uncached) {
> +		list_for_each_entry(a, &buffer->attachments, list) {
> +			if (!a->mapped)
> +				continue;
> +			dma_sync_sgtable_for_device(a->dev, a->table, direction);
> +		}
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static int secure_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct sg_table *table = &buffer->sg_table;
> +	unsigned long addr = vma->vm_start;
> +	struct sg_page_iter piter;
> +	int ret;
> +
> +	if (buffer->uncached)
> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> +
> +	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
> +		struct page *page = sg_page_iter_page(&piter);
> +
> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
> +				      vma->vm_page_prot);

If the CPU can't touch these buffers, what would they be mapped to
userspace for?

> +		if (ret)
> +			return ret;
> +		addr += PAGE_SIZE;
> +	}
> +	return 0;
> +}
> +
> +static void *secure_heap_do_vmap(struct secure_heap_buffer *buffer)
> +{
> +	struct sg_table *table = &buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> +	struct page **pages = vmalloc(sizeof(struct page *) * npages);
> +	struct page **tmp = pages;
> +	struct sg_page_iter piter;
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	void *vaddr;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (buffer->uncached)
> +		pgprot = pgprot_writecombine(PAGE_KERNEL);
> +
> +	for_each_sgtable_page(table, &piter, 0) {
> +		WARN_ON(tmp - pages >= npages);
> +		*tmp++ = sg_page_iter_page(&piter);
> +	}
> +
> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
> +	vfree(pages);

Similar to above, if the CPU can't touch these buffers, what would be
the point of mapping them to the kernel?

> +
> +	if (!vaddr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return vaddr;
> +}
> +
> +static int secure_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	void *vaddr;
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	if (buffer->vmap_cnt) {
> +		buffer->vmap_cnt++;
> +		goto out;
> +	}
> +
> +	vaddr = secure_heap_do_vmap(buffer);
> +	if (IS_ERR(vaddr)) {
> +		ret = PTR_ERR(vaddr);
> +		goto out;
> +	}
> +
> +	buffer->vaddr = vaddr;
> +	buffer->vmap_cnt++;
> +	dma_buf_map_set_vaddr(map, buffer->vaddr);
> +out:
> +	mutex_unlock(&buffer->lock);
> +
> +	return ret;
> +}
> +
> +static void secure_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +
> +	mutex_lock(&buffer->lock);
> +	if (!--buffer->vmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}
> +	mutex_unlock(&buffer->lock);
> +	dma_buf_map_clear(map);
> +}
> +
> +static void secure_heap_zero_buffer(struct secure_heap_buffer *buffer)
> +{
> +	struct sg_table *sgt = &buffer->sg_table;
> +	struct sg_page_iter piter;
> +	struct page *p;
> +	void *vaddr;
> +
> +	for_each_sgtable_page(sgt, &piter, 0) {
> +		p = sg_page_iter_page(&piter);
> +		vaddr = kmap_atomic(p);
> +		memset(vaddr, 0, PAGE_SIZE);

How can you do memset on the buffer if the firewall is preventing CPU
accesses?

> +		kunmap_atomic(vaddr);
> +	}
> +}
> +
> +static void secure_heap_buf_free(struct deferred_freelist_item *item,
> +				 enum df_reason reason)
> +{
> +	struct secure_heap_buffer *buffer;
> +	struct secure_heap_info *info;
> +	struct sg_table *table;
> +	struct scatterlist *sg;
> +	int i;
> +
> +	buffer = container_of(item, struct secure_heap_buffer, deferred_free);
> +	info = dma_heap_get_drvdata(buffer->heap);
> +
> +	if (!info->no_map) {
> +		// Zero the buffer pages before adding back to the pool
> +		if (reason == DF_NORMAL)
> +			secure_heap_zero_buffer(buffer);
> +	}
> +
> +	table = &buffer->sg_table;
> +	for_each_sg(table->sgl, sg, table->nents, i)
> +		gen_pool_free(info->pool, sg_dma_address(sg), sg_dma_len(sg));
> +
> +	sg_free_table(table);
> +	kfree(buffer);
> +}
> +
> +static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> +
> +	deferred_free(&buffer->deferred_free, secure_heap_buf_free, npages);
> +}
> +
> +static const struct dma_buf_ops secure_heap_buf_ops = {
> +	.attach = secure_heap_attach,
> +	.detach = secure_heap_detach,
> +	.map_dma_buf = secure_heap_map_dma_buf,
> +	.unmap_dma_buf = secure_heap_unmap_dma_buf,
> +	.begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
> +	.end_cpu_access = secure_heap_dma_buf_end_cpu_access,
> +	.mmap = secure_heap_mmap,
> +	.vmap = secure_heap_vmap,
> +	.vunmap = secure_heap_vunmap,
> +	.release = secure_heap_dma_buf_release,
> +};
> +
> +static struct dma_buf *secure_heap_do_allocate(struct dma_heap *heap,
> +					       unsigned long len,
> +					       unsigned long fd_flags,
> +					       unsigned long heap_flags,
> +					       bool uncached)
> +{
> +	struct secure_heap_buffer *buffer;
> +	struct secure_heap_info *info = dma_heap_get_drvdata(heap);
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +	unsigned long size = roundup(len, PAGE_SIZE);
> +	struct dma_buf *dmabuf;
> +	struct sg_table *table;
> +	int ret = -ENOMEM;
> +	unsigned long phy_addr;
> +
> +	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> +	if (!buffer)
> +		return ERR_PTR(-ENOMEM);
> +
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	mutex_init(&buffer->lock);
> +	buffer->heap = heap;
> +	buffer->len = size;
> +	buffer->uncached = uncached;
> +
> +	phy_addr = gen_pool_alloc(info->pool, size);
> +	if (!phy_addr)
> +		goto free_buffer;
> +
> +	table = &buffer->sg_table;
> +	if (sg_alloc_table(table, 1, GFP_KERNEL))
> +		goto free_pool;
> +
> +	sg_set_page(table->sgl,	phys_to_page(phy_addr),	size, 0);
> +	sg_dma_address(table->sgl) = phy_addr;
> +	sg_dma_len(table->sgl) = size;
> +
> +	/* create the dmabuf */
> +	exp_info.exp_name = dma_heap_get_name(heap);
> +	exp_info.ops = &secure_heap_buf_ops;
> +	exp_info.size = buffer->len;
> +	exp_info.flags = fd_flags;
> +	exp_info.priv = buffer;
> +	dmabuf = dma_buf_export(&exp_info);
> +	if (IS_ERR(dmabuf)) {
> +		ret = PTR_ERR(dmabuf);
> +		goto free_pages;
> +	}
> +
> +	return dmabuf;
> +
> +free_pages:
> +	sg_free_table(table);
> +
> +free_pool:
> +	gen_pool_free(info->pool, phy_addr, size);
> +
> +free_buffer:
> +	mutex_destroy(&buffer->lock);
> +	kfree(buffer);
> +
> +	return ERR_PTR(ret);
> +}
> +
> +static struct dma_buf *secure_heap_allocate(struct dma_heap *heap,
> +					    unsigned long len,
> +					    unsigned long fd_flags,
> +					    unsigned long heap_flags)
> +{
> +	// use uncache buffer here by default
> +	return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, true);
> +	// use cache buffer
> +	// return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, false);
> +}
> +
> +static const struct dma_heap_ops secure_heap_ops = {
> +	.allocate = secure_heap_allocate,
> +};
> +
> +static int secure_heap_add(struct rmem_secure *rmem)
> +{
> +	struct dma_heap *secure_heap;
> +	struct dma_heap_export_info exp_info;
> +	struct secure_heap_info *info = NULL;
> +	struct gen_pool *pool = NULL;
> +	int ret = -EINVAL;
> +
> +	if (rmem->base == 0 || rmem->size == 0) {
> +		pr_err("secure_data base or size is not correct\n");
> +		goto error;
> +	}
> +
> +	info = kzalloc(sizeof(*info), GFP_KERNEL);
> +	if (!info) {
> +		pr_err("dmabuf info allocation failed\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	pool = gen_pool_create(PAGE_SHIFT, -1);
> +	if (!pool) {
> +		pr_err("can't create gen pool\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
> +		pr_err("failed to add memory into pool\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	info->pool = pool;
> +	info->no_map = rmem->no_map;

This kind of heap probably can't work if the region doesn't have
no-map, so I think it would make sense to enforce that no_map is set
(or ignore regions without no-map in DT).
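
Something along these lines in rmem_secure_heap_setup(), perhaps
(rough sketch):

	if (!of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL)) {
		pr_err("Reserved memory: secure heap requires no-map\n");
		return -EINVAL;
	}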

Cheers,
-Brian

> +
> +	exp_info.name = rmem->name;
> +	exp_info.ops = &secure_heap_ops;
> +	exp_info.priv = info;
> +
> +	secure_heap = dma_heap_add(&exp_info);
> +	if (IS_ERR(secure_heap)) {
> +		pr_err("dmabuf secure heap allocation failed\n");
> +		ret = PTR_ERR(secure_heap);
> +		goto error;
> +	}
> +
> +	return 0;
> +
> +error:
> +	kfree(info);
> +	if (pool)
> +		gen_pool_destroy(pool);
> +
> +	return ret;
> +}
> +
> +static int secure_heap_create(void)
> +{
> +	unsigned int i;
> +	int ret;
> +
> +	for (i = 0; i < secure_data_count; i++) {
> +		ret = secure_heap_add(&secure_data[i]);
> +		if (ret)
> +			return ret;
> +	}
> +	return 0;
> +}
> +
> +static int rmem_secure_heap_device_init(struct reserved_mem *rmem,
> +					 struct device *dev)
> +{
> +	dev_set_drvdata(dev, rmem);
> +	return 0;
> +}
> +
> +static void rmem_secure_heap_device_release(struct reserved_mem *rmem,
> +					 struct device *dev)
> +{
> +	dev_set_drvdata(dev, NULL);
> +}
> +
> +static const struct reserved_mem_ops rmem_dma_ops = {
> +	.device_init    = rmem_secure_heap_device_init,
> +	.device_release = rmem_secure_heap_device_release,
> +};
> +
> +static int __init rmem_secure_heap_setup(struct reserved_mem *rmem)
> +{
> +	if (secure_data_count < MAX_SECURE_HEAP) {
> +		int name_len = 0;
> +		char *s = rmem->name;
> +
> +		secure_data[secure_data_count].base = rmem->base;
> +		secure_data[secure_data_count].size = rmem->size;
> +		secure_data[secure_data_count].no_map =
> +			(of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL) != NULL);
> +
> +		while (name_len < MAX_HEAP_NAME_LEN) {
> +			if ((*s == '@') || (*s == '\0'))
> +				break;
> +			name_len++;
> +			s++;
> +		}
> +		if (name_len == MAX_HEAP_NAME_LEN)
> +			name_len--;
> +
> +		strncpy(secure_data[secure_data_count].name, rmem->name, name_len);
> +
> +		rmem->ops = &rmem_dma_ops;
> +		pr_info("Reserved memory: DMA buf secure pool %s at %pa, size %ld MiB\n",
> +			secure_data[secure_data_count].name,
> +			&rmem->base, (unsigned long)rmem->size / SZ_1M);
> +
> +		secure_data_count++;
> +		return 0;
> +	}
> +	WARN_ONCE(1, "Cannot handle more than %u secure heaps\n", MAX_SECURE_HEAP);
> +	return -EINVAL;
> +}
> +
> +RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap", rmem_secure_heap_setup);
> +
> +module_init(secure_heap_create);
> +MODULE_LICENSE("GPL v2");
> -- 
> 2.25.0
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [EXT] Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-02 14:39   ` Brian Starkey
@ 2022-08-03 11:07     ` Olivier Masse
  2022-08-03 12:37       ` Brian Starkey
  0 siblings, 1 reply; 13+ messages in thread
From: Olivier Masse @ 2022-08-03 11:07 UTC (permalink / raw)
  To: brian.starkey
  Cc: sumit.semwal, linux-kernel, linaro-mm-sig, christian.koenig,
	linux-media, nd, Clément Faure, dri-devel,
	benjamin.gaignard

Hi Brian,

Thanks for your comments, please find my reply below.

On Tue., 2022-08-02 at 15:39 +0100, Brian Starkey wrote:
> Hi Olivier,
> 
> Some comments below as I mentioned off-list.
> 
> One additional point: some devices need to know if a buffer is
> protected, so that they can configure registers appropriately to make
> use of that protected buffer. There was previously a discussion about
> adding a flag to a dma_buf to indicate that it is allocated from
> protected memory[1].
> 
> [1] 
> https://lists.freedesktop.org/archives/dri-devel/2019-September/238059.html
> 
> 

Interesting, could we introduce an is_protected 1-bit flag into struct
dma_buf?
Could struct dma_buf_ops.map_dma_buf and struct dma_buf_ops.unmap_dma_buf
become optional for such buffers?
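
On the exporter side it could look like this (sketch only; is_protected
is not an existing field and the final name is open to discussion):

	/* in secure_heap_do_allocate() */
	exp_info.exp_name = dma_heap_get_name(heap);
	exp_info.ops = &secure_heap_buf_ops;
	exp_info.size = buffer->len;
	exp_info.flags = fd_flags;
	exp_info.is_protected = true;	/* hypothetical new flag */
	exp_info.priv = buffer;
	dmabuf = dma_buf_export(&exp_info);

Importing drivers could then check that flag before programming their
registers for protected access.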

> On Tue, Aug 02, 2022 at 11:58:41AM +0200, Olivier Masse wrote:
> > add Linaro secure heap bindings: linaro,secure-heap
> > use genalloc to allocate/free buffer from buffer pool.
> > buffer pool info is from dts.
> > use sg_table instore the allocated memory info, the length of
> > sg_table is 1.
> > implement secure_heap_buf_ops to implement buffer share in
> > difference device:
> > 1. Userspace passes this fd to all drivers it wants this buffer
> > to share with: First the filedescriptor is converted to a &dma_buf
> > using
> > dma_buf_get(). Then the buffer is attached to the device using
> > dma_buf_attach().
> > 2. Once the buffer is attached to all devices userspace can
> > initiate DMA
> > access to the shared buffer. In the kernel this is done by calling
> > dma_buf_map_attachment()
> > 3. get sg_table with dma_buf_map_attachment in difference device.
> > 
> > Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
> > ---
> >  drivers/dma-buf/heaps/Kconfig       |  21 +-
> >  drivers/dma-buf/heaps/Makefile      |   1 +
> >  drivers/dma-buf/heaps/secure_heap.c | 588
> > ++++++++++++++++++++++++++++
> >  3 files changed, 606 insertions(+), 4 deletions(-)
> >  create mode 100644 drivers/dma-buf/heaps/secure_heap.c
> > 
> > diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-
> > buf/heaps/Kconfig
> > index 6a33193a7b3e..b2406932192e 100644
> > --- a/drivers/dma-buf/heaps/Kconfig
> > +++ b/drivers/dma-buf/heaps/Kconfig
> > @@ -1,8 +1,12 @@
> > -config DMABUF_HEAPS_DEFERRED_FREE
> > -     tristate
> > +menuconfig DMABUF_HEAPS_DEFERRED_FREE
> > +     bool "DMA-BUF heaps deferred-free library"
> > +     help
> > +       Choose this option to enable the DMA-BUF heaps deferred-
> > free library.
> > 
> > -config DMABUF_HEAPS_PAGE_POOL
> > -     tristate
> > +menuconfig DMABUF_HEAPS_PAGE_POOL
> > +     bool "DMA-BUF heaps page-pool library"
> > +     help
> > +       Choose this option to enable the DMA-BUF heaps page-pool
> > library.
> > 
> >  config DMABUF_HEAPS_SYSTEM
> >       bool "DMA-BUF System Heap"
> > @@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
> >            Choose this option to enable the dsp dmabuf heap. The
> > dsp heap
> >            is allocated by gen allocater. it's allocated according
> > the dts.
> >            If in doubt, say Y.
> > +
> > +config DMABUF_HEAPS_SECURE
> > +     tristate "DMA-BUF Secure Heap"
> > +     depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
> > +     help
> > +       Choose this option to enable the secure dmabuf heap. The
> > secure heap
> > +       pools are defined according to the DT. Heaps are allocated
> > +       in the pools using gen allocater.
> > +       If in doubt, say Y.
> > diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-
> > buf/heaps/Makefile
> > index e70722ea615e..08f6aa5919d1 100644
> > --- a/drivers/dma-buf/heaps/Makefile
> > +++ b/drivers/dma-buf/heaps/Makefile
> > @@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)  +=
> > page_pool.o
> >  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)    += system_heap.o
> >  obj-$(CONFIG_DMABUF_HEAPS_CMA)               += cma_heap.o
> >  obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
> > +obj-$(CONFIG_DMABUF_HEAPS_SECURE)    += secure_heap.o
> > diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-
> > buf/heaps/secure_heap.c
> > new file mode 100644
> > index 000000000000..31aac5d050b4
> > --- /dev/null
> > +++ b/drivers/dma-buf/heaps/secure_heap.c
> > @@ -0,0 +1,588 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * DMABUF secure heap exporter
> > + *
> > + * Copyright 2021 NXP.
> > + */
> > +
> > +#include <linux/dma-buf.h>
> > +#include <linux/dma-heap.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/err.h>
> > +#include <linux/genalloc.h>
> > +#include <linux/highmem.h>
> > +#include <linux/mm.h>
> > +#include <linux/module.h>
> > +#include <linux/of.h>
> > +#include <linux/of_fdt.h>
> > +#include <linux/of_reserved_mem.h>
> > +#include <linux/scatterlist.h>
> > +#include <linux/slab.h>
> > +#include <linux/vmalloc.h>
> > +
> > +#include "deferred-free-helper.h"
> > +#include "page_pool.h"
> > +
> > +#define MAX_SECURE_HEAP 2
> > +#define MAX_HEAP_NAME_LEN 32
> > +
> > +struct secure_heap_buffer {
> > +     struct dma_heap *heap;
> > +     struct list_head attachments;
> > +     struct mutex lock;
> > +     unsigned long len;
> > +     struct sg_table sg_table;
> > +     int vmap_cnt;
> > +     struct deferred_freelist_item deferred_free;
> > +     void *vaddr;
> > +     bool uncached;
> > +};
> > +
> > +struct dma_heap_attachment {
> > +     struct device *dev;
> > +     struct sg_table *table;
> > +     struct list_head list;
> > +     bool no_map;
> > +     bool mapped;
> > +     bool uncached;
> > +};
> 
> I think dma_heap_attachment should have a more specific name,
> otherwise it looks like some generic part of the dma_heap framework.

OK, how about secure_heap_attachment, which sounds more logical?

> 
> > +
> > +struct secure_heap_info {
> > +     struct gen_pool *pool;
> > +
> > +     bool no_map;
> > +};
> > +
> > +struct rmem_secure {
> > +     phys_addr_t base;
> > +     phys_addr_t size;
> > +
> > +     char name[MAX_HEAP_NAME_LEN];
> > +
> > +     bool no_map;
> > +};
> > +
> > +static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
> > +static unsigned int secure_data_count;
> > +
> > +static struct sg_table *dup_sg_table(struct sg_table *table)
> > +{
> > +     struct sg_table *new_table;
> > +     int ret, i;
> > +     struct scatterlist *sg, *new_sg;
> > +
> > +     new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> > +     if (!new_table)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     ret = sg_alloc_table(new_table, table->orig_nents,
> > GFP_KERNEL);
> > +     if (ret) {
> > +             kfree(new_table);
> > +             return ERR_PTR(-ENOMEM);
> > +     }
> > +
> > +     new_sg = new_table->sgl;
> > +     for_each_sgtable_sg(table, sg, i) {
> > +             sg_set_page(new_sg, sg_page(sg), sg->length, sg-
> > >offset);
> > +             new_sg->dma_address = sg->dma_address;
> > +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> > +             new_sg->dma_length = sg->dma_length;
> > +#endif
> > +             new_sg = sg_next(new_sg);
> > +     }
> > +
> > +     return new_table;
> > +}
> > +
> > +static int secure_heap_attach(struct dma_buf *dmabuf,
> > +                           struct dma_buf_attachment *attachment)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     struct secure_heap_info *info = dma_heap_get_drvdata(buffer-
> > >heap);
> > +     struct dma_heap_attachment *a;
> > +     struct sg_table *table;
> > +
> > +     a = kzalloc(sizeof(*a), GFP_KERNEL);
> > +     if (!a)
> > +             return -ENOMEM;
> > +
> > +     table = dup_sg_table(&buffer->sg_table);
> > +     if (IS_ERR(table)) {
> > +             kfree(a);
> > +             return -ENOMEM;
> > +     }
> > +
> > +     a->table = table;
> > +     a->dev = attachment->dev;
> > +     INIT_LIST_HEAD(&a->list);
> > +     a->no_map = info->no_map;
> > +     a->mapped = false;
> > +     a->uncached = buffer->uncached;
> > +     attachment->priv = a;
> > +
> > +     mutex_lock(&buffer->lock);
> > +     list_add(&a->list, &buffer->attachments);
> > +     mutex_unlock(&buffer->lock);
> > +
> > +     return 0;
> > +}
> > +
> > +static void secure_heap_detach(struct dma_buf *dmabuf,
> > +                            struct dma_buf_attachment *attachment)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     struct dma_heap_attachment *a = attachment->priv;
> > +
> > +     mutex_lock(&buffer->lock);
> > +     list_del(&a->list);
> > +     mutex_unlock(&buffer->lock);
> > +
> > +     sg_free_table(a->table);
> > +     kfree(a->table);
> > +     kfree(a);
> > +}
> > +
> > +static struct sg_table *secure_heap_map_dma_buf(struct
> > dma_buf_attachment *attachment,
> > +                                             enum
> > dma_data_direction direction)
> > +{
> > +     struct dma_heap_attachment *a = attachment->priv;
> > +     struct sg_table *table = a->table;
> > +     int attr = 0;
> > +     int ret;
> > +
> > +     if (!a->no_map) {
> 
> This looks strange - you're expecting this driver to be used on
> regions with no-map set, but if no-map _is_ set, you don't allow the
> dma_buf to get mapped to any devices. Doesn't that mean that these
> buffers can never actually be used?

If no-map is not set, map_dma_buf maps the buffer.

> 
> > +             if (a->uncached)
> > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > +
> 
> If the CPU can never touch these buffers, is cached vs uncached
> meaningful?

Indeed, but as dma_buf_ops.map_dma_buf is mandatory, this flag as well
as no-map were introduced to manage uncached mapped buffers.

To simplify everything, secure-heap could get rid of the no-map and
uncached flags and set an is_protected flag in dma_buf?


> If the TEE touches the buffers from the CPU then perhaps the TEE
> would
> need to do cache maintenance, but I'd expect that to be managed in
> the
> TEE.

Yes, if needed, cache maintenance should be done in the TA.

> 
> > +             ret = dma_map_sgtable(attachment->dev, table,
> > direction, attr);
> > +             if (ret)
> > +                     return ERR_PTR(ret);
> > +
> > +             a->mapped = true;
> > +     }
> > +
> > +     return table;
> > +}
> > +
> > +static void secure_heap_unmap_dma_buf(struct dma_buf_attachment
> > *attachment,
> > +                                   struct sg_table *table,
> > +                                   enum dma_data_direction
> > direction)
> > +{
> > +     struct dma_heap_attachment *a = attachment->priv;
> > +     int attr = 0;
> > +
> > +     if (!a->no_map) {
> > +             if (a->uncached)
> > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > +
> > +             a->mapped = false;
> > +             dma_unmap_sgtable(attachment->dev, table, direction,
> > attr);
> > +     }
> > +}
> > +
> > +static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf
> > *dmabuf,
> > +                                             enum
> > dma_data_direction direction)
> 
> If the firewall is preventing CPU accesses, then shouldn't
> begin_cpu_access and end_cpu_access either fail or be a no-op?

true, both of them are optional and could be removed.

> 
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     struct dma_heap_attachment *a;
> > +
> > +     mutex_lock(&buffer->lock);
> > +
> > +     if (buffer->vmap_cnt)
> > +             invalidate_kernel_vmap_range(buffer->vaddr, buffer-
> > >len);
> > +
> > +     if (!buffer->uncached) {
> > +             list_for_each_entry(a, &buffer->attachments, list) {
> > +                     if (!a->mapped)
> > +                             continue;
> > +                     dma_sync_sgtable_for_cpu(a->dev, a->table,
> > direction);
> > +             }
> > +     }
> > +     mutex_unlock(&buffer->lock);
> > +
> > +     return 0;
> > +}
> > +
> > +static int secure_heap_dma_buf_end_cpu_access(struct dma_buf
> > *dmabuf,
> > +                                           enum dma_data_direction
> > direction)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     struct dma_heap_attachment *a;
> > +
> > +     mutex_lock(&buffer->lock);
> > +
> > +     if (buffer->vmap_cnt)
> > +             flush_kernel_vmap_range(buffer->vaddr, buffer->len);
> > +
> > +     if (!buffer->uncached) {
> > +             list_for_each_entry(a, &buffer->attachments, list) {
> > +                     if (!a->mapped)
> > +                             continue;
> > +                     dma_sync_sgtable_for_device(a->dev, a->table, 
> > direction);
> > +             }
> > +     }
> > +     mutex_unlock(&buffer->lock);
> > +
> > +     return 0;
> > +}
> > +
> > +static int secure_heap_mmap(struct dma_buf *dmabuf, struct
> > vm_area_struct *vma)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     struct sg_table *table = &buffer->sg_table;
> > +     unsigned long addr = vma->vm_start;
> > +     struct sg_page_iter piter;
> > +     int ret;
> > +
> > +     if (buffer->uncached)
> > +             vma->vm_page_prot = pgprot_writecombine(vma-
> > >vm_page_prot);
> > +
> > +     for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
> > +             struct page *page = sg_page_iter_page(&piter);
> > +
> > +             ret = remap_pfn_range(vma, addr, page_to_pfn(page),
> > PAGE_SIZE,
> > +                                   vma->vm_page_prot);
> 
> If the CPU can't touch these buffers, what would they be mapped to
> userspace for?

Again, let's remove this optional op.

> 
> > +             if (ret)
> > +                     return ret;
> > +             addr += PAGE_SIZE;
> > +     }
> > +     return 0;
> > +}
> > +
> > +static void *secure_heap_do_vmap(struct secure_heap_buffer
> > *buffer)
> > +{
> > +     struct sg_table *table = &buffer->sg_table;
> > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > +     struct page **pages = vmalloc(sizeof(struct page *) *
> > npages);
> > +     struct page **tmp = pages;
> > +     struct sg_page_iter piter;
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     void *vaddr;
> > +
> > +     if (!pages)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     if (buffer->uncached)
> > +             pgprot = pgprot_writecombine(PAGE_KERNEL);
> > +
> > +     for_each_sgtable_page(table, &piter, 0) {
> > +             WARN_ON(tmp - pages >= npages);
> > +             *tmp++ = sg_page_iter_page(&piter);
> > +     }
> > +
> > +     vaddr = vmap(pages, npages, VM_MAP, pgprot);
> > +     vfree(pages);
> 
> Similar to above, if the CPU can't touch these buffers, what would be
> the point of mapping them to the kernel?

indeed, useless code.

> 
> > +
> > +     if (!vaddr)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     return vaddr;
> > +}
> > +
> > +static int secure_heap_vmap(struct dma_buf *dmabuf, struct
> > dma_buf_map *map)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     void *vaddr;
> > +     int ret = 0;
> > +
> > +     mutex_lock(&buffer->lock);
> > +     if (buffer->vmap_cnt) {
> > +             buffer->vmap_cnt++;
> > +             goto out;
> > +     }
> > +
> > +     vaddr = secure_heap_do_vmap(buffer);
> > +     if (IS_ERR(vaddr)) {
> > +             ret = PTR_ERR(vaddr);
> > +             goto out;
> > +     }
> > +
> > +     buffer->vaddr = vaddr;
> > +     buffer->vmap_cnt++;
> > +     dma_buf_map_set_vaddr(map, buffer->vaddr);
> > +out:
> > +     mutex_unlock(&buffer->lock);
> > +
> > +     return ret;
> > +}
> > +
> > +static void secure_heap_vunmap(struct dma_buf *dmabuf, struct
> > dma_buf_map *map)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +
> > +     mutex_lock(&buffer->lock);
> > +     if (!--buffer->vmap_cnt) {
> > +             vunmap(buffer->vaddr);
> > +             buffer->vaddr = NULL;
> > +     }
> > +     mutex_unlock(&buffer->lock);
> > +     dma_buf_map_clear(map);
> > +}
> > +
> > +static void secure_heap_zero_buffer(struct secure_heap_buffer
> > *buffer)
> > +{
> > +     struct sg_table *sgt = &buffer->sg_table;
> > +     struct sg_page_iter piter;
> > +     struct page *p;
> > +     void *vaddr;
> > +
> > +     for_each_sgtable_page(sgt, &piter, 0) {
> > +             p = sg_page_iter_page(&piter);
> > +             vaddr = kmap_atomic(p);
> > +             memset(vaddr, 0, PAGE_SIZE);
> 
> How can you do memset on the buffer if the firewall is preventing CPU
> accesses?
> 

Yes, this becomes useless if we set a secure flag to prevent drivers
from mapping the allocated buffer.

> > +             kunmap_atomic(vaddr);
> > +     }
> > +}
> > +
> > +static void secure_heap_buf_free(struct deferred_freelist_item
> > *item,
> > +                              enum df_reason reason)
> > +{
> > +     struct secure_heap_buffer *buffer;
> > +     struct secure_heap_info *info;
> > +     struct sg_table *table;
> > +     struct scatterlist *sg;
> > +     int i;
> > +
> > +     buffer = container_of(item, struct secure_heap_buffer,
> > deferred_free);
> > +     info = dma_heap_get_drvdata(buffer->heap);
> > +
> > +     if (!info->no_map) {
> > +             // Zero the buffer pages before adding back to the
> > pool
> > +             if (reason == DF_NORMAL)
> > +                     secure_heap_zero_buffer(buffer);
> > +     }
> > +
> > +     table = &buffer->sg_table;
> > +     for_each_sg(table->sgl, sg, table->nents, i)
> > +             gen_pool_free(info->pool, sg_dma_address(sg),
> > sg_dma_len(sg));
> > +
> > +     sg_free_table(table);
> > +     kfree(buffer);
> > +}
> > +
> > +static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
> > +{
> > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > +
> > +     deferred_free(&buffer->deferred_free, secure_heap_buf_free,
> > npages);
> > +}
> > +
> > +static const struct dma_buf_ops secure_heap_buf_ops = {
> > +     .attach = secure_heap_attach,
> > +     .detach = secure_heap_detach,
> > +     .map_dma_buf = secure_heap_map_dma_buf,
> > +     .unmap_dma_buf = secure_heap_unmap_dma_buf,
> > +     .begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
> > +     .end_cpu_access = secure_heap_dma_buf_end_cpu_access,
> > +     .mmap = secure_heap_mmap,
> > +     .vmap = secure_heap_vmap,
> > +     .vunmap = secure_heap_vunmap,
> > +     .release = secure_heap_dma_buf_release,
> > +};
> > +
> > +static struct dma_buf *secure_heap_do_allocate(struct dma_heap
> > *heap,
> > +                                            unsigned long len,
> > +                                            unsigned long
> > fd_flags,
> > +                                            unsigned long
> > heap_flags,
> > +                                            bool uncached)
> > +{
> > +     struct secure_heap_buffer *buffer;
> > +     struct secure_heap_info *info = dma_heap_get_drvdata(heap);
> > +     DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> > +     unsigned long size = roundup(len, PAGE_SIZE);
> > +     struct dma_buf *dmabuf;
> > +     struct sg_table *table;
> > +     int ret = -ENOMEM;
> > +     unsigned long phy_addr;
> > +
> > +     buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> > +     if (!buffer)
> > +             return ERR_PTR(-ENOMEM);
> > +
> > +     INIT_LIST_HEAD(&buffer->attachments);
> > +     mutex_init(&buffer->lock);
> > +     buffer->heap = heap;
> > +     buffer->len = size;
> > +     buffer->uncached = uncached;
> > +
> > +     phy_addr = gen_pool_alloc(info->pool, size);
> > +     if (!phy_addr)
> > +             goto free_buffer;
> > +
> > +     table = &buffer->sg_table;
> > +     if (sg_alloc_table(table, 1, GFP_KERNEL))
> > +             goto free_pool;
> > +
> > +     sg_set_page(table->sgl, phys_to_page(phy_addr), size, 0);
> > +     sg_dma_address(table->sgl) = phy_addr;
> > +     sg_dma_len(table->sgl) = size;
> > +
> > +     /* create the dmabuf */
> > +     exp_info.exp_name = dma_heap_get_name(heap);
> > +     exp_info.ops = &secure_heap_buf_ops;
> > +     exp_info.size = buffer->len;
> > +     exp_info.flags = fd_flags;
> > +     exp_info.priv = buffer;
> > +     dmabuf = dma_buf_export(&exp_info);
> > +     if (IS_ERR(dmabuf)) {
> > +             ret = PTR_ERR(dmabuf);
> > +             goto free_pages;
> > +     }
> > +
> > +     return dmabuf;
> > +
> > +free_pages:
> > +     sg_free_table(table);
> > +
> > +free_pool:
> > +     gen_pool_free(info->pool, phy_addr, size);
> > +
> > +free_buffer:
> > +     mutex_destroy(&buffer->lock);
> > +     kfree(buffer);
> > +
> > +     return ERR_PTR(ret);
> > +}
> > +
> > +static struct dma_buf *secure_heap_allocate(struct dma_heap *heap,
> > +                                         unsigned long len,
> > +                                         unsigned long fd_flags,
> > +                                         unsigned long heap_flags)
> > +{
> > +     // use uncache buffer here by default
> > +     return secure_heap_do_allocate(heap, len, fd_flags,
> > heap_flags, true);
> > +     // use cache buffer
> > +     // return secure_heap_do_allocate(heap, len, fd_flags,
> > heap_flags, false);
> > +}
> > +
> > +static const struct dma_heap_ops secure_heap_ops = {
> > +     .allocate = secure_heap_allocate,
> > +};
> > +
> > +static int secure_heap_add(struct rmem_secure *rmem)
> > +{
> > +     struct dma_heap *secure_heap;
> > +     struct dma_heap_export_info exp_info;
> > +     struct secure_heap_info *info = NULL;
> > +     struct gen_pool *pool = NULL;
> > +     int ret = -EINVAL;
> > +
> > +     if (rmem->base == 0 || rmem->size == 0) {
> > +             pr_err("secure_data base or size is not correct\n");
> > +             goto error;
> > +     }
> > +
> > +     info = kzalloc(sizeof(*info), GFP_KERNEL);
> > +     if (!info) {
> > +             pr_err("dmabuf info allocation failed\n");
> > +             ret = -ENOMEM;
> > +             goto error;
> > +     }
> > +
> > +     pool = gen_pool_create(PAGE_SHIFT, -1);
> > +     if (!pool) {
> > +             pr_err("can't create gen pool\n");
> > +             ret = -ENOMEM;
> > +             goto error;
> > +     }
> > +
> > +     if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
> > +             pr_err("failed to add memory into pool\n");
> > +             ret = -ENOMEM;
> > +             goto error;
> > +     }
> > +
> > +     info->pool = pool;
> > +     info->no_map = rmem->no_map;
> 
> This kind of heap probably can't work if the region doesn't have
> no-map, so I think it would make sense to enforce that no_map is set
> (or ignore regions without no-map in DT).

If no-map is not set, secure-heap could be used as a heap whose buffers
are dynamically protected by the TEE.
But I agree that this adds too much complexity and could be simplified
a lot without this mapping consideration.

Then no-map is probably not the right term to describe this heap.
Would is_secure or is_protected be better?

> 
> Cheers,
> -Brian
> 
> > +
> > +     exp_info.name = rmem->name;
> > +     exp_info.ops = &secure_heap_ops;
> > +     exp_info.priv = info;
> > +
> > +     secure_heap = dma_heap_add(&exp_info);
> > +     if (IS_ERR(secure_heap)) {
> > +             pr_err("dmabuf secure heap allocation failed\n");
> > +             ret = PTR_ERR(secure_heap);
> > +             goto error;
> > +     }
> > +
> > +     return 0;
> > +
> > +error:
> > +     kfree(info);
> > +     if (pool)
> > +             gen_pool_destroy(pool);
> > +
> > +     return ret;
> > +}
> > +
> > +static int secure_heap_create(void)
> > +{
> > +     unsigned int i;
> > +     int ret;
> > +
> > +     for (i = 0; i < secure_data_count; i++) {
> > +             ret = secure_heap_add(&secure_data[i]);
> > +             if (ret)
> > +                     return ret;
> > +     }
> > +     return 0;
> > +}
> > +
> > +static int rmem_secure_heap_device_init(struct reserved_mem *rmem,
> > +                                      struct device *dev)
> > +{
> > +     dev_set_drvdata(dev, rmem);
> > +     return 0;
> > +}
> > +
> > +static void rmem_secure_heap_device_release(struct reserved_mem
> > *rmem,
> > +                                      struct device *dev)
> > +{
> > +     dev_set_drvdata(dev, NULL);
> > +}
> > +
> > +static const struct reserved_mem_ops rmem_dma_ops = {
> > +     .device_init    = rmem_secure_heap_device_init,
> > +     .device_release = rmem_secure_heap_device_release,
> > +};
> > +
> > +static int __init rmem_secure_heap_setup(struct reserved_mem
> > *rmem)
> > +{
> > +     if (secure_data_count < MAX_SECURE_HEAP) {
> > +             int name_len = 0;
> > +             char *s = rmem->name;
> > +
> > +             secure_data[secure_data_count].base = rmem->base;
> > +             secure_data[secure_data_count].size = rmem->size;
> > +             secure_data[secure_data_count].no_map =
> > +                     (of_get_flat_dt_prop(rmem->fdt_node, "no-
> > map", NULL) != NULL);
> > +
> > +             while (name_len < MAX_HEAP_NAME_LEN) {
> > +                     if ((*s == '@') || (*s == '\0'))
> > +                             break;
> > +                     name_len++;
> > +                     s++;
> > +             }
> > +             if (name_len == MAX_HEAP_NAME_LEN)
> > +                     name_len--;
> > +
> > +             strncpy(secure_data[secure_data_count].name, rmem-
> > >name, name_len);
> > +
> > +             rmem->ops = &rmem_dma_ops;
> > +             pr_info("Reserved memory: DMA buf secure pool %s at
> > %pa, size %ld MiB\n",
> > +                     secure_data[secure_data_count].name,
> > +                     &rmem->base, (unsigned long)rmem->size /
> > SZ_1M);
> > +
> > +             secure_data_count++;
> > +             return 0;
> > +     }
> > +     WARN_ONCE(1, "Cannot handle more than %u secure heaps\n",
> > MAX_SECURE_HEAP);
> > +     return -EINVAL;
> > +}
> > +
> > +RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap",
> > rmem_secure_heap_setup);
> > +
> > +module_init(secure_heap_create);
> > +MODULE_LICENSE("GPL v2");
> > --
> > 2.25.0
> > 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [EXT] Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-03 11:07     ` [EXT] " Olivier Masse
@ 2022-08-03 12:37       ` Brian Starkey
  2022-08-05 14:06         ` Olivier Masse
  0 siblings, 1 reply; 13+ messages in thread
From: Brian Starkey @ 2022-08-03 12:37 UTC (permalink / raw)
  To: Olivier Masse
  Cc: sumit.semwal, linux-kernel, linaro-mm-sig, christian.koenig,
	linux-media, nd, Clément Faure, dri-devel,
	benjamin.gaignard

Hi,

On Wed, Aug 03, 2022 at 11:07:54AM +0000, Olivier Masse wrote:
> Hi Brian,
> 
> Thanks for your comments, please find my reply below.
> 
> On Tue., 2022-08-02 at 15:39 +0100, Brian Starkey wrote:
> > Hi Olivier,
> > 
> > Some comments below as I mentioned off-list.
> > 
> > One additional point: some devices need to know if a buffer is
> > protected, so that they can configure registers appropriately to make
> > use of that protected buffer. There was previously a discussion about
> > adding a flag to a dma_buf to indicate that it is allocated from
> > protected memory[1].
> > 
> > [1] 
> > https://lists.freedesktop.org/archives/dri-devel/2019-September/238059.html
> > 
> > 
> 
> Interesting, could we introduce an is_protected 1-bit flag into struct
> dma_buf?

That was the earlier proposal, yeah.

> Could struct dma_buf_ops.map_dma_buf and struct dma_buf_ops.unmap_dma_buf
> become optional for such buffers?
> 

map_dma_buf and unmap_dma_buf are needed to give devices access to the
dma-buf, I don't think they should become optional.

Mapping to the CPU maybe should be optional/disallowed for protected
buffers.
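
e.g. something like (rough sketch):

static int secure_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	/* Protected buffers should never be mapped into non-secure userspace. */
	return -EPERM;
}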

> > On Tue, Aug 02, 2022 at 11:58:41AM +0200, Olivier Masse wrote:
> > > add Linaro secure heap bindings: linaro,secure-heap
> > > use genalloc to allocate/free buffer from buffer pool.
> > > buffer pool info is from dts.
> > > use sg_table instore the allocated memory info, the length of
> > > sg_table is 1.
> > > implement secure_heap_buf_ops to implement buffer share in
> > > difference device:
> > > 1. Userspace passes this fd to all drivers it wants this buffer
> > > to share with: First the filedescriptor is converted to a &dma_buf
> > > using
> > > dma_buf_get(). Then the buffer is attached to the device using
> > > dma_buf_attach().
> > > 2. Once the buffer is attached to all devices userspace can
> > > initiate DMA
> > > access to the shared buffer. In the kernel this is done by calling
> > > dma_buf_map_attachment()
> > > 3. get sg_table with dma_buf_map_attachment in difference device.
> > > 
> > > Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
> > > ---
> > >  drivers/dma-buf/heaps/Kconfig       |  21 +-
> > >  drivers/dma-buf/heaps/Makefile      |   1 +
> > >  drivers/dma-buf/heaps/secure_heap.c | 588
> > > ++++++++++++++++++++++++++++
> > >  3 files changed, 606 insertions(+), 4 deletions(-)
> > >  create mode 100644 drivers/dma-buf/heaps/secure_heap.c
> > > 
> > > diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-
> > > buf/heaps/Kconfig
> > > index 6a33193a7b3e..b2406932192e 100644
> > > --- a/drivers/dma-buf/heaps/Kconfig
> > > +++ b/drivers/dma-buf/heaps/Kconfig
> > > @@ -1,8 +1,12 @@
> > > -config DMABUF_HEAPS_DEFERRED_FREE
> > > -     tristate
> > > +menuconfig DMABUF_HEAPS_DEFERRED_FREE
> > > +     bool "DMA-BUF heaps deferred-free library"
> > > +     help
> > > +       Choose this option to enable the DMA-BUF heaps deferred-
> > > free library.
> > > 
> > > -config DMABUF_HEAPS_PAGE_POOL
> > > -     tristate
> > > +menuconfig DMABUF_HEAPS_PAGE_POOL
> > > +     bool "DMA-BUF heaps page-pool library"
> > > +     help
> > > +       Choose this option to enable the DMA-BUF heaps page-pool
> > > library.
> > > 
> > >  config DMABUF_HEAPS_SYSTEM
> > >       bool "DMA-BUF System Heap"
> > > @@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
> > >            Choose this option to enable the dsp dmabuf heap. The
> > > dsp heap
> > >            is allocated by gen allocater. it's allocated according
> > > the dts.
> > >            If in doubt, say Y.
> > > +
> > > +config DMABUF_HEAPS_SECURE
> > > +     tristate "DMA-BUF Secure Heap"
> > > +     depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
> > > +     help
> > > +       Choose this option to enable the secure dmabuf heap. The
> > > secure heap
> > > +       pools are defined according to the DT. Heaps are allocated
> > > +       in the pools using gen allocater.
> > > +       If in doubt, say Y.
> > > diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-
> > > buf/heaps/Makefile
> > > index e70722ea615e..08f6aa5919d1 100644
> > > --- a/drivers/dma-buf/heaps/Makefile
> > > +++ b/drivers/dma-buf/heaps/Makefile
> > > @@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)  +=
> > > page_pool.o
> > >  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)    += system_heap.o
> > >  obj-$(CONFIG_DMABUF_HEAPS_CMA)               += cma_heap.o
> > >  obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
> > > +obj-$(CONFIG_DMABUF_HEAPS_SECURE)    += secure_heap.o
> > > diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-
> > > buf/heaps/secure_heap.c
> > > new file mode 100644
> > > index 000000000000..31aac5d050b4
> > > --- /dev/null
> > > +++ b/drivers/dma-buf/heaps/secure_heap.c
> > > @@ -0,0 +1,588 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * DMABUF secure heap exporter
> > > + *
> > > + * Copyright 2021 NXP.
> > > + */
> > > +
> > > +#include <linux/dma-buf.h>
> > > +#include <linux/dma-heap.h>
> > > +#include <linux/dma-mapping.h>
> > > +#include <linux/err.h>
> > > +#include <linux/genalloc.h>
> > > +#include <linux/highmem.h>
> > > +#include <linux/mm.h>
> > > +#include <linux/module.h>
> > > +#include <linux/of.h>
> > > +#include <linux/of_fdt.h>
> > > +#include <linux/of_reserved_mem.h>
> > > +#include <linux/scatterlist.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/vmalloc.h>
> > > +
> > > +#include "deferred-free-helper.h"
> > > +#include "page_pool.h"
> > > +
> > > +#define MAX_SECURE_HEAP 2
> > > +#define MAX_HEAP_NAME_LEN 32
> > > +
> > > +struct secure_heap_buffer {
> > > +     struct dma_heap *heap;
> > > +     struct list_head attachments;
> > > +     struct mutex lock;
> > > +     unsigned long len;
> > > +     struct sg_table sg_table;
> > > +     int vmap_cnt;
> > > +     struct deferred_freelist_item deferred_free;
> > > +     void *vaddr;
> > > +     bool uncached;
> > > +};
> > > +
> > > +struct dma_heap_attachment {
> > > +     struct device *dev;
> > > +     struct sg_table *table;
> > > +     struct list_head list;
> > > +     bool no_map;
> > > +     bool mapped;
> > > +     bool uncached;
> > > +};
> > 
> > I think dma_heap_attachment should have a more specific name,
> > otherwise it looks like some generic part of the dma_heap framework.
> 
> OK, how about secure_heap_attachment, which sounds more logical?
> 

Yes that sounds better.

> > 
> > > +
> > > +struct secure_heap_info {
> > > +     struct gen_pool *pool;
> > > +
> > > +     bool no_map;
> > > +};
> > > +
> > > +struct rmem_secure {
> > > +     phys_addr_t base;
> > > +     phys_addr_t size;
> > > +
> > > +     char name[MAX_HEAP_NAME_LEN];
> > > +
> > > +     bool no_map;
> > > +};
> > > +
> > > +static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
> > > +static unsigned int secure_data_count;
> > > +
> > > +static struct sg_table *dup_sg_table(struct sg_table *table)
> > > +{
> > > +     struct sg_table *new_table;
> > > +     int ret, i;
> > > +     struct scatterlist *sg, *new_sg;
> > > +
> > > +     new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> > > +     if (!new_table)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     ret = sg_alloc_table(new_table, table->orig_nents,
> > > GFP_KERNEL);
> > > +     if (ret) {
> > > +             kfree(new_table);
> > > +             return ERR_PTR(-ENOMEM);
> > > +     }
> > > +
> > > +     new_sg = new_table->sgl;
> > > +     for_each_sgtable_sg(table, sg, i) {
> > > +             sg_set_page(new_sg, sg_page(sg), sg->length, sg-
> > > >offset);
> > > +             new_sg->dma_address = sg->dma_address;
> > > +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> > > +             new_sg->dma_length = sg->dma_length;
> > > +#endif
> > > +             new_sg = sg_next(new_sg);
> > > +     }
> > > +
> > > +     return new_table;
> > > +}
> > > +
> > > +static int secure_heap_attach(struct dma_buf *dmabuf,
> > > +                           struct dma_buf_attachment *attachment)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     struct secure_heap_info *info = dma_heap_get_drvdata(buffer-
> > > >heap);
> > > +     struct dma_heap_attachment *a;
> > > +     struct sg_table *table;
> > > +
> > > +     a = kzalloc(sizeof(*a), GFP_KERNEL);
> > > +     if (!a)
> > > +             return -ENOMEM;
> > > +
> > > +     table = dup_sg_table(&buffer->sg_table);
> > > +     if (IS_ERR(table)) {
> > > +             kfree(a);
> > > +             return -ENOMEM;
> > > +     }
> > > +
> > > +     a->table = table;
> > > +     a->dev = attachment->dev;
> > > +     INIT_LIST_HEAD(&a->list);
> > > +     a->no_map = info->no_map;
> > > +     a->mapped = false;
> > > +     a->uncached = buffer->uncached;
> > > +     attachment->priv = a;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +     list_add(&a->list, &buffer->attachments);
> > > +     mutex_unlock(&buffer->lock);
> > > +
> > > +     return 0;
> > > +}
> > > +
> > > +static void secure_heap_detach(struct dma_buf *dmabuf,
> > > +                            struct dma_buf_attachment *attachment)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     struct dma_heap_attachment *a = attachment->priv;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +     list_del(&a->list);
> > > +     mutex_unlock(&buffer->lock);
> > > +
> > > +     sg_free_table(a->table);
> > > +     kfree(a->table);
> > > +     kfree(a);
> > > +}
> > > +
> > > +static struct sg_table *secure_heap_map_dma_buf(struct
> > > dma_buf_attachment *attachment,
> > > +                                             enum
> > > dma_data_direction direction)
> > > +{
> > > +     struct dma_heap_attachment *a = attachment->priv;
> > > +     struct sg_table *table = a->table;
> > > +     int attr = 0;
> > > +     int ret;
> > > +
> > > +     if (!a->no_map) {
> > 
> > This looks strange - you're expecting this driver to be used on
> > regions with no-map set, but if no-map _is_ set, you don't allow the
> > dma_buf to get mapped to any devices. Doesn't that mean that these
> > buffers can never actually be used?
> 
> If no-map is not set, map_dma_buf maps the buffer.
> 

no-map prevents the memory region from being added to the kernel's
linear mapping. Irrespective of no-map, map_dma_buf is needed to map
a buffer to a device, this should be orthogonal to no-map, and is
definitely required for devices to be able to use these buffers.

> > 
> > > +             if (a->uncached)
> > > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > > +
> > 
> > If the CPU can never touch these buffers, is cached vs uncached
> > meaningful?
> 
> Indeed, but as dma_buf_ops.map_dma_buf is mandatory, this flag as well
> as no-map were introduced to manage uncached mapped buffers.
> 

Uncached mapping where? I think we're agreed that these buffers can
never be mapped for (non-trusted) CPU access, therefore (non-trusted)
CPU cache maintenance doesn't apply. Devices may still have cached
mappings, but it's up to device drivers to manage that.

> To simplify everything, secure-heap could get rid of the no-map and
> uncached flags and set an is_protected flag in dma_buf?
> 

I think that sounds better, yeah.

> 
> > If the TEE touches the buffers from the CPU then perhaps the TEE
> > would
> > need to do cache maintenance, but I'd expect that to be managed in
> > the
> > TEE.
> 
> Yes, if needed, cache maintenance should be done in the TA.
> 
> > 
> > > +             ret = dma_map_sgtable(attachment->dev, table,
> > > direction, attr);
> > > +             if (ret)
> > > +                     return ERR_PTR(ret);
> > > +
> > > +             a->mapped = true;
> > > +     }
> > > +
> > > +     return table;
> > > +}
> > > +
> > > +static void secure_heap_unmap_dma_buf(struct dma_buf_attachment
> > > *attachment,
> > > +                                   struct sg_table *table,
> > > +                                   enum dma_data_direction
> > > direction)
> > > +{
> > > +     struct dma_heap_attachment *a = attachment->priv;
> > > +     int attr = 0;
> > > +
> > > +     if (!a->no_map) {
> > > +             if (a->uncached)
> > > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > > +
> > > +             a->mapped = false;
> > > +             dma_unmap_sgtable(attachment->dev, table, direction,
> > > attr);
> > > +     }
> > > +}
> > > +
> > > +static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf
> > > *dmabuf,
> > > +                                             enum
> > > dma_data_direction direction)
> > 
> > If the firewall is preventing CPU accesses, then shouldn't
> > begin_cpu_access and end_cpu_access either fail or be a no-op?
> 
> true, both of them are optional and could be removed.
> 
> > 
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     struct dma_heap_attachment *a;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +
> > > +     if (buffer->vmap_cnt)
> > > +             invalidate_kernel_vmap_range(buffer->vaddr, buffer-
> > > >len);
> > > +
> > > +     if (!buffer->uncached) {
> > > +             list_for_each_entry(a, &buffer->attachments, list) {
> > > +                     if (!a->mapped)
> > > +                             continue;
> > > +                     dma_sync_sgtable_for_cpu(a->dev, a->table,
> > > direction);
> > > +             }
> > > +     }
> > > +     mutex_unlock(&buffer->lock);
> > > +
> > > +     return 0;
> > > +}
> > > +
> > > +static int secure_heap_dma_buf_end_cpu_access(struct dma_buf
> > > *dmabuf,
> > > +                                           enum dma_data_direction
> > > direction)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     struct dma_heap_attachment *a;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +
> > > +     if (buffer->vmap_cnt)
> > > +             flush_kernel_vmap_range(buffer->vaddr, buffer->len);
> > > +
> > > +     if (!buffer->uncached) {
> > > +             list_for_each_entry(a, &buffer->attachments, list) {
> > > +                     if (!a->mapped)
> > > +                             continue;
> > > +                     dma_sync_sgtable_for_device(a->dev, a->table, 
> > > direction);
> > > +             }
> > > +     }
> > > +     mutex_unlock(&buffer->lock);
> > > +
> > > +     return 0;
> > > +}
> > > +
> > > +static int secure_heap_mmap(struct dma_buf *dmabuf, struct
> > > vm_area_struct *vma)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     struct sg_table *table = &buffer->sg_table;
> > > +     unsigned long addr = vma->vm_start;
> > > +     struct sg_page_iter piter;
> > > +     int ret;
> > > +
> > > +     if (buffer->uncached)
> > > +             vma->vm_page_prot = pgprot_writecombine(vma-
> > > >vm_page_prot);
> > > +
> > > +     for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
> > > +             struct page *page = sg_page_iter_page(&piter);
> > > +
> > > +             ret = remap_pfn_range(vma, addr, page_to_pfn(page),
> > > PAGE_SIZE,
> > > +                                   vma->vm_page_prot);
> > 
> > If the CPU can't touch these buffers, what would they be mapped to
> > userspace for?
> 
> Again, let's remove this optional op.
> 
> > 
> > > +             if (ret)
> > > +                     return ret;
> > > +             addr += PAGE_SIZE;
> > > +     }
> > > +     return 0;
> > > +}
> > > +
> > > +static void *secure_heap_do_vmap(struct secure_heap_buffer
> > > *buffer)
> > > +{
> > > +     struct sg_table *table = &buffer->sg_table;
> > > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > > +     struct page **pages = vmalloc(sizeof(struct page *) *
> > > npages);
> > > +     struct page **tmp = pages;
> > > +     struct sg_page_iter piter;
> > > +     pgprot_t pgprot = PAGE_KERNEL;
> > > +     void *vaddr;
> > > +
> > > +     if (!pages)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     if (buffer->uncached)
> > > +             pgprot = pgprot_writecombine(PAGE_KERNEL);
> > > +
> > > +     for_each_sgtable_page(table, &piter, 0) {
> > > +             WARN_ON(tmp - pages >= npages);
> > > +             *tmp++ = sg_page_iter_page(&piter);
> > > +     }
> > > +
> > > +     vaddr = vmap(pages, npages, VM_MAP, pgprot);
> > > +     vfree(pages);
> > 
> > Similar to above, if the CPU can't touch these buffers, what would be
> > the point of mapping them to the kernel?
> 
> indeed, useless code.
> 
> > 
> > > +
> > > +     if (!vaddr)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     return vaddr;
> > > +}
> > > +
> > > +static int secure_heap_vmap(struct dma_buf *dmabuf, struct
> > > dma_buf_map *map)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     void *vaddr;
> > > +     int ret = 0;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +     if (buffer->vmap_cnt) {
> > > +             buffer->vmap_cnt++;
> > > +             goto out;
> > > +     }
> > > +
> > > +     vaddr = secure_heap_do_vmap(buffer);
> > > +     if (IS_ERR(vaddr)) {
> > > +             ret = PTR_ERR(vaddr);
> > > +             goto out;
> > > +     }
> > > +
> > > +     buffer->vaddr = vaddr;
> > > +     buffer->vmap_cnt++;
> > > +     dma_buf_map_set_vaddr(map, buffer->vaddr);
> > > +out:
> > > +     mutex_unlock(&buffer->lock);
> > > +
> > > +     return ret;
> > > +}
> > > +
> > > +static void secure_heap_vunmap(struct dma_buf *dmabuf, struct
> > > dma_buf_map *map)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +
> > > +     mutex_lock(&buffer->lock);
> > > +     if (!--buffer->vmap_cnt) {
> > > +             vunmap(buffer->vaddr);
> > > +             buffer->vaddr = NULL;
> > > +     }
> > > +     mutex_unlock(&buffer->lock);
> > > +     dma_buf_map_clear(map);
> > > +}
> > > +
> > > +static void secure_heap_zero_buffer(struct secure_heap_buffer
> > > *buffer)
> > > +{
> > > +     struct sg_table *sgt = &buffer->sg_table;
> > > +     struct sg_page_iter piter;
> > > +     struct page *p;
> > > +     void *vaddr;
> > > +
> > > +     for_each_sgtable_page(sgt, &piter, 0) {
> > > +             p = sg_page_iter_page(&piter);
> > > +             vaddr = kmap_atomic(p);
> > > +             memset(vaddr, 0, PAGE_SIZE);
> > 
> > How can you do memset on the buffer if the firewall is preventing CPU
> > accesses?
> > 
> 
> Yes, this becomes useless if we set a secure flag to prevent drivers
> from mapping the allocated buffer.
> 

Wouldn't you still want the buffers to be zeroed? I think you need
a way to ask the TEE to zero them.

This also makes me wonder about the deferred free mechanism. If you
aren't zeroing then there's no need for the deferred free, and if
allocations are coming from a reserved-memory region then the shrinker
mechanism doesn't make sense because freeing up the deferred buffers
won't help relieve memory pressure.

I wonder if it would be better to have a more specialised deferred
free mechanism as part of this heap (if necessary), rather than
library-ise it as you have in patch 1.
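
For example, if zeroing moves into the TEE, release could hand the
region straight back to the pool with no deferral at all (rough sketch,
only reusing what's already in this patch):

static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
{
	struct secure_heap_buffer *buffer = dmabuf->priv;
	struct secure_heap_info *info = dma_heap_get_drvdata(buffer->heap);
	struct sg_table *table = &buffer->sg_table;
	struct scatterlist *sg;
	int i;

	for_each_sg(table->sgl, sg, table->nents, i)
		gen_pool_free(info->pool, sg_dma_address(sg), sg_dma_len(sg));

	sg_free_table(table);
	kfree(buffer);
}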

> > > +             kunmap_atomic(vaddr);
> > > +     }
> > > +}
> > > +
> > > +static void secure_heap_buf_free(struct deferred_freelist_item
> > > *item,
> > > +                              enum df_reason reason)
> > > +{
> > > +     struct secure_heap_buffer *buffer;
> > > +     struct secure_heap_info *info;
> > > +     struct sg_table *table;
> > > +     struct scatterlist *sg;
> > > +     int i;
> > > +
> > > +     buffer = container_of(item, struct secure_heap_buffer,
> > > deferred_free);
> > > +     info = dma_heap_get_drvdata(buffer->heap);
> > > +
> > > +     if (!info->no_map) {
> > > +             // Zero the buffer pages before adding back to the
> > > pool
> > > +             if (reason == DF_NORMAL)
> > > +                     secure_heap_zero_buffer(buffer);
> > > +     }
> > > +
> > > +     table = &buffer->sg_table;
> > > +     for_each_sg(table->sgl, sg, table->nents, i)
> > > +             gen_pool_free(info->pool, sg_dma_address(sg),
> > > sg_dma_len(sg));
> > > +
> > > +     sg_free_table(table);
> > > +     kfree(buffer);
> > > +}
> > > +
> > > +static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
> > > +{
> > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > > +
> > > +     deferred_free(&buffer->deferred_free, secure_heap_buf_free,
> > > npages);
> > > +}
> > > +
> > > +static const struct dma_buf_ops secure_heap_buf_ops = {
> > > +     .attach = secure_heap_attach,
> > > +     .detach = secure_heap_detach,
> > > +     .map_dma_buf = secure_heap_map_dma_buf,
> > > +     .unmap_dma_buf = secure_heap_unmap_dma_buf,
> > > +     .begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
> > > +     .end_cpu_access = secure_heap_dma_buf_end_cpu_access,
> > > +     .mmap = secure_heap_mmap,
> > > +     .vmap = secure_heap_vmap,
> > > +     .vunmap = secure_heap_vunmap,
> > > +     .release = secure_heap_dma_buf_release,
> > > +};
> > > +
> > > +static struct dma_buf *secure_heap_do_allocate(struct dma_heap
> > > *heap,
> > > +                                            unsigned long len,
> > > +                                            unsigned long
> > > fd_flags,
> > > +                                            unsigned long
> > > heap_flags,
> > > +                                            bool uncached)
> > > +{
> > > +     struct secure_heap_buffer *buffer;
> > > +     struct secure_heap_info *info = dma_heap_get_drvdata(heap);
> > > +     DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> > > +     unsigned long size = roundup(len, PAGE_SIZE);
> > > +     struct dma_buf *dmabuf;
> > > +     struct sg_table *table;
> > > +     int ret = -ENOMEM;
> > > +     unsigned long phy_addr;
> > > +
> > > +     buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> > > +     if (!buffer)
> > > +             return ERR_PTR(-ENOMEM);
> > > +
> > > +     INIT_LIST_HEAD(&buffer->attachments);
> > > +     mutex_init(&buffer->lock);
> > > +     buffer->heap = heap;
> > > +     buffer->len = size;
> > > +     buffer->uncached = uncached;
> > > +
> > > +     phy_addr = gen_pool_alloc(info->pool, size);
> > > +     if (!phy_addr)
> > > +             goto free_buffer;
> > > +
> > > +     table = &buffer->sg_table;
> > > +     if (sg_alloc_table(table, 1, GFP_KERNEL))
> > > +             goto free_pool;
> > > +
> > > +     sg_set_page(table->sgl, phys_to_page(phy_addr), size, 0);
> > > +     sg_dma_address(table->sgl) = phy_addr;
> > > +     sg_dma_len(table->sgl) = size;
> > > +
> > > +     /* create the dmabuf */
> > > +     exp_info.exp_name = dma_heap_get_name(heap);
> > > +     exp_info.ops = &secure_heap_buf_ops;
> > > +     exp_info.size = buffer->len;
> > > +     exp_info.flags = fd_flags;
> > > +     exp_info.priv = buffer;
> > > +     dmabuf = dma_buf_export(&exp_info);
> > > +     if (IS_ERR(dmabuf)) {
> > > +             ret = PTR_ERR(dmabuf);
> > > +             goto free_pages;
> > > +     }
> > > +
> > > +     return dmabuf;
> > > +
> > > +free_pages:
> > > +     sg_free_table(table);
> > > +
> > > +free_pool:
> > > +     gen_pool_free(info->pool, phy_addr, size);
> > > +
> > > +free_buffer:
> > > +     mutex_destroy(&buffer->lock);
> > > +     kfree(buffer);
> > > +
> > > +     return ERR_PTR(ret);
> > > +}
> > > +
> > > +static struct dma_buf *secure_heap_allocate(struct dma_heap *heap,
> > > +                                         unsigned long len,
> > > +                                         unsigned long fd_flags,
> > > +                                         unsigned long heap_flags)
> > > +{
> > > +     // use uncache buffer here by default
> > > +     return secure_heap_do_allocate(heap, len, fd_flags,
> > > heap_flags, true);
> > > +     // use cache buffer
> > > +     // return secure_heap_do_allocate(heap, len, fd_flags,
> > > heap_flags, false);
> > > +}
> > > +
> > > +static const struct dma_heap_ops secure_heap_ops = {
> > > +     .allocate = secure_heap_allocate,
> > > +};
> > > +
> > > +static int secure_heap_add(struct rmem_secure *rmem)
> > > +{
> > > +     struct dma_heap *secure_heap;
> > > +     struct dma_heap_export_info exp_info;
> > > +     struct secure_heap_info *info = NULL;
> > > +     struct gen_pool *pool = NULL;
> > > +     int ret = -EINVAL;
> > > +
> > > +     if (rmem->base == 0 || rmem->size == 0) {
> > > +             pr_err("secure_data base or size is not correct\n");
> > > +             goto error;
> > > +     }
> > > +
> > > +     info = kzalloc(sizeof(*info), GFP_KERNEL);
> > > +     if (!info) {
> > > +             pr_err("dmabuf info allocation failed\n");
> > > +             ret = -ENOMEM;
> > > +             goto error;
> > > +     }
> > > +
> > > +     pool = gen_pool_create(PAGE_SHIFT, -1);
> > > +     if (!pool) {
> > > +             pr_err("can't create gen pool\n");
> > > +             ret = -ENOMEM;
> > > +             goto error;
> > > +     }
> > > +
> > > +     if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
> > > +             pr_err("failed to add memory into pool\n");
> > > +             ret = -ENOMEM;
> > > +             goto error;
> > > +     }
> > > +
> > > +     info->pool = pool;
> > > +     info->no_map = rmem->no_map;
> > 
> > This kind of heap probably can't work if the region doesn't have
> > no-map, so I think it would make sense to enforce that no_map is set
> > (or ignore regions without no-map in DT).
> 
> if no-map is not set, secure-heap could be used as a heap with
> dynamically protected buffer from the TEE.

That would need a whole bunch more code to call in to the TEE to apply
the protection. So hypothetically yes, that's true, but not without
more code than what you have here.
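
To give an idea of the sort of code I mean, a "protect this range" call
through the in-kernel TEE client API would look roughly like this
(untested sketch; the TA command ID is made up, and you would still need
the usual tee_client_open_context()/tee_client_open_session() plumbing
to obtain ctx and session):

#include <linux/tee_drv.h>

#define TA_CMD_PROTECT_RANGE	0x1	/* hypothetical TA command */

static int secure_heap_protect_range(struct tee_context *ctx, u32 session,
				     phys_addr_t base, size_t size)
{
	struct tee_ioctl_invoke_arg inv = { };
	struct tee_param param[4] = { };
	int ret;

	inv.func = TA_CMD_PROTECT_RANGE;
	inv.session = session;
	inv.num_params = 4;

	/* tell the TA which physical range to firewall */
	param[0].attr = TEE_IOCTL_PARAM_ATTR_TYPE_VALUE_INPUT;
	param[0].u.value.a = base;
	param[0].u.value.b = size;

	ret = tee_client_invoke_func(ctx, &inv, param);
	if (ret < 0 || inv.ret != 0)
		return -EIO;

	return 0;
}

A similar invoke would be needed to have the TA zero a range when a
buffer is freed, which would also answer the zeroing question above.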

> but I agree that this is adding too much complexity and could be
> simplified a lot without this mapping consideration.
> 
> then no-map is probably not the perfect term to describe this heap.
> is_secure or is_protected would be better ?

You mean in device-tree? no-map has a well-defined meaning; and if
this memory region shouldn't be added to the kernel's linear map
then it should be tagged with no-map.

Whether there should also be some DT property indicating that the
region is protected and can't be accessed normally: I don't have
much of an opinion on that.

Thanks,
-Brian

> 
> > 
> > Cheers,
> > -Brian
> > 
> > > +
> > > +     exp_info.name = rmem->name;
> > > +     exp_info.ops = &secure_heap_ops;
> > > +     exp_info.priv = info;
> > > +
> > > +     secure_heap = dma_heap_add(&exp_info);
> > > +     if (IS_ERR(secure_heap)) {
> > > +             pr_err("dmabuf secure heap allocation failed\n");
> > > +             ret = PTR_ERR(secure_heap);
> > > +             goto error;
> > > +     }
> > > +
> > > +     return 0;
> > > +
> > > +error:
> > > +     kfree(info);
> > > +     if (pool)
> > > +             gen_pool_destroy(pool);
> > > +
> > > +     return ret;
> > > +}
> > > +
> > > +static int secure_heap_create(void)
> > > +{
> > > +     unsigned int i;
> > > +     int ret;
> > > +
> > > +     for (i = 0; i < secure_data_count; i++) {
> > > +             ret = secure_heap_add(&secure_data[i]);
> > > +             if (ret)
> > > +                     return ret;
> > > +     }
> > > +     return 0;
> > > +}
> > > +
> > > +static int rmem_secure_heap_device_init(struct reserved_mem *rmem,
> > > +                                      struct device *dev)
> > > +{
> > > +     dev_set_drvdata(dev, rmem);
> > > +     return 0;
> > > +}
> > > +
> > > +static void rmem_secure_heap_device_release(struct reserved_mem
> > > *rmem,
> > > +                                      struct device *dev)
> > > +{
> > > +     dev_set_drvdata(dev, NULL);
> > > +}
> > > +
> > > +static const struct reserved_mem_ops rmem_dma_ops = {
> > > +     .device_init    = rmem_secure_heap_device_init,
> > > +     .device_release = rmem_secure_heap_device_release,
> > > +};
> > > +
> > > +static int __init rmem_secure_heap_setup(struct reserved_mem
> > > *rmem)
> > > +{
> > > +     if (secure_data_count < MAX_SECURE_HEAP) {
> > > +             int name_len = 0;
> > > +             char *s = rmem->name;
> > > +
> > > +             secure_data[secure_data_count].base = rmem->base;
> > > +             secure_data[secure_data_count].size = rmem->size;
> > > +             secure_data[secure_data_count].no_map =
> > > +                     (of_get_flat_dt_prop(rmem->fdt_node, "no-
> > > map", NULL) != NULL);
> > > +
> > > +             while (name_len < MAX_HEAP_NAME_LEN) {
> > > +                     if ((*s == '@') || (*s == '\0'))
> > > +                             break;
> > > +                     name_len++;
> > > +                     s++;
> > > +             }
> > > +             if (name_len == MAX_HEAP_NAME_LEN)
> > > +                     name_len--;
> > > +
> > > +             strncpy(secure_data[secure_data_count].name,
> > > rmem->name, name_len);
> > > +
> > > +             rmem->ops = &rmem_dma_ops;
> > > +             pr_info("Reserved memory: DMA buf secure pool %s at
> > > %pa, size %ld MiB\n",
> > > +                     secure_data[secure_data_count].name,
> > > +                     &rmem->base, (unsigned long)rmem->size /
> > > SZ_1M);
> > > +
> > > +             secure_data_count++;
> > > +             return 0;
> > > +     }
> > > +     WARN_ONCE(1, "Cannot handle more than %u secure heaps\n",
> > > MAX_SECURE_HEAP);
> > > +     return -EINVAL;
> > > +}
> > > +
> > > +RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap",
> > > rmem_secure_heap_setup);
> > > +
> > > +module_init(secure_heap_create);
> > > +MODULE_LICENSE("GPL v2");
> > > --
> > > 2.25.0
> > > 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool
  2022-08-02  9:58 ` [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool Olivier Masse
@ 2022-08-03 12:40   ` Brian Starkey
  0 siblings, 0 replies; 13+ messages in thread
From: Brian Starkey @ 2022-08-03 12:40 UTC (permalink / raw)
  To: Olivier Masse
  Cc: sumit.semwal, benjamin.gaignard, christian.koenig, linux-media,
	dri-devel, linaro-mm-sig, linux-kernel, clement.faure, nd

Hi Olivier,

On Tue, Aug 02, 2022 at 11:58:40AM +0200, Olivier Masse wrote:
> From: John Stultz <john.stultz@linaro.org>
> 
> This patch adds a simple shrinker controlled page pool to the
> dmabuf heaps subsystem.
> 
> This replaces the use of the networking page_pool, over concerns
> that the lack of a shrinker for that implementation may cause
> additional low-memory kills
> 
> TODO: Take another pass at trying to unify this w/ the ttm pool
> 
> Thoughts and feedback would be greatly appreciated!

Did I miss something, or is this not actually used anywhere?

Thanks,
-Brian

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [EXT] Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-03 12:37       ` Brian Starkey
@ 2022-08-05 14:06         ` Olivier Masse
  0 siblings, 0 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-05 14:06 UTC (permalink / raw)
  To: brian.starkey
  Cc: sumit.semwal, linux-kernel, linaro-mm-sig, christian.koenig,
	linux-media, nd, Clément Faure, dri-devel,
	benjamin.gaignard

Hi Brian,

Sorry for pushing all the patches again, but I've done some cleanup
based on your comments.

secure-heap buffers are now protected by default and no CPU mapping
can be done.
deferred-free is no longer needed.
The unused page pool code has been removed as well.
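
With that, the dma_buf_ops boil down to something like this (roughly),
with CPU mappings simply refused:

static int secure_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
{
	/* the buffer is firewalled, the CPU must not map it */
	return -EPERM;
}

static const struct dma_buf_ops secure_heap_buf_ops = {
	.attach = secure_heap_attach,
	.detach = secure_heap_detach,
	.map_dma_buf = secure_heap_map_dma_buf,
	.unmap_dma_buf = secure_heap_unmap_dma_buf,
	.mmap = secure_heap_mmap,
	.release = secure_heap_dma_buf_release,
};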

Best regards,
Olivier

On Wed, 2022-08-03 at 13:37 +0100, Brian Starkey wrote:
> Caution: EXT Email
> 
> Hi,
> 
> On Wed, Aug 03, 2022 at 11:07:54AM +0000, Olivier Masse wrote:
> > Hi Brian,
> > 
> > Thanks for your comments, please find my reply below.
> > 
> > On Tue, 2022-08-02 at 15:39 +0100, Brian Starkey wrote:
> > > Caution: EXT Email
> > > 
> > > Hi Olivier,
> > > 
> > > Some comments below as I mentioned off-list.
> > > 
> > > One additional point: some devices need to know if a buffer is
> > > protected, so that they can configure registers appropriately to
> > > make
> > > use of that protected buffer. There was previously a discussion
> > > about
> > > adding a flag to a dma_buf to indicate that it is allocated from
> > > protected memory[1].
> > > 
> > > [1]
> > > 
> > > https://lists.freedesktop.org/archives/dri-devel/2019-September/238059.html
> > > 
> > > 
> > 
> > Interesting, could we introduce an is_protected 1-bit flag into struct
> > dma_buf ?
> 
> That was the earlier proposal, yeah.
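
Something along those lines, then (sketch only, naming and placement to
be discussed)?

	struct dma_buf {
		...
		/* buffer lives in protected memory, no CPU access allowed */
		bool is_protected;
	};

with the exporter (e.g. this secure heap) setting it at export time, and
dma_buf_mmap()/dma_buf_vmap() refusing to map whenever it is set.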
> 
> > struct dma_buf_ops.map_dma_buf and struct dma_buf_ops.unmap_dma_buf
> > could become optional for such buffer ?
> > 
> 
> map_dma_buf and unmap_dma_buf are needed to give devices access to
> the
> dma-buf, I don't think they should become optional.
> 
> Mapping to the CPU maybe should be optional/disallowed for protected
> buffers.
> 
> > > On Tue, Aug 02, 2022 at 11:58:41AM +0200, Olivier Masse wrote:
> > > > add Linaro secure heap bindings: linaro,secure-heap
> > > > use genalloc to allocate/free buffers from the buffer pool.
> > > > buffer pool info comes from the dts.
> > > > use an sg_table to store the allocated memory info; the length of
> > > > the sg_table is 1.
> > > > implement secure_heap_buf_ops to share the buffer between
> > > > different devices:
> > > > 1. Userspace passes this fd to all drivers it wants this buffer
> > > > to share with: first the file descriptor is converted to a
> > > > &dma_buf using dma_buf_get(). Then the buffer is attached to the
> > > > device using dma_buf_attach().
> > > > 2. Once the buffer is attached to all devices, userspace can
> > > > initiate DMA access to the shared buffer. In the kernel this is
> > > > done by calling dma_buf_map_attachment().
> > > > 3. Get the sg_table with dma_buf_map_attachment() in the
> > > > different devices.
> > > > 
> > > > Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
> > > > ---
> > > >  drivers/dma-buf/heaps/Kconfig       |  21 +-
> > > >  drivers/dma-buf/heaps/Makefile      |   1 +
> > > >  drivers/dma-buf/heaps/secure_heap.c | 588
> > > > ++++++++++++++++++++++++++++
> > > >  3 files changed, 606 insertions(+), 4 deletions(-)
> > > >  create mode 100644 drivers/dma-buf/heaps/secure_heap.c
> > > > 
> > > > diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-
> > > > buf/heaps/Kconfig
> > > > index 6a33193a7b3e..b2406932192e 100644
> > > > --- a/drivers/dma-buf/heaps/Kconfig
> > > > +++ b/drivers/dma-buf/heaps/Kconfig
> > > > @@ -1,8 +1,12 @@
> > > > -config DMABUF_HEAPS_DEFERRED_FREE
> > > > -     tristate
> > > > +menuconfig DMABUF_HEAPS_DEFERRED_FREE
> > > > +     bool "DMA-BUF heaps deferred-free library"
> > > > +     help
> > > > +       Choose this option to enable the DMA-BUF heaps
> > > > deferred-
> > > > free library.
> > > > 
> > > > -config DMABUF_HEAPS_PAGE_POOL
> > > > -     tristate
> > > > +menuconfig DMABUF_HEAPS_PAGE_POOL
> > > > +     bool "DMA-BUF heaps page-pool library"
> > > > +     help
> > > > +       Choose this option to enable the DMA-BUF heaps page-
> > > > pool
> > > > library.
> > > > 
> > > >  config DMABUF_HEAPS_SYSTEM
> > > >       bool "DMA-BUF System Heap"
> > > > @@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
> > > >            Choose this option to enable the dsp dmabuf heap.
> > > > The
> > > > dsp heap
> > > >            is allocated by gen allocater. it's allocated
> > > > according
> > > > the dts.
> > > >            If in doubt, say Y.
> > > > +
> > > > +config DMABUF_HEAPS_SECURE
> > > > +     tristate "DMA-BUF Secure Heap"
> > > > +     depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
> > > > +     help
> > > > +       Choose this option to enable the secure dmabuf heap.
> > > > The
> > > > secure heap
> > > > +       pools are defined according to the DT. Heaps are
> > > > allocated
> > > > +       in the pools using gen allocator.
> > > > +       If in doubt, say Y.
> > > > diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-
> > > > buf/heaps/Makefile
> > > > index e70722ea615e..08f6aa5919d1 100644
> > > > --- a/drivers/dma-buf/heaps/Makefile
> > > > +++ b/drivers/dma-buf/heaps/Makefile
> > > > @@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)  +=
> > > > page_pool.o
> > > >  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)    += system_heap.o
> > > >  obj-$(CONFIG_DMABUF_HEAPS_CMA)               += cma_heap.o
> > > >  obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
> > > > +obj-$(CONFIG_DMABUF_HEAPS_SECURE)    += secure_heap.o
> > > > diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-
> > > > buf/heaps/secure_heap.c
> > > > new file mode 100644
> > > > index 000000000000..31aac5d050b4
> > > > --- /dev/null
> > > > +++ b/drivers/dma-buf/heaps/secure_heap.c
> > > > @@ -0,0 +1,588 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * DMABUF secure heap exporter
> > > > + *
> > > > + * Copyright 2021 NXP.
> > > > + */
> > > > +
> > > > +#include <linux/dma-buf.h>
> > > > +#include <linux/dma-heap.h>
> > > > +#include <linux/dma-mapping.h>
> > > > +#include <linux/err.h>
> > > > +#include <linux/genalloc.h>
> > > > +#include <linux/highmem.h>
> > > > +#include <linux/mm.h>
> > > > +#include <linux/module.h>
> > > > +#include <linux/of.h>
> > > > +#include <linux/of_fdt.h>
> > > > +#include <linux/of_reserved_mem.h>
> > > > +#include <linux/scatterlist.h>
> > > > +#include <linux/slab.h>
> > > > +#include <linux/vmalloc.h>
> > > > +
> > > > +#include "deferred-free-helper.h"
> > > > +#include "page_pool.h"
> > > > +
> > > > +#define MAX_SECURE_HEAP 2
> > > > +#define MAX_HEAP_NAME_LEN 32
> > > > +
> > > > +struct secure_heap_buffer {
> > > > +     struct dma_heap *heap;
> > > > +     struct list_head attachments;
> > > > +     struct mutex lock;
> > > > +     unsigned long len;
> > > > +     struct sg_table sg_table;
> > > > +     int vmap_cnt;
> > > > +     struct deferred_freelist_item deferred_free;
> > > > +     void *vaddr;
> > > > +     bool uncached;
> > > > +};
> > > > +
> > > > +struct dma_heap_attachment {
> > > > +     struct device *dev;
> > > > +     struct sg_table *table;
> > > > +     struct list_head list;
> > > > +     bool no_map;
> > > > +     bool mapped;
> > > > +     bool uncached;
> > > > +};
> > > 
> > > I think dma_heap_attachment should have a more specific name,
> > > otherwise it looks like some generic part of the dma_heap
> > > framework.
> > 
> > ok, how about secure_heap_attachment, which sounds more logical ?
> > 
> 
> Yes that sounds better.
> 
> > > 
> > > > +
> > > > +struct secure_heap_info {
> > > > +     struct gen_pool *pool;
> > > > +
> > > > +     bool no_map;
> > > > +};
> > > > +
> > > > +struct rmem_secure {
> > > > +     phys_addr_t base;
> > > > +     phys_addr_t size;
> > > > +
> > > > +     char name[MAX_HEAP_NAME_LEN];
> > > > +
> > > > +     bool no_map;
> > > > +};
> > > > +
> > > > +static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
> > > > +static unsigned int secure_data_count;
> > > > +
> > > > +static struct sg_table *dup_sg_table(struct sg_table *table)
> > > > +{
> > > > +     struct sg_table *new_table;
> > > > +     int ret, i;
> > > > +     struct scatterlist *sg, *new_sg;
> > > > +
> > > > +     new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> > > > +     if (!new_table)
> > > > +             return ERR_PTR(-ENOMEM);
> > > > +
> > > > +     ret = sg_alloc_table(new_table, table->orig_nents,
> > > > GFP_KERNEL);
> > > > +     if (ret) {
> > > > +             kfree(new_table);
> > > > +             return ERR_PTR(-ENOMEM);
> > > > +     }
> > > > +
> > > > +     new_sg = new_table->sgl;
> > > > +     for_each_sgtable_sg(table, sg, i) {
> > > > +             sg_set_page(new_sg, sg_page(sg), sg->length,
> > > > sg->offset);
> > > > +             new_sg->dma_address = sg->dma_address;
> > > > +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> > > > +             new_sg->dma_length = sg->dma_length;
> > > > +#endif
> > > > +             new_sg = sg_next(new_sg);
> > > > +     }
> > > > +
> > > > +     return new_table;
> > > > +}
> > > > +
> > > > +static int secure_heap_attach(struct dma_buf *dmabuf,
> > > > +                           struct dma_buf_attachment
> > > > *attachment)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     struct secure_heap_info *info =
> > > > dma_heap_get_drvdata(buffer->heap);
> > > > +     struct dma_heap_attachment *a;
> > > > +     struct sg_table *table;
> > > > +
> > > > +     a = kzalloc(sizeof(*a), GFP_KERNEL);
> > > > +     if (!a)
> > > > +             return -ENOMEM;
> > > > +
> > > > +     table = dup_sg_table(&buffer->sg_table);
> > > > +     if (IS_ERR(table)) {
> > > > +             kfree(a);
> > > > +             return -ENOMEM;
> > > > +     }
> > > > +
> > > > +     a->table = table;
> > > > +     a->dev = attachment->dev;
> > > > +     INIT_LIST_HEAD(&a->list);
> > > > +     a->no_map = info->no_map;
> > > > +     a->mapped = false;
> > > > +     a->uncached = buffer->uncached;
> > > > +     attachment->priv = a;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +     list_add(&a->list, &buffer->attachments);
> > > > +     mutex_unlock(&buffer->lock);
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static void secure_heap_detach(struct dma_buf *dmabuf,
> > > > +                            struct dma_buf_attachment
> > > > *attachment)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     struct dma_heap_attachment *a = attachment->priv;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +     list_del(&a->list);
> > > > +     mutex_unlock(&buffer->lock);
> > > > +
> > > > +     sg_free_table(a->table);
> > > > +     kfree(a->table);
> > > > +     kfree(a);
> > > > +}
> > > > +
> > > > +static struct sg_table *secure_heap_map_dma_buf(struct
> > > > dma_buf_attachment *attachment,
> > > > +                                             enum
> > > > dma_data_direction direction)
> > > > +{
> > > > +     struct dma_heap_attachment *a = attachment->priv;
> > > > +     struct sg_table *table = a->table;
> > > > +     int attr = 0;
> > > > +     int ret;
> > > > +
> > > > +     if (!a->no_map) {
> > > 
> > > This looks strange - you're expecting this driver to be used on
> > > regions with no-map set, but if no-map _is_ set, you don't allow
> > > the
> > > dma_buf to get mapped to any devices. Doesn't that mean that
> > > these
> > > buffers can never actually be used?
> > 
> > if no-map is not set, map_dma_buf is mapping the buffer.
> > 
> 
> no-map prevents the memory region from being added to the kernel's
> linear mapping. Irrespective of no-map, map_dma_buf is needed to map
> a buffer to a device, this should be orthogonal to no-map, and is
> definitely required for devices to be able to use these buffers.
> 
> > > 
> > > > +             if (a->uncached)
> > > > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > > > +
> > > 
> > > If the CPU can never touch these buffers, is cached vs uncached
> > > meaningful?
> > 
> > indeed, but as dma_buf_ops.map_dma_buf is mandatory, this flag as well
> > as no-map were introduced to manage uncached mapped buffers.
> > 
> 
> Uncached mapping where? I think we're agreed that these buffers can
> never be mapped for (non-trusted) CPU access, therefore (non-trusted)
> CPU cache maintenance doesn't apply. Devices may still have cached
> mappings, but it's up to device drivers to manage that.
> 
> > to simplify everything, secure-heap could get rid of no-map and
> > uncached flags and set an is_protected flag in dma_buf ?
> > 
> 
> I think that sounds better, yeah.
> 
> > 
> > > If the TEE touches the buffers from the CPU then perhaps the TEE
> > > would
> > > need to do cache maintenance, but I'd expect that to be managed
> > > in
> > > the
> > > TEE.
> > 
> > yes, if needed cache maintenance should be done in TA.
> > 
> > > 
> > > > +             ret = dma_map_sgtable(attachment->dev, table,
> > > > direction, attr);
> > > > +             if (ret)
> > > > +                     return ERR_PTR(ret);
> > > > +
> > > > +             a->mapped = true;
> > > > +     }
> > > > +
> > > > +     return table;
> > > > +}
> > > > +
> > > > +static void secure_heap_unmap_dma_buf(struct
> > > > dma_buf_attachment
> > > > *attachment,
> > > > +                                   struct sg_table *table,
> > > > +                                   enum dma_data_direction
> > > > direction)
> > > > +{
> > > > +     struct dma_heap_attachment *a = attachment->priv;
> > > > +     int attr = 0;
> > > > +
> > > > +     if (!a->no_map) {
> > > > +             if (a->uncached)
> > > > +                     attr = DMA_ATTR_SKIP_CPU_SYNC;
> > > > +
> > > > +             a->mapped = false;
> > > > +             dma_unmap_sgtable(attachment->dev, table,
> > > > direction,
> > > > attr);
> > > > +     }
> > > > +}
> > > > +
> > > > +static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf
> > > > *dmabuf,
> > > > +                                             enum
> > > > dma_data_direction direction)
> > > 
> > > If the firewall is preventing CPU accesses, then shouldn't
> > > begin_cpu_access and end_cpu_access either fail or be a no-op?
> > 
> > true, both of them are optional and could be removed.
> > 
> > > 
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     struct dma_heap_attachment *a;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +
> > > > +     if (buffer->vmap_cnt)
> > > > +             invalidate_kernel_vmap_range(buffer->vaddr,
> > > > buffer->len);
> > > > +
> > > > +     if (!buffer->uncached) {
> > > > +             list_for_each_entry(a, &buffer->attachments,
> > > > list) {
> > > > +                     if (!a->mapped)
> > > > +                             continue;
> > > > +                     dma_sync_sgtable_for_cpu(a->dev, a->table,
> > > > direction);
> > > > +             }
> > > > +     }
> > > > +     mutex_unlock(&buffer->lock);
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static int secure_heap_dma_buf_end_cpu_access(struct dma_buf
> > > > *dmabuf,
> > > > +                                           enum
> > > > dma_data_direction
> > > > direction)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     struct dma_heap_attachment *a;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +
> > > > +     if (buffer->vmap_cnt)
> > > > +             flush_kernel_vmap_range(buffer->vaddr,
> > > > buffer->len);
> > > > +
> > > > +     if (!buffer->uncached) {
> > > > +             list_for_each_entry(a, &buffer->attachments,
> > > > list) {
> > > > +                     if (!a->mapped)
> > > > +                             continue;
> > > > +                     dma_sync_sgtable_for_device(a->dev, a->table,
> > > > direction);
> > > > +             }
> > > > +     }
> > > > +     mutex_unlock(&buffer->lock);
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static int secure_heap_mmap(struct dma_buf *dmabuf, struct
> > > > vm_area_struct *vma)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     struct sg_table *table = &buffer->sg_table;
> > > > +     unsigned long addr = vma->vm_start;
> > > > +     struct sg_page_iter piter;
> > > > +     int ret;
> > > > +
> > > > +     if (buffer->uncached)
> > > > +             vma->vm_page_prot =
> > > > pgprot_writecombine(vma->vm_page_prot);
> > > > +
> > > > +     for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
> > > > +             struct page *page = sg_page_iter_page(&piter);
> > > > +
> > > > +             ret = remap_pfn_range(vma, addr,
> > > > page_to_pfn(page),
> > > > PAGE_SIZE,
> > > > +                                   vma->vm_page_prot);
> > > 
> > > If the CPU can't touch these buffers, what would they be mapped
> > > to
> > > userspace for?
> > 
> > again, let's remove this optional ops.
> > 
> > > 
> > > > +             if (ret)
> > > > +                     return ret;
> > > > +             addr += PAGE_SIZE;
> > > > +     }
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static void *secure_heap_do_vmap(struct secure_heap_buffer
> > > > *buffer)
> > > > +{
> > > > +     struct sg_table *table = &buffer->sg_table;
> > > > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > > > +     struct page **pages = vmalloc(sizeof(struct page *) *
> > > > npages);
> > > > +     struct page **tmp = pages;
> > > > +     struct sg_page_iter piter;
> > > > +     pgprot_t pgprot = PAGE_KERNEL;
> > > > +     void *vaddr;
> > > > +
> > > > +     if (!pages)
> > > > +             return ERR_PTR(-ENOMEM);
> > > > +
> > > > +     if (buffer->uncached)
> > > > +             pgprot = pgprot_writecombine(PAGE_KERNEL);
> > > > +
> > > > +     for_each_sgtable_page(table, &piter, 0) {
> > > > +             WARN_ON(tmp - pages >= npages);
> > > > +             *tmp++ = sg_page_iter_page(&piter);
> > > > +     }
> > > > +
> > > > +     vaddr = vmap(pages, npages, VM_MAP, pgprot);
> > > > +     vfree(pages);
> > > 
> > > Similar to above, if the CPU can't touch these buffers, what
> > > would be
> > > the point of mapping them to the kernel?
> > 
> > indeed, useless code.
> > 
> > > 
> > > > +
> > > > +     if (!vaddr)
> > > > +             return ERR_PTR(-ENOMEM);
> > > > +
> > > > +     return vaddr;
> > > > +}
> > > > +
> > > > +static int secure_heap_vmap(struct dma_buf *dmabuf, struct
> > > > dma_buf_map *map)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     void *vaddr;
> > > > +     int ret = 0;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +     if (buffer->vmap_cnt) {
> > > > +             buffer->vmap_cnt++;
> > > > +             goto out;
> > > > +     }
> > > > +
> > > > +     vaddr = secure_heap_do_vmap(buffer);
> > > > +     if (IS_ERR(vaddr)) {
> > > > +             ret = PTR_ERR(vaddr);
> > > > +             goto out;
> > > > +     }
> > > > +
> > > > +     buffer->vaddr = vaddr;
> > > > +     buffer->vmap_cnt++;
> > > > +     dma_buf_map_set_vaddr(map, buffer->vaddr);
> > > > +out:
> > > > +     mutex_unlock(&buffer->lock);
> > > > +
> > > > +     return ret;
> > > > +}
> > > > +
> > > > +static void secure_heap_vunmap(struct dma_buf *dmabuf, struct
> > > > dma_buf_map *map)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +
> > > > +     mutex_lock(&buffer->lock);
> > > > +     if (!--buffer->vmap_cnt) {
> > > > +             vunmap(buffer->vaddr);
> > > > +             buffer->vaddr = NULL;
> > > > +     }
> > > > +     mutex_unlock(&buffer->lock);
> > > > +     dma_buf_map_clear(map);
> > > > +}
> > > > +
> > > > +static void secure_heap_zero_buffer(struct secure_heap_buffer
> > > > *buffer)
> > > > +{
> > > > +     struct sg_table *sgt = &buffer->sg_table;
> > > > +     struct sg_page_iter piter;
> > > > +     struct page *p;
> > > > +     void *vaddr;
> > > > +
> > > > +     for_each_sgtable_page(sgt, &piter, 0) {
> > > > +             p = sg_page_iter_page(&piter);
> > > > +             vaddr = kmap_atomic(p);
> > > > +             memset(vaddr, 0, PAGE_SIZE);
> > > 
> > > How can you do memset on the buffer if the firewall is preventing
> > > CPU
> > > accesses?
> > > 
> > 
> > yes, it becomes useless if we set a secure flag to prevent drivers from
> > mapping the allocated buffer.
> > 
> 
> Wouldn't you still want the buffers to be zeroed? I think you need
> a way to ask the TEE to zero them.
> 
> This also makes me wonder about the deferred free mechanism. If you
> aren't zeroing then there's no need for the deferred free, and if
> allocations are coming from a reserved-memory region then the
> shrinker
> mechanism doesn't make sense because freeing up the deferred buffers
> won't help relieve memory pressure.
> 
> I wonder if it would be better to have a more specialised deferred
> free mechanism as part of this heap (if necessary), rather than
> library-ise it as you have in patch 1.
> 
> > > > +             kunmap_atomic(vaddr);
> > > > +     }
> > > > +}
> > > > +
> > > > +static void secure_heap_buf_free(struct deferred_freelist_item
> > > > *item,
> > > > +                              enum df_reason reason)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer;
> > > > +     struct secure_heap_info *info;
> > > > +     struct sg_table *table;
> > > > +     struct scatterlist *sg;
> > > > +     int i;
> > > > +
> > > > +     buffer = container_of(item, struct secure_heap_buffer,
> > > > deferred_free);
> > > > +     info = dma_heap_get_drvdata(buffer->heap);
> > > > +
> > > > +     if (!info->no_map) {
> > > > +             // Zero the buffer pages before adding back to
> > > > the
> > > > pool
> > > > +             if (reason == DF_NORMAL)
> > > > +                     secure_heap_zero_buffer(buffer);
> > > > +     }
> > > > +
> > > > +     table = &buffer->sg_table;
> > > > +     for_each_sg(table->sgl, sg, table->nents, i)
> > > > +             gen_pool_free(info->pool, sg_dma_address(sg),
> > > > sg_dma_len(sg));
> > > > +
> > > > +     sg_free_table(table);
> > > > +     kfree(buffer);
> > > > +}
> > > > +
> > > > +static void secure_heap_dma_buf_release(struct dma_buf
> > > > *dmabuf)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer = dmabuf->priv;
> > > > +     int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> > > > +
> > > > +     deferred_free(&buffer->deferred_free,
> > > > secure_heap_buf_free,
> > > > npages);
> > > > +}
> > > > +
> > > > +static const struct dma_buf_ops secure_heap_buf_ops = {
> > > > +     .attach = secure_heap_attach,
> > > > +     .detach = secure_heap_detach,
> > > > +     .map_dma_buf = secure_heap_map_dma_buf,
> > > > +     .unmap_dma_buf = secure_heap_unmap_dma_buf,
> > > > +     .begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
> > > > +     .end_cpu_access = secure_heap_dma_buf_end_cpu_access,
> > > > +     .mmap = secure_heap_mmap,
> > > > +     .vmap = secure_heap_vmap,
> > > > +     .vunmap = secure_heap_vunmap,
> > > > +     .release = secure_heap_dma_buf_release,
> > > > +};
> > > > +
> > > > +static struct dma_buf *secure_heap_do_allocate(struct dma_heap
> > > > *heap,
> > > > +                                            unsigned long len,
> > > > +                                            unsigned long
> > > > fd_flags,
> > > > +                                            unsigned long
> > > > heap_flags,
> > > > +                                            bool uncached)
> > > > +{
> > > > +     struct secure_heap_buffer *buffer;
> > > > +     struct secure_heap_info *info =
> > > > dma_heap_get_drvdata(heap);
> > > > +     DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> > > > +     unsigned long size = roundup(len, PAGE_SIZE);
> > > > +     struct dma_buf *dmabuf;
> > > > +     struct sg_table *table;
> > > > +     int ret = -ENOMEM;
> > > > +     unsigned long phy_addr;
> > > > +
> > > > +     buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> > > > +     if (!buffer)
> > > > +             return ERR_PTR(-ENOMEM);
> > > > +
> > > > +     INIT_LIST_HEAD(&buffer->attachments);
> > > > +     mutex_init(&buffer->lock);
> > > > +     buffer->heap = heap;
> > > > +     buffer->len = size;
> > > > +     buffer->uncached = uncached;
> > > > +
> > > > +     phy_addr = gen_pool_alloc(info->pool, size);
> > > > +     if (!phy_addr)
> > > > +             goto free_buffer;
> > > > +
> > > > +     table = &buffer->sg_table;
> > > > +     if (sg_alloc_table(table, 1, GFP_KERNEL))
> > > > +             goto free_pool;
> > > > +
> > > > +     sg_set_page(table->sgl, phys_to_page(phy_addr), size, 0);
> > > > +     sg_dma_address(table->sgl) = phy_addr;
> > > > +     sg_dma_len(table->sgl) = size;
> > > > +
> > > > +     /* create the dmabuf */
> > > > +     exp_info.exp_name = dma_heap_get_name(heap);
> > > > +     exp_info.ops = &secure_heap_buf_ops;
> > > > +     exp_info.size = buffer->len;
> > > > +     exp_info.flags = fd_flags;
> > > > +     exp_info.priv = buffer;
> > > > +     dmabuf = dma_buf_export(&exp_info);
> > > > +     if (IS_ERR(dmabuf)) {
> > > > +             ret = PTR_ERR(dmabuf);
> > > > +             goto free_pages;
> > > > +     }
> > > > +
> > > > +     return dmabuf;
> > > > +
> > > > +free_pages:
> > > > +     sg_free_table(table);
> > > > +
> > > > +free_pool:
> > > > +     gen_pool_free(info->pool, phy_addr, size);
> > > > +
> > > > +free_buffer:
> > > > +     mutex_destroy(&buffer->lock);
> > > > +     kfree(buffer);
> > > > +
> > > > +     return ERR_PTR(ret);
> > > > +}
> > > > +
> > > > +static struct dma_buf *secure_heap_allocate(struct dma_heap
> > > > *heap,
> > > > +                                         unsigned long len,
> > > > +                                         unsigned long
> > > > fd_flags,
> > > > +                                         unsigned long
> > > > heap_flags)
> > > > +{
> > > > +     // use uncache buffer here by default
> > > > +     return secure_heap_do_allocate(heap, len, fd_flags,
> > > > heap_flags, true);
> > > > +     // use cache buffer
> > > > +     // return secure_heap_do_allocate(heap, len, fd_flags,
> > > > heap_flags, false);
> > > > +}
> > > > +
> > > > +static const struct dma_heap_ops secure_heap_ops = {
> > > > +     .allocate = secure_heap_allocate,
> > > > +};
> > > > +
> > > > +static int secure_heap_add(struct rmem_secure *rmem)
> > > > +{
> > > > +     struct dma_heap *secure_heap;
> > > > +     struct dma_heap_export_info exp_info;
> > > > +     struct secure_heap_info *info = NULL;
> > > > +     struct gen_pool *pool = NULL;
> > > > +     int ret = -EINVAL;
> > > > +
> > > > +     if (rmem->base == 0 || rmem->size == 0) {
> > > > +             pr_err("secure_data base or size is not
> > > > correct\n");
> > > > +             goto error;
> > > > +     }
> > > > +
> > > > +     info = kzalloc(sizeof(*info), GFP_KERNEL);
> > > > +     if (!info) {
> > > > +             pr_err("dmabuf info allocation failed\n");
> > > > +             ret = -ENOMEM;
> > > > +             goto error;
> > > > +     }
> > > > +
> > > > +     pool = gen_pool_create(PAGE_SHIFT, -1);
> > > > +     if (!pool) {
> > > > +             pr_err("can't create gen pool\n");
> > > > +             ret = -ENOMEM;
> > > > +             goto error;
> > > > +     }
> > > > +
> > > > +     if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
> > > > +             pr_err("failed to add memory into pool\n");
> > > > +             ret = -ENOMEM;
> > > > +             goto error;
> > > > +     }
> > > > +
> > > > +     info->pool = pool;
> > > > +     info->no_map = rmem->no_map;
> > > 
> > > This kind of heap probably can't work if the region doesn't have
> > > no-map, so I think it would make sense to enforce that no_map is
> > > set
> > > (or ignore regions without no-map in DT).
> > 
> > if no-map is not set, secure-heap could be used as a heap with
> > dynamically protected buffer from the TEE.
> 
> That would need a whole bunch more code to call in to the TEE to
> apply
> the protection. So hypothetically yes, that's true, but not without
> more code than what you have here.
> 
> > but I agree that this is adding too much complexity and could be
> > simplified a lot without this mapping consideration.
> > 
> > then no-map is probably not the perfect term to describe this heap.
> > is_secure or is_protected would be better ?
> 
> You mean in device-tree? no-map has a well-defined meaning; and if
> this memory region shouldn't be added to the kernel's linear map
> then it should be tagged with no-map.
> 
> Whether there should also be some DT property indicating that the
> region is protected and can't be accessed normally: I don't have
> much of an opinion on that.
> 
> Thanks,
> -Brian
> 
> > 
> > > 
> > > Cheers,
> > > -Brian
> > > 
> > > > +
> > > > +     exp_info.name = rmem->name;
> > > > +     exp_info.ops = &secure_heap_ops;
> > > > +     exp_info.priv = info;
> > > > +
> > > > +     secure_heap = dma_heap_add(&exp_info);
> > > > +     if (IS_ERR(secure_heap)) {
> > > > +             pr_err("dmabuf secure heap allocation failed\n");
> > > > +             ret = PTR_ERR(secure_heap);
> > > > +             goto error;
> > > > +     }
> > > > +
> > > > +     return 0;
> > > > +
> > > > +error:
> > > > +     kfree(info);
> > > > +     if (pool)
> > > > +             gen_pool_destroy(pool);
> > > > +
> > > > +     return ret;
> > > > +}
> > > > +
> > > > +static int secure_heap_create(void)
> > > > +{
> > > > +     unsigned int i;
> > > > +     int ret;
> > > > +
> > > > +     for (i = 0; i < secure_data_count; i++) {
> > > > +             ret = secure_heap_add(&secure_data[i]);
> > > > +             if (ret)
> > > > +                     return ret;
> > > > +     }
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static int rmem_secure_heap_device_init(struct reserved_mem
> > > > *rmem,
> > > > +                                      struct device *dev)
> > > > +{
> > > > +     dev_set_drvdata(dev, rmem);
> > > > +     return 0;
> > > > +}
> > > > +
> > > > +static void rmem_secure_heap_device_release(struct
> > > > reserved_mem
> > > > *rmem,
> > > > +                                      struct device *dev)
> > > > +{
> > > > +     dev_set_drvdata(dev, NULL);
> > > > +}
> > > > +
> > > > +static const struct reserved_mem_ops rmem_dma_ops = {
> > > > +     .device_init    = rmem_secure_heap_device_init,
> > > > +     .device_release = rmem_secure_heap_device_release,
> > > > +};
> > > > +
> > > > +static int __init rmem_secure_heap_setup(struct reserved_mem
> > > > *rmem)
> > > > +{
> > > > +     if (secure_data_count < MAX_SECURE_HEAP) {
> > > > +             int name_len = 0;
> > > > +             char *s = rmem->name;
> > > > +
> > > > +             secure_data[secure_data_count].base = rmem->base;
> > > > +             secure_data[secure_data_count].size = rmem->size;
> > > > +             secure_data[secure_data_count].no_map =
> > > > +                     (of_get_flat_dt_prop(rmem->fdt_node, "no-
> > > > map", NULL) != NULL);
> > > > +
> > > > +             while (name_len < MAX_HEAP_NAME_LEN) {
> > > > +                     if ((*s == '@') || (*s == '\0'))
> > > > +                             break;
> > > > +                     name_len++;
> > > > +                     s++;
> > > > +             }
> > > > +             if (name_len == MAX_HEAP_NAME_LEN)
> > > > +                     name_len--;
> > > > +
> > > > +             strncpy(secure_data[secure_data_count].name,
> > > > rmem->name, name_len);
> > > > +
> > > > +             rmem->ops = &rmem_dma_ops;
> > > > +             pr_info("Reserved memory: DMA buf secure pool %s
> > > > at
> > > > %pa, size %ld MiB\n",
> > > > +                     secure_data[secure_data_count].name,
> > > > +                     &rmem->base, (unsigned long)rmem->size /
> > > > SZ_1M);
> > > > +
> > > > +             secure_data_count++;
> > > > +             return 0;
> > > > +     }
> > > > +     WARN_ONCE(1, "Cannot handle more than %u secure heaps\n",
> > > > MAX_SECURE_HEAP);
> > > > +     return -EINVAL;
> > > > +}
> > > > +
> > > > +RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap",
> > > > rmem_secure_heap_setup);
> > > > +
> > > > +module_init(secure_heap_create);
> > > > +MODULE_LICENSE("GPL v2");
> > > > --
> > > > 2.25.0
> > > > 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-02  9:58 ` [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support Olivier Masse
  2022-08-02 14:39   ` Brian Starkey
@ 2022-08-16 13:31   ` Nicolas Dufresne
  2022-08-16 15:01     ` [EXT] " Olivier Masse
  1 sibling, 1 reply; 13+ messages in thread
From: Nicolas Dufresne @ 2022-08-16 13:31 UTC (permalink / raw)
  To: Olivier Masse, sumit.semwal, benjamin.gaignard, Brian.Starkey,
	christian.koenig, linux-media, dri-devel, linaro-mm-sig,
	linux-kernel
  Cc: clement.faure

Hi,

On Tuesday 02 August 2022 at 11:58 +0200, Olivier Masse wrote:
> add Linaro secure heap bindings: linaro,secure-heap

Just out of curiosity, how is this specific to the Linaro OPTEE OS? Shouldn't it
be "de-linaro-ified" somehow?

regards,
Nicolas

> use genalloc to allocate/free buffers from the buffer pool.
> buffer pool info comes from the dts.
> use an sg_table to store the allocated memory info; the length of the sg_table is 1.
> implement secure_heap_buf_ops to share the buffer between different devices:
> 1. Userspace passes this fd to all drivers it wants this buffer
> to share with: first the file descriptor is converted to a &dma_buf using
> dma_buf_get(). Then the buffer is attached to the device using dma_buf_attach().
> 2. Once the buffer is attached to all devices, userspace can initiate DMA
> access to the shared buffer. In the kernel this is done by calling dma_buf_map_attachment().
> 3. Get the sg_table with dma_buf_map_attachment() in the different devices.
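
As a side note, seen from an importing driver the three steps above are
the standard dma-buf import sequence, roughly (sketch; the helper name,
dev and fd are just placeholders):

#include <linux/dma-buf.h>

static int import_secure_buf(struct device *dev, int fd,
			     struct sg_table **sgt_out)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = dma_buf_get(fd);			/* step 1 */
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	attach = dma_buf_attach(dmabuf, dev);		/* step 1, per device */
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL); /* steps 2-3 */
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}

	*sgt_out = sgt;
	return 0;
}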
> 
> Signed-off-by: Olivier Masse <olivier.masse@nxp.com>
> ---
>  drivers/dma-buf/heaps/Kconfig       |  21 +-
>  drivers/dma-buf/heaps/Makefile      |   1 +
>  drivers/dma-buf/heaps/secure_heap.c | 588 ++++++++++++++++++++++++++++
>  3 files changed, 606 insertions(+), 4 deletions(-)
>  create mode 100644 drivers/dma-buf/heaps/secure_heap.c
> 
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 6a33193a7b3e..b2406932192e 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -1,8 +1,12 @@
> -config DMABUF_HEAPS_DEFERRED_FREE
> -	tristate
> +menuconfig DMABUF_HEAPS_DEFERRED_FREE
> +	bool "DMA-BUF heaps deferred-free library"
> +	help
> +	  Choose this option to enable the DMA-BUF heaps deferred-free library.
>  
> -config DMABUF_HEAPS_PAGE_POOL
> -	tristate
> +menuconfig DMABUF_HEAPS_PAGE_POOL
> +	bool "DMA-BUF heaps page-pool library"
> +	help
> +	  Choose this option to enable the DMA-BUF heaps page-pool library.
>  
>  config DMABUF_HEAPS_SYSTEM
>  	bool "DMA-BUF System Heap"
> @@ -26,3 +30,12 @@ config DMABUF_HEAPS_DSP
>            Choose this option to enable the dsp dmabuf heap. The dsp heap
>            is allocated by gen allocater. it's allocated according the dts.
>            If in doubt, say Y.
> +
> +config DMABUF_HEAPS_SECURE
> +	tristate "DMA-BUF Secure Heap"
> +	depends on DMABUF_HEAPS && DMABUF_HEAPS_DEFERRED_FREE
> +	help
> +	  Choose this option to enable the secure dmabuf heap. The secure heap
> +	  pools are defined according to the DT. Heaps are allocated
> +	  in the pools using gen allocator.
> +	  If in doubt, say Y.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index e70722ea615e..08f6aa5919d1 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_DMABUF_HEAPS_PAGE_POOL)	+= page_pool.o
>  obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
>  obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
>  obj-$(CONFIG_DMABUF_HEAPS_DSP)          += dsp_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_SECURE)	+= secure_heap.o
> diff --git a/drivers/dma-buf/heaps/secure_heap.c b/drivers/dma-buf/heaps/secure_heap.c
> new file mode 100644
> index 000000000000..31aac5d050b4
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/secure_heap.c
> @@ -0,0 +1,588 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF secure heap exporter
> + *
> + * Copyright 2021 NXP.
> + */
> +
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/err.h>
> +#include <linux/genalloc.h>
> +#include <linux/highmem.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/scatterlist.h>
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>
> +
> +#include "deferred-free-helper.h"
> +#include "page_pool.h"
> +
> +#define MAX_SECURE_HEAP 2
> +#define MAX_HEAP_NAME_LEN 32
> +
> +struct secure_heap_buffer {
> +	struct dma_heap *heap;
> +	struct list_head attachments;
> +	struct mutex lock;
> +	unsigned long len;
> +	struct sg_table sg_table;
> +	int vmap_cnt;
> +	struct deferred_freelist_item deferred_free;
> +	void *vaddr;
> +	bool uncached;
> +};
> +
> +struct dma_heap_attachment {
> +	struct device *dev;
> +	struct sg_table *table;
> +	struct list_head list;
> +	bool no_map;
> +	bool mapped;
> +	bool uncached;
> +};
> +
> +struct secure_heap_info {
> +	struct gen_pool *pool;
> +
> +	bool no_map;
> +};
> +
> +struct rmem_secure {
> +	phys_addr_t base;
> +	phys_addr_t size;
> +
> +	char name[MAX_HEAP_NAME_LEN];
> +
> +	bool no_map;
> +};
> +
> +static struct rmem_secure secure_data[MAX_SECURE_HEAP] = {0};
> +static unsigned int secure_data_count;
> +
> +static struct sg_table *dup_sg_table(struct sg_table *table)
> +{
> +	struct sg_table *new_table;
> +	int ret, i;
> +	struct scatterlist *sg, *new_sg;
> +
> +	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
> +	if (!new_table)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
> +	if (ret) {
> +		kfree(new_table);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	new_sg = new_table->sgl;
> +	for_each_sgtable_sg(table, sg, i) {
> +		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
> +		new_sg->dma_address = sg->dma_address;
> +#ifdef CONFIG_NEED_SG_DMA_LENGTH
> +		new_sg->dma_length = sg->dma_length;
> +#endif
> +		new_sg = sg_next(new_sg);
> +	}
> +
> +	return new_table;
> +}
> +
> +static int secure_heap_attach(struct dma_buf *dmabuf,
> +			      struct dma_buf_attachment *attachment)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct secure_heap_info *info = dma_heap_get_drvdata(buffer->heap);
> +	struct dma_heap_attachment *a;
> +	struct sg_table *table;
> +
> +	a = kzalloc(sizeof(*a), GFP_KERNEL);
> +	if (!a)
> +		return -ENOMEM;
> +
> +	table = dup_sg_table(&buffer->sg_table);
> +	if (IS_ERR(table)) {
> +		kfree(a);
> +		return -ENOMEM;
> +	}
> +
> +	a->table = table;
> +	a->dev = attachment->dev;
> +	INIT_LIST_HEAD(&a->list);
> +	a->no_map = info->no_map;
> +	a->mapped = false;
> +	a->uncached = buffer->uncached;
> +	attachment->priv = a;
> +
> +	mutex_lock(&buffer->lock);
> +	list_add(&a->list, &buffer->attachments);
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static void secure_heap_detach(struct dma_buf *dmabuf,
> +			       struct dma_buf_attachment *attachment)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a = attachment->priv;
> +
> +	mutex_lock(&buffer->lock);
> +	list_del(&a->list);
> +	mutex_unlock(&buffer->lock);
> +
> +	sg_free_table(a->table);
> +	kfree(a->table);
> +	kfree(a);
> +}
> +
> +static struct sg_table *secure_heap_map_dma_buf(struct dma_buf_attachment *attachment,
> +						enum dma_data_direction direction)
> +{
> +	struct dma_heap_attachment *a = attachment->priv;
> +	struct sg_table *table = a->table;
> +	int attr = 0;
> +	int ret;
> +
> +	if (!a->no_map) {
> +		if (a->uncached)
> +			attr = DMA_ATTR_SKIP_CPU_SYNC;
> +
> +		ret = dma_map_sgtable(attachment->dev, table, direction, attr);
> +		if (ret)
> +			return ERR_PTR(ret);
> +
> +		a->mapped = true;
> +	}
> +
> +	return table;
> +}
> +
> +static void secure_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> +				      struct sg_table *table,
> +				      enum dma_data_direction direction)
> +{
> +	struct dma_heap_attachment *a = attachment->priv;
> +	int attr = 0;
> +
> +	if (!a->no_map)	{
> +		if (a->uncached)
> +			attr = DMA_ATTR_SKIP_CPU_SYNC;
> +
> +		a->mapped = false;
> +		dma_unmap_sgtable(attachment->dev, table, direction, attr);
> +	}
> +}
> +
> +static int secure_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +						enum dma_data_direction direction)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +
> +	if (buffer->vmap_cnt)
> +		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
> +
> +	if (!buffer->uncached) {
> +		list_for_each_entry(a, &buffer->attachments, list) {
> +			if (!a->mapped)
> +				continue;
> +			dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
> +		}
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static int secure_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> +					      enum dma_data_direction direction)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct dma_heap_attachment *a;
> +
> +	mutex_lock(&buffer->lock);
> +
> +	if (buffer->vmap_cnt)
> +		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
> +
> +	if (!buffer->uncached) {
> +		list_for_each_entry(a, &buffer->attachments, list) {
> +			if (!a->mapped)
> +				continue;
> +			dma_sync_sgtable_for_device(a->dev, a->table, direction);
> +		}
> +	}
> +	mutex_unlock(&buffer->lock);
> +
> +	return 0;
> +}
> +
> +static int secure_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	struct sg_table *table = &buffer->sg_table;
> +	unsigned long addr = vma->vm_start;
> +	struct sg_page_iter piter;
> +	int ret;
> +
> +	if (buffer->uncached)
> +		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
> +
> +	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
> +		struct page *page = sg_page_iter_page(&piter);
> +
> +		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
> +				      vma->vm_page_prot);
> +		if (ret)
> +			return ret;
> +		addr += PAGE_SIZE;
> +	}
> +	return 0;
> +}
> +
> +static void *secure_heap_do_vmap(struct secure_heap_buffer *buffer)
> +{
> +	struct sg_table *table = &buffer->sg_table;
> +	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> +	struct page **pages = vmalloc(sizeof(struct page *) * npages);
> +	struct page **tmp = pages;
> +	struct sg_page_iter piter;
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	void *vaddr;
> +
> +	if (!pages)
> +		return ERR_PTR(-ENOMEM);
> +
> +	if (buffer->uncached)
> +		pgprot = pgprot_writecombine(PAGE_KERNEL);
> +
> +	for_each_sgtable_page(table, &piter, 0) {
> +		WARN_ON(tmp - pages >= npages);
> +		*tmp++ = sg_page_iter_page(&piter);
> +	}
> +
> +	vaddr = vmap(pages, npages, VM_MAP, pgprot);
> +	vfree(pages);
> +
> +	if (!vaddr)
> +		return ERR_PTR(-ENOMEM);
> +
> +	return vaddr;
> +}
> +
> +static int secure_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	void *vaddr;
> +	int ret = 0;
> +
> +	mutex_lock(&buffer->lock);
> +	if (buffer->vmap_cnt) {
> +		buffer->vmap_cnt++;
> +		goto out;
> +	}
> +
> +	vaddr = secure_heap_do_vmap(buffer);
> +	if (IS_ERR(vaddr)) {
> +		ret = PTR_ERR(vaddr);
> +		goto out;
> +	}
> +
> +	buffer->vaddr = vaddr;
> +	buffer->vmap_cnt++;
> +	dma_buf_map_set_vaddr(map, buffer->vaddr);
> +out:
> +	mutex_unlock(&buffer->lock);
> +
> +	return ret;
> +}
> +
> +static void secure_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +
> +	mutex_lock(&buffer->lock);
> +	if (!--buffer->vmap_cnt) {
> +		vunmap(buffer->vaddr);
> +		buffer->vaddr = NULL;
> +	}
> +	mutex_unlock(&buffer->lock);
> +	dma_buf_map_clear(map);
> +}
> +
> +static void secure_heap_zero_buffer(struct secure_heap_buffer *buffer)
> +{
> +	struct sg_table *sgt = &buffer->sg_table;
> +	struct sg_page_iter piter;
> +	struct page *p;
> +	void *vaddr;
> +
> +	for_each_sgtable_page(sgt, &piter, 0) {
> +		p = sg_page_iter_page(&piter);
> +		vaddr = kmap_atomic(p);
> +		memset(vaddr, 0, PAGE_SIZE);
> +		kunmap_atomic(vaddr);
> +	}
> +}
> +
> +static void secure_heap_buf_free(struct deferred_freelist_item *item,
> +				 enum df_reason reason)
> +{
> +	struct secure_heap_buffer *buffer;
> +	struct secure_heap_info *info;
> +	struct sg_table *table;
> +	struct scatterlist *sg;
> +	int i;
> +
> +	buffer = container_of(item, struct secure_heap_buffer, deferred_free);
> +	info = dma_heap_get_drvdata(buffer->heap);
> +
> +	if (!info->no_map) {
> +		// Zero the buffer pages before adding back to the pool
> +		if (reason == DF_NORMAL)
> +			secure_heap_zero_buffer(buffer);
> +	}
> +
> +	table = &buffer->sg_table;
> +	for_each_sg(table->sgl, sg, table->nents, i)
> +		gen_pool_free(info->pool, sg_dma_address(sg), sg_dma_len(sg));
> +
> +	sg_free_table(table);
> +	kfree(buffer);
> +}
> +
> +static void secure_heap_dma_buf_release(struct dma_buf *dmabuf)
> +{
> +	struct secure_heap_buffer *buffer = dmabuf->priv;
> +	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
> +
> +	deferred_free(&buffer->deferred_free, secure_heap_buf_free, npages);
> +}
> +
> +static const struct dma_buf_ops secure_heap_buf_ops = {
> +	.attach = secure_heap_attach,
> +	.detach = secure_heap_detach,
> +	.map_dma_buf = secure_heap_map_dma_buf,
> +	.unmap_dma_buf = secure_heap_unmap_dma_buf,
> +	.begin_cpu_access = secure_heap_dma_buf_begin_cpu_access,
> +	.end_cpu_access = secure_heap_dma_buf_end_cpu_access,
> +	.mmap = secure_heap_mmap,
> +	.vmap = secure_heap_vmap,
> +	.vunmap = secure_heap_vunmap,
> +	.release = secure_heap_dma_buf_release,
> +};
> +
> +static struct dma_buf *secure_heap_do_allocate(struct dma_heap *heap,
> +					       unsigned long len,
> +					       unsigned long fd_flags,
> +					       unsigned long heap_flags,
> +					       bool uncached)
> +{
> +	struct secure_heap_buffer *buffer;
> +	struct secure_heap_info *info = dma_heap_get_drvdata(heap);
> +	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> +	unsigned long size = roundup(len, PAGE_SIZE);
> +	struct dma_buf *dmabuf;
> +	struct sg_table *table;
> +	int ret = -ENOMEM;
> +	unsigned long phy_addr;
> +
> +	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
> +	if (!buffer)
> +		return ERR_PTR(-ENOMEM);
> +
> +	INIT_LIST_HEAD(&buffer->attachments);
> +	mutex_init(&buffer->lock);
> +	buffer->heap = heap;
> +	buffer->len = size;
> +	buffer->uncached = uncached;
> +
> +	phy_addr = gen_pool_alloc(info->pool, size);
> +	if (!phy_addr)
> +		goto free_buffer;
> +
> +	table = &buffer->sg_table;
> +	if (sg_alloc_table(table, 1, GFP_KERNEL))
> +		goto free_pool;
> +
> +	sg_set_page(table->sgl,	phys_to_page(phy_addr),	size, 0);
> +	sg_dma_address(table->sgl) = phy_addr;
> +	sg_dma_len(table->sgl) = size;
> +
> +	/* create the dmabuf */
> +	exp_info.exp_name = dma_heap_get_name(heap);
> +	exp_info.ops = &secure_heap_buf_ops;
> +	exp_info.size = buffer->len;
> +	exp_info.flags = fd_flags;
> +	exp_info.priv = buffer;
> +	dmabuf = dma_buf_export(&exp_info);
> +	if (IS_ERR(dmabuf)) {
> +		ret = PTR_ERR(dmabuf);
> +		goto free_pages;
> +	}
> +
> +	return dmabuf;
> +
> +free_pages:
> +	sg_free_table(table);
> +
> +free_pool:
> +	gen_pool_free(info->pool, phy_addr, size);
> +
> +free_buffer:
> +	mutex_destroy(&buffer->lock);
> +	kfree(buffer);
> +
> +	return ERR_PTR(ret);
> +}
> +
> +static struct dma_buf *secure_heap_allocate(struct dma_heap *heap,
> +					    unsigned long len,
> +					    unsigned long fd_flags,
> +					    unsigned long heap_flags)
> +{
> +	// use uncache buffer here by default
> +	return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, true);
> +	// use cache buffer
> +	// return secure_heap_do_allocate(heap, len, fd_flags, heap_flags, false);
> +}
> +
> +static const struct dma_heap_ops secure_heap_ops = {
> +	.allocate = secure_heap_allocate,
> +};
> +
> +static int secure_heap_add(struct rmem_secure *rmem)
> +{
> +	struct dma_heap *secure_heap;
> +	struct dma_heap_export_info exp_info;
> +	struct secure_heap_info *info = NULL;
> +	struct gen_pool *pool = NULL;
> +	int ret = -EINVAL;
> +
> +	if (rmem->base == 0 || rmem->size == 0) {
> +		pr_err("secure_data base or size is not correct\n");
> +		goto error;
> +	}
> +
> +	info = kzalloc(sizeof(*info), GFP_KERNEL);
> +	if (!info) {
> +		pr_err("dmabuf info allocation failed\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	pool = gen_pool_create(PAGE_SHIFT, -1);
> +	if (!pool) {
> +		pr_err("can't create gen pool\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	if (gen_pool_add(pool, rmem->base, rmem->size, -1) < 0) {
> +		pr_err("failed to add memory into pool\n");
> +		ret = -ENOMEM;
> +		goto error;
> +	}
> +
> +	info->pool = pool;
> +	info->no_map = rmem->no_map;
> +
> +	exp_info.name = rmem->name;
> +	exp_info.ops = &secure_heap_ops;
> +	exp_info.priv = info;
> +
> +	secure_heap = dma_heap_add(&exp_info);
> +	if (IS_ERR(secure_heap)) {
> +		pr_err("dmabuf secure heap allocation failed\n");
> +		ret = PTR_ERR(secure_heap);
> +		goto error;
> +	}
> +
> +	return 0;
> +
> +error:
> +	kfree(info);
> +	if (pool)
> +		gen_pool_destroy(pool);
> +
> +	return ret;
> +}
> +
> +static int secure_heap_create(void)
> +{
> +	unsigned int i;
> +	int ret;
> +
> +	for (i = 0; i < secure_data_count; i++) {
> +		ret = secure_heap_add(&secure_data[i]);
> +		if (ret)
> +			return ret;
> +	}
> +	return 0;
> +}
> +
> +static int rmem_secure_heap_device_init(struct reserved_mem *rmem,
> +					 struct device *dev)
> +{
> +	dev_set_drvdata(dev, rmem);
> +	return 0;
> +}
> +
> +static void rmem_secure_heap_device_release(struct reserved_mem *rmem,
> +					 struct device *dev)
> +{
> +	dev_set_drvdata(dev, NULL);
> +}
> +
> +static const struct reserved_mem_ops rmem_dma_ops = {
> +	.device_init    = rmem_secure_heap_device_init,
> +	.device_release = rmem_secure_heap_device_release,
> +};
> +
> +static int __init rmem_secure_heap_setup(struct reserved_mem *rmem)
> +{
> +	if (secure_data_count < MAX_SECURE_HEAP) {
> +		int name_len = 0;
> +		char *s = rmem->name;
> +
> +		secure_data[secure_data_count].base = rmem->base;
> +		secure_data[secure_data_count].size = rmem->size;
> +		secure_data[secure_data_count].no_map =
> +			(of_get_flat_dt_prop(rmem->fdt_node, "no-map", NULL) != NULL);
> +
> +		while (name_len < MAX_HEAP_NAME_LEN) {
> +			if ((*s == '@') || (*s == '\0'))
> +				break;
> +			name_len++;
> +			s++;
> +		}
> +		if (name_len == MAX_HEAP_NAME_LEN)
> +			name_len--;
> +
> +		strncpy(secure_data[secure_data_count].name, rmem->name, name_len);
> +
> +		rmem->ops = &rmem_dma_ops;
> +		pr_info("Reserved memory: DMA buf secure pool %s at %pa, size %ld MiB\n",
> +			secure_data[secure_data_count].name,
> +			&rmem->base, (unsigned long)rmem->size / SZ_1M);
> +
> +		secure_data_count++;
> +		return 0;
> +	}
> +	WARN_ONCE(1, "Cannot handle more than %u secure heaps\n", MAX_SECURE_HEAP);
> +	return -EINVAL;
> +}
> +
> +RESERVEDMEM_OF_DECLARE(secure_heap, "linaro,secure-heap", rmem_secure_heap_setup);
> +
> +module_init(secure_heap_create);
> +MODULE_LICENSE("GPL v2");
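
For reference, once such a heap is registered, userspace allocates from it
through the standard DMA-BUF heaps ioctl interface (linux/dma-heap.h).
A minimal sketch, assuming the heap ends up exposed as "sdp" after its
reserved-memory node (that name is illustrative only, not defined by this
patch):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/dma-heap.h>

/* Returns a dma-buf fd on success, -1 on error. */
int secure_heap_alloc(size_t len)
{
	struct dma_heap_allocation_data data;
	int heap_fd, ret;

	/* "/dev/dma_heap/sdp" is an assumption: the heap device is named
	 * after the reserved-memory node declared in the device tree. */
	heap_fd = open("/dev/dma_heap/sdp", O_RDWR | O_CLOEXEC);
	if (heap_fd < 0)
		return -1;

	memset(&data, 0, sizeof(data));
	data.len = len;
	data.fd_flags = O_RDWR | O_CLOEXEC;
	/* heap_flags must stay zero, as required by the dma-heap core. */

	ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
	close(heap_fd);
	if (ret < 0)
		return -1;

	/* data.fd is the dma-buf fd to hand to the devices that will share
	 * the buffer (imported with dma_buf_get()/dma_buf_attach() on the
	 * kernel side). */
	return data.fd;
}

The returned dma-buf fd is then shared with the consuming devices exactly
as with any other dma-buf heap.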


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [EXT] Re: [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support
  2022-08-16 13:31   ` Nicolas Dufresne
@ 2022-08-16 15:01     ` Olivier Masse
  0 siblings, 0 replies; 13+ messages in thread
From: Olivier Masse @ 2022-08-16 15:01 UTC (permalink / raw)
  To: sumit.semwal, nicolas, linux-kernel, linaro-mm-sig,
	christian.koenig, linux-media, dri-devel, Brian.Starkey,
	benjamin.gaignard
  Cc: jens.wiklander, joakim.bech, Cyrille Fleury, Clément Faure, jerome

Hi Nicolas,

Thanks for your comment; this patch series is indeed linked to OPTEE OS.
It uses the same bindings to define the Secure Data Path reserved
memory:
https://github.com/OP-TEE/optee_os/commit/eb108a04369fbfaf60c03c0e00bbe9489a761c69
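
As an illustration, the reserved memory then shows up on the Linux side
as a device-tree node along these lines (node name, base address and size
are board-specific; the values below are only an example, not taken from
this series):

	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* example values only */
		sdp@3e800000 {
			compatible = "linaro,secure-heap";
			no-map;
			reg = <0x0 0x3e800000 0x0 0x00400000>;
		};
	};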

However, I am not aware of another shared heap that would match our
needs.
For reference, this was previously done using an ION heap:
https://www.slideshare.net/linaroorg/bud17400-secure-data-path-with-optee

Best regards,
Olivier

On Tue, 2022-08-16 at 09:31 -0400, Nicolas Dufresne wrote:
> Hi,
> 
> On Tuesday, 2 August 2022 at 11:58 +0200, Olivier Masse wrote:
> > add Linaro secure heap bindings: linaro,secure-heap
> 
> Just a curiosity, how is this specific to Linaro OPTEE OS? Shouldn't
> it be "de-linaro-ified" somehow?
> 
> regards,
> Nicolas
> 
> > [...]

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-08-16 15:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-02  9:58 [PATCH 0/5] Add dma-buf secure-heap Olivier Masse
2022-08-02  9:58 ` [PATCH 1/5] ANDROID: dma-buf: heaps: Add deferred-free-helper library code Olivier Masse
2022-08-02  9:58 ` [PATCH 2/5] ANDROID: dma-buf: heaps: Add a shrinker controlled page pool Olivier Masse
2022-08-03 12:40   ` Brian Starkey
2022-08-02  9:58 ` [PATCH 3/5] dma-buf: heaps: add Linaro secure dmabuf heap support Olivier Masse
2022-08-02 14:39   ` Brian Starkey
2022-08-03 11:07     ` [EXT] " Olivier Masse
2022-08-03 12:37       ` Brian Starkey
2022-08-05 14:06         ` Olivier Masse
2022-08-16 13:31   ` Nicolas Dufresne
2022-08-16 15:01     ` [EXT] " Olivier Masse
2022-08-02  9:58 ` [PATCH 4/5] dt-bindings: reserved-memory: add linaro,secure-heap Olivier Masse
2022-08-02  9:58 ` [PATCH 5/5] plat-hikey: Add linaro,secure-heap compatible Olivier Masse

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).