* [PATCH v3 0/4] Chunk Heap Support on DMA-HEAP
@ 2021-01-13  1:21 Minchan Kim
  2021-01-13  1:21 ` [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn Minchan Kim
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-13  1:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig, Minchan Kim

This patchset introduces a new dma heap, "chunk-heap", that makes
it easy to perform bulk allocation of high-order pages.
It was created to help optimize 4K/8K HDR video playback with
secure DRM HW that protects contents in memory. The HW needs
physically contiguous memory chunks (e.g., 64K) up to several
hundred MB of memory.

To make such big high-order bulk allocations work, chunk-heap uses
a CMA area. To avoid long CMA allocation stalls on blocking pages
(e.g., pages under writeback and/or locked pages), it uses the
failfast mode of the CMA API (i.e., __GFP_NORETRY), so it keeps
looking for easily migratable pages in different pageblocks without
stalling. As a last resort, it allows blocking only if it couldn't
find available memory any other way.
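
To illustrate, the allocation flow boils down to a failfast attempt
followed by a blocking fallback (a simplified sketch of what
chunk_heap_allocate() in the last patch does, not the exact code):

	gfp_t gfp_flags = GFP_KERNEL | __GFP_NORETRY;

	/* failfast: skip over pageblocks with busy pages */
	page = cma_alloc(cma, nr_pages, order, gfp_flags);
	if (!page && (gfp_flags & __GFP_NORETRY)) {
		/* last resort: retry once in blocking mode */
		gfp_flags &= ~__GFP_NORETRY;
		page = cma_alloc(cma, nr_pages, order, gfp_flags);
	}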

The first two patches introduce the failfast mode as __GFP_NORETRY
in alloc_contig_range and allow its use from the CMA API.
The third patch introduces device tree syntax for chunk-heap to
bind a specific CMA area to it.
Finally, the last patch implements chunk-heap as a dma-buf heap.

* Since v2 -
  * introduce gfp_mask with __GFP_NORETRY on cma_alloc - mhocko
  * do not export CMA APIs - Christoph
  * use compatible string for DT instead of dma-heap specific property - Hridya

* Since v1 - https://lore.kernel.org/linux-mm/20201117181935.3613581-1-minchan@kernel.org/
  * introduce alloc_contig_mode - David
  * use default CMA instead of device tree - John

Hyesoo Yu (2):
  dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  dma-buf: heaps: add chunk heap to dmabuf heaps

Minchan Kim (2):
  mm: cma: introduce gfp flag in cma_alloc instead of no_warn
  mm: failfast mode with __GFP_NORETRY in alloc_contig_range

 .../reserved-memory/dma_heap_chunk.yaml       |  58 +++
 drivers/dma-buf/heaps/Kconfig                 |   8 +
 drivers/dma-buf/heaps/Makefile                |   1 +
 drivers/dma-buf/heaps/chunk_heap.c            | 477 ++++++++++++++++++
 drivers/dma-buf/heaps/cma_heap.c              |   2 +-
 drivers/s390/char/vmcp.c                      |   2 +-
 include/linux/cma.h                           |   2 +-
 kernel/dma/contiguous.c                       |   3 +-
 mm/cma.c                                      |  12 +-
 mm/cma_debug.c                                |   2 +-
 mm/hugetlb.c                                  |   6 +-
 mm/page_alloc.c                               |   8 +-
 mm/secretmem.c                                |   3 +-
 13 files changed, 568 insertions(+), 16 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

-- 
2.30.0.284.gd98b1dd5eaa7-goog




* [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn
  2021-01-13  1:21 [PATCH v3 0/4] Chunk Heap Support on DMA-HEAP Minchan Kim
@ 2021-01-13  1:21 ` Minchan Kim
  2021-01-20 21:08   ` Suren Baghdasaryan
  2021-01-13  1:21 ` [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range Minchan Kim
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 22+ messages in thread
From: Minchan Kim @ 2021-01-13  1:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig, Minchan Kim

The upcoming patch will introduce __GFP_NORETRY semantics
in alloc_contig_range as a failfast mode of the API.
Instead of adding an additional parameter for gfp, replace
no_warn with a gfp flag.

To keep the old behavior, it follows the rules below.

  no_warn 			gfp_flags

  false         		GFP_KERNEL
  true          		GFP_KERNEL|__GFP_NOWARN
  gfp & __GFP_NOWARN		GFP_KERNEL | (gfp & __GFP_NOWARN)
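
For example (illustrative only; the real conversions are in the
hunks below), callers change like this:

	/* before */
	page = cma_alloc(cma, count, align, false);
	page = cma_alloc(cma, count, align, true);

	/* after */
	page = cma_alloc(cma, count, align, GFP_KERNEL);
	page = cma_alloc(cma, count, align, GFP_KERNEL | __GFP_NOWARN);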

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/dma-buf/heaps/cma_heap.c |  2 +-
 drivers/s390/char/vmcp.c         |  2 +-
 include/linux/cma.h              |  2 +-
 kernel/dma/contiguous.c          |  3 ++-
 mm/cma.c                         | 12 ++++++------
 mm/cma_debug.c                   |  2 +-
 mm/hugetlb.c                     |  6 ++++--
 mm/secretmem.c                   |  3 ++-
 8 files changed, 18 insertions(+), 14 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 364fc2f3e499..0afc1907887a 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -298,7 +298,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
+	cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
 	if (!cma_pages)
 		goto free_buffer;
 
diff --git a/drivers/s390/char/vmcp.c b/drivers/s390/char/vmcp.c
index 9e066281e2d0..78f9adf56456 100644
--- a/drivers/s390/char/vmcp.c
+++ b/drivers/s390/char/vmcp.c
@@ -70,7 +70,7 @@ static void vmcp_response_alloc(struct vmcp_session *session)
 	 * anymore the system won't work anyway.
 	 */
 	if (order > 2)
-		page = cma_alloc(vmcp_cma, nr_pages, 0, false);
+		page = cma_alloc(vmcp_cma, nr_pages, 0, GFP_KERNEL);
 	if (page) {
 		session->response = (char *)page_to_phys(page);
 		session->cma_alloc = 1;
diff --git a/include/linux/cma.h b/include/linux/cma.h
index 217999c8a762..d6c02d08ddbc 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -45,7 +45,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					const char *name,
 					struct cma **res_cma);
 extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-			      bool no_warn);
+			      gfp_t gfp_mask);
 extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
 
 extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 3d63d91cba5c..552ed531c018 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -260,7 +260,8 @@ struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
+	return cma_alloc(dev_get_cma_area(dev), count, align, GFP_KERNEL |
+			(no_warn ? __GFP_NOWARN : 0));
 }
 
 /**
diff --git a/mm/cma.c b/mm/cma.c
index 0ba69cd16aeb..35053b82aedc 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
  * @cma:   Contiguous memory region for which the allocation is performed.
  * @count: Requested number of pages.
  * @align: Requested alignment of pages (in PAGE_SIZE order).
- * @no_warn: Avoid printing message about failed allocation
+ * @gfp_mask: GFP mask to use during the cma allocation.
  *
  * This function allocates part of contiguous memory on specific
  * contiguous memory area.
  */
 struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
-		       bool no_warn)
+		       gfp_t gfp_mask)
 {
 	unsigned long mask, offset;
 	unsigned long pfn = -1;
@@ -438,8 +438,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	if (!cma || !cma->count || !cma->bitmap)
 		return NULL;
 
-	pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
-		 count, align);
+	pr_debug("%s(cma %p, count %zu, align %d gfp_mask 0x%x)\n", __func__,
+			(void *)cma, count, align, gfp_mask);
 
 	if (!count)
 		return NULL;
@@ -471,7 +471,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
-				     GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
+						gfp_mask);
 
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
@@ -500,7 +500,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 			page_kasan_tag_reset(page + i);
 	}
 
-	if (ret && !no_warn) {
+	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
 			__func__, count, ret);
 		cma_debug_show_areas(cma);
diff --git a/mm/cma_debug.c b/mm/cma_debug.c
index d5bf8aa34fdc..00170c41cf81 100644
--- a/mm/cma_debug.c
+++ b/mm/cma_debug.c
@@ -137,7 +137,7 @@ static int cma_alloc_mem(struct cma *cma, int count)
 	if (!mem)
 		return -ENOMEM;
 
-	p = cma_alloc(cma, count, 0, false);
+	p = cma_alloc(cma, count, 0, GFP_KERNEL);
 	if (!p) {
 		kfree(mem);
 		return -ENOMEM;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 737b2dce19e6..695af33aa66c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1266,7 +1266,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 
 		if (hugetlb_cma[nid]) {
 			page = cma_alloc(hugetlb_cma[nid], nr_pages,
-					huge_page_order(h), true);
+					huge_page_order(h),
+					GFP_KERNEL | __GFP_NOWARN);
 			if (page)
 				return page;
 		}
@@ -1277,7 +1278,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 					continue;
 
 				page = cma_alloc(hugetlb_cma[node], nr_pages,
-						huge_page_order(h), true);
+						huge_page_order(h),
+						GFP_KERNEL | __GFP_NOWARN);
 				if (page)
 					return page;
 			}
diff --git a/mm/secretmem.c b/mm/secretmem.c
index b8a32954ac68..585d55b9f9d8 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -86,7 +86,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE,
+				GFP_KERNEL | (gfp & __GFP_NOWARN));
 	if (!page)
 		return -ENOMEM;
 
-- 
2.30.0.284.gd98b1dd5eaa7-goog




* [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
  2021-01-13  1:21 [PATCH v3 0/4] Chunk Heap Support on DMA-HEAP Minchan Kim
  2021-01-13  1:21 ` [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn Minchan Kim
@ 2021-01-13  1:21 ` Minchan Kim
  2021-01-13  8:39   ` David Hildenbrand
  2021-01-13  1:21 ` [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable Minchan Kim
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
  3 siblings, 1 reply; 22+ messages in thread
From: Minchan Kim @ 2021-01-13  1:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig, Minchan Kim

Contiguous memory allocation can be stalled due to waiting
on page writeback and/or page lock which causes unpredictable
delay. It's a unavoidable cost for the requestor to get *big*
contiguous memory but it's expensive for *small* contiguous
memory(e.g., order-4) because caller could retry the request
in diffrent range where would have easy migratable pages
without stalling.

This patch introduce __GFP_NORETRY as compaction gfp_mask in
alloc_contig_range so it will fail fast without blocking
when it encounters pages needed waitting.
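
With this, a caller that prefers failing fast over stalling can do
the following (the plain GFP_KERNEL call keeps today's synchronous
behavior):

	/* failfast: async migration, no stalls on busy pages */
	ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA,
				 GFP_KERNEL | __GFP_NORETRY);

	/* default: sync migration, up to 5 migration retries */
	if (ret == -EBUSY)
		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA,
					 GFP_KERNEL);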

Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 mm/page_alloc.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5b3923db9158..ff41ceb4db51 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 	unsigned int nr_reclaimed;
 	unsigned long pfn = start;
 	unsigned int tries = 0;
+	unsigned int max_tries = 5;
 	int ret = 0;
 	struct migration_target_control mtc = {
 		.nid = zone_to_nid(cc->zone),
 		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
 	};
 
+	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
+		max_tries = 1;
+
 	migrate_prep();
 
 	while (pfn < end || !list_empty(&cc->migratepages)) {
@@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
 				break;
 			}
 			tries = 0;
-		} else if (++tries == 5) {
+		} else if (++tries == max_tries) {
 			ret = ret < 0 ? ret : -EBUSY;
 			break;
 		}
@@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 		.nr_migratepages = 0,
 		.order = -1,
 		.zone = page_zone(pfn_to_page(start)),
-		.mode = MIGRATE_SYNC,
+		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
 		.ignore_skip_hint = true,
 		.no_set_skip_hint = true,
 		.gfp_mask = current_gfp_context(gfp_mask),
-- 
2.30.0.284.gd98b1dd5eaa7-goog




* [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  2021-01-13  1:21 [PATCH v3 0/4] Chunk Heap Support on DMA-HEAP Minchan Kim
  2021-01-13  1:21 ` [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn Minchan Kim
  2021-01-13  1:21 ` [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range Minchan Kim
@ 2021-01-13  1:21 ` Minchan Kim
  2021-01-13 15:45   ` Rob Herring
  2021-01-14 14:01   ` Rob Herring
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
  3 siblings, 2 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-13  1:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig, Minchan Kim

From: Hyesoo Yu <hyesoo.yu@samsung.com>

Document devicetree binding for chunk cma heap on dma heap framework.

The DMA chunk heap supports the bulk allocation of higher order pages.

Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Hridya Valsaraju <hridya@google.com>
Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
---
 .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
 1 file changed, 58 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml

diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
new file mode 100644
index 000000000000..3e7fed5fb006
--- /dev/null
+++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
@@ -0,0 +1,58 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
+
+description: |
+  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
+  supports bulk allocation of fixed size pages.
+
+maintainers:
+  - Hyesoo Yu <hyesoo.yu@samsung.com>
+  - John Stultz <john.stultz@linaro.org>
+  - Minchan Kim <minchan@kernel.org>
+  - Hridya Valsaraju<hridya@google.com>
+
+
+properties:
+  compatible:
+    enum:
+      - dma_heap,chunk
+
+  chunk-order:
+    description: |
+            order of pages that will get allocated from the chunk DMA heap.
+    maxItems: 1
+
+  size:
+    maxItems: 1
+
+  alignment:
+    maxItems: 1
+
+required:
+  - compatible
+  - size
+  - alignment
+  - chunk-order
+
+additionalProperties: false
+
+examples:
+  - |
+    reserved-memory {
+        #address-cells = <2>;
+        #size-cells = <1>;
+
+        chunk_memory: chunk_memory {
+            compatible = "dma_heap,chunk";
+            size = <0x3000000>;
+            alignment = <0x0 0x00010000>;
+            chunk-order = <4>;
+        };
+    };
+
+
-- 
2.30.0.284.gd98b1dd5eaa7-goog




* [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 [PATCH v3 0/4] Chunk Heap Support on DMA-HEAP Minchan Kim
                   ` (2 preceding siblings ...)
  2021-01-13  1:21 ` [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable Minchan Kim
@ 2021-01-13  1:21 ` Minchan Kim
  2021-01-13  3:11   ` kernel test robot
                     ` (4 more replies)
  3 siblings, 5 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-13  1:21 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig, Minchan Kim

From: Hyesoo Yu <hyesoo.yu@samsung.com>

This patch adds a chunk heap that allocates buffers arranged
into a list of fixed-size chunks taken from CMA.

The chunk heap driver is bound directly to a reserved_memory
node by following Rob Herring's suggestion in [1].

[1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
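
Once the heap is registered, userspace allocates from it through
the standard dma-heap ioctl. A minimal sketch (the heap name
"chunk_memory" follows the reserved-memory node in the binding
example and will differ per platform):

	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/dma-heap.h>

	/* Returns a dma-buf fd backed by fixed-size chunks, or -1. */
	static int chunk_heap_alloc_buf(size_t len)
	{
		struct dma_heap_allocation_data data = {
			.len = len,
			.fd_flags = O_RDWR | O_CLOEXEC,
		};
		int heap_fd = open("/dev/dma_heap/chunk_memory", O_RDWR);
		int ret;

		if (heap_fd < 0)
			return -1;
		ret = ioctl(heap_fd, DMA_HEAP_IOCTL_ALLOC, &data);
		close(heap_fd);
		return ret < 0 ? -1 : (int)data.fd;
	}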

Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
Signed-off-by: Hridya Valsaraju <hridya@google.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
---
 drivers/dma-buf/heaps/Kconfig      |   8 +
 drivers/dma-buf/heaps/Makefile     |   1 +
 drivers/dma-buf/heaps/chunk_heap.c | 477 +++++++++++++++++++++++++++++
 3 files changed, 486 insertions(+)
 create mode 100644 drivers/dma-buf/heaps/chunk_heap.c

diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index a5eef06c4226..6527233f52a8 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -12,3 +12,11 @@ config DMABUF_HEAPS_CMA
 	  Choose this option to enable dma-buf CMA heap. This heap is backed
 	  by the Contiguous Memory Allocator (CMA). If your system has these
 	  regions, you should say Y here.
+
+config DMABUF_HEAPS_CHUNK
+	bool "DMA-BUF CHUNK Heap"
+	depends on DMABUF_HEAPS && DMA_CMA
+	help
+	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
+	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
+	  arranged into a list of fixed size chunks taken from CMA.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index 974467791032..8faa6cfdc0c5 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_DMABUF_HEAPS_SYSTEM)	+= system_heap.o
 obj-$(CONFIG_DMABUF_HEAPS_CMA)		+= cma_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CHUNK)	+= chunk_heap.o
diff --git a/drivers/dma-buf/heaps/chunk_heap.c b/drivers/dma-buf/heaps/chunk_heap.c
new file mode 100644
index 000000000000..64f748c81e1f
--- /dev/null
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@ -0,0 +1,477 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMA-BUF chunk heap exporter
+ *
+ * Copyright (c) 2020 Samsung Electronics Co., Ltd.
+ * Author: <hyesoo.yu@samsung.com> for Samsung Electronics.
+ */
+
+#include <linux/cma.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-map-ops.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/scatterlist.h>
+#include <linux/sched/signal.h>
+#include <linux/slab.h>
+
+struct chunk_heap {
+	struct dma_heap *heap;
+	uint32_t order;
+	struct cma *cma;
+};
+
+struct chunk_heap_buffer {
+	struct chunk_heap *heap;
+	struct list_head attachments;
+	struct mutex lock;
+	struct sg_table sg_table;
+	unsigned long len;
+	int vmap_cnt;
+	void *vaddr;
+};
+
+struct chunk_heap_attachment {
+	struct device *dev;
+	struct sg_table *table;
+	struct list_head list;
+	bool mapped;
+};
+
+struct chunk_heap chunk_heaps[MAX_CMA_AREAS];
+unsigned int chunk_heap_count;
+
+static struct sg_table *dup_sg_table(struct sg_table *table)
+{
+	struct sg_table *new_table;
+	int ret, i;
+	struct scatterlist *sg, *new_sg;
+
+	new_table = kzalloc(sizeof(*new_table), GFP_KERNEL);
+	if (!new_table)
+		return ERR_PTR(-ENOMEM);
+
+	ret = sg_alloc_table(new_table, table->orig_nents, GFP_KERNEL);
+	if (ret) {
+		kfree(new_table);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	new_sg = new_table->sgl;
+	for_each_sgtable_sg(table, sg, i) {
+		sg_set_page(new_sg, sg_page(sg), sg->length, sg->offset);
+		new_sg = sg_next(new_sg);
+	}
+
+	return new_table;
+}
+
+static int chunk_heap_attach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+	struct sg_table *table;
+
+	a = kzalloc(sizeof(*a), GFP_KERNEL);
+	if (!a)
+		return -ENOMEM;
+
+	table = dup_sg_table(&buffer->sg_table);
+	if (IS_ERR(table)) {
+		kfree(a);
+		return -ENOMEM;
+	}
+
+	a->table = table;
+	a->dev = attachment->dev;
+
+	attachment->priv = a;
+
+	mutex_lock(&buffer->lock);
+	list_add(&a->list, &buffer->attachments);
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_detach(struct dma_buf *dmabuf, struct dma_buf_attachment *attachment)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	mutex_lock(&buffer->lock);
+	list_del(&a->list);
+	mutex_unlock(&buffer->lock);
+
+	sg_free_table(a->table);
+	kfree(a->table);
+	kfree(a);
+}
+
+static struct sg_table *chunk_heap_map_dma_buf(struct dma_buf_attachment *attachment,
+					       enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+	struct sg_table *table = a->table;
+	int ret;
+
+	if (a->mapped)
+		return table;
+
+	ret = dma_map_sgtable(attachment->dev, table, direction, 0);
+	if (ret)
+		return ERR_PTR(ret);
+
+	a->mapped = true;
+	return table;
+}
+
+static void chunk_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+				     struct sg_table *table,
+				     enum dma_data_direction direction)
+{
+	struct chunk_heap_attachment *a = attachment->priv;
+
+	a->mapped = false;
+	dma_unmap_sgtable(attachment->dev, table, direction, 0);
+}
+
+static int chunk_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+						enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		invalidate_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_cpu(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+					      enum dma_data_direction direction)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap_attachment *a;
+
+	mutex_lock(&buffer->lock);
+
+	if (buffer->vmap_cnt)
+		flush_kernel_vmap_range(buffer->vaddr, buffer->len);
+
+	list_for_each_entry(a, &buffer->attachments, list) {
+		if (!a->mapped)
+			continue;
+		dma_sync_sgtable_for_device(a->dev, a->table, direction);
+	}
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static int chunk_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct sg_table *table = &buffer->sg_table;
+	unsigned long addr = vma->vm_start;
+	struct sg_page_iter piter;
+	int ret;
+
+	for_each_sgtable_page(table, &piter, vma->vm_pgoff) {
+		struct page *page = sg_page_iter_page(&piter);
+
+		ret = remap_pfn_range(vma, addr, page_to_pfn(page), PAGE_SIZE,
+				      vma->vm_page_prot);
+		if (ret)
+			return ret;
+		addr += PAGE_SIZE;
+		if (addr >= vma->vm_end)
+			return 0;
+	}
+	return 0;
+}
+
+static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
+{
+	struct sg_table *table = &buffer->sg_table;
+	int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
+	struct page **pages = vmalloc(sizeof(struct page *) * npages);
+	struct page **tmp = pages;
+	struct sg_page_iter piter;
+	void *vaddr;
+
+	if (!pages)
+		return ERR_PTR(-ENOMEM);
+
+	for_each_sgtable_page(table, &piter, 0) {
+		WARN_ON(tmp - pages >= npages);
+		*tmp++ = sg_page_iter_page(&piter);
+	}
+
+	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	vfree(pages);
+
+	if (!vaddr)
+		return ERR_PTR(-ENOMEM);
+
+	return vaddr;
+}
+
+static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	void *vaddr;
+
+	mutex_lock(&buffer->lock);
+	if (buffer->vmap_cnt) {
+		vaddr = buffer->vaddr;
+	} else {
+		vaddr = chunk_heap_do_vmap(buffer);
+		if (IS_ERR(vaddr)) {
+			mutex_unlock(&buffer->lock);
+
+			return PTR_ERR(vaddr);
+		}
+		buffer->vaddr = vaddr;
+	}
+	buffer->vmap_cnt++;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	mutex_unlock(&buffer->lock);
+
+	return 0;
+}
+
+static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+
+	mutex_lock(&buffer->lock);
+	if (!--buffer->vmap_cnt) {
+		vunmap(buffer->vaddr);
+		buffer->vaddr = NULL;
+	}
+	mutex_unlock(&buffer->lock);
+}
+
+static void chunk_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+	struct chunk_heap_buffer *buffer = dmabuf->priv;
+	struct chunk_heap *chunk_heap = buffer->heap;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	int i;
+
+	table = &buffer->sg_table;
+	for_each_sgtable_sg(table, sg, i)
+		cma_release(chunk_heap->cma, sg_page(sg), 1 << chunk_heap->order);
+	sg_free_table(table);
+	kfree(buffer);
+}
+
+static const struct dma_buf_ops chunk_heap_buf_ops = {
+	.attach = chunk_heap_attach,
+	.detach = chunk_heap_detach,
+	.map_dma_buf = chunk_heap_map_dma_buf,
+	.unmap_dma_buf = chunk_heap_unmap_dma_buf,
+	.begin_cpu_access = chunk_heap_dma_buf_begin_cpu_access,
+	.end_cpu_access = chunk_heap_dma_buf_end_cpu_access,
+	.mmap = chunk_heap_mmap,
+	.vmap = chunk_heap_vmap,
+	.vunmap = chunk_heap_vunmap,
+	.release = chunk_heap_dma_buf_release,
+};
+
+static int chunk_heap_allocate(struct dma_heap *heap, unsigned long len,
+			       unsigned long fd_flags, unsigned long heap_flags)
+{
+	struct chunk_heap *chunk_heap = dma_heap_get_drvdata(heap);
+	struct chunk_heap_buffer *buffer;
+	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+	struct dma_buf *dmabuf;
+	struct sg_table *table;
+	struct scatterlist *sg;
+	struct page **pages;
+	unsigned int chunk_size = PAGE_SIZE << chunk_heap->order;
+	unsigned int count, alloced = 0;
+	unsigned int alloc_order = max_t(unsigned int, pageblock_order, chunk_heap->order);
+	unsigned int nr_chunks_per_alloc = 1 << (alloc_order - chunk_heap->order);
+	gfp_t gfp_flags = GFP_KERNEL|__GFP_NORETRY;
+	int ret = -ENOMEM;
+	pgoff_t pg;
+
+	buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
+	if (!buffer)
+		return ret;
+
+	INIT_LIST_HEAD(&buffer->attachments);
+	mutex_init(&buffer->lock);
+	buffer->heap = chunk_heap;
+	buffer->len = ALIGN(len, chunk_size);
+	count = buffer->len / chunk_size;
+
+	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
+		goto err_pages;
+
+	while (alloced < count) {
+		struct page *page;
+		int i;
+
+		while (count - alloced < nr_chunks_per_alloc) {
+			alloc_order--;
+			nr_chunks_per_alloc >>= 1;
+		}
+
+		page = cma_alloc(chunk_heap->cma, 1 << alloc_order,
+					alloc_order, gfp_flags);
+		if (!page) {
+			if (gfp_flags & __GFP_NORETRY) {
+				gfp_flags &= ~__GFP_NORETRY;
+				continue;
+			}
+			break;
+		}
+
+		for (i = 0; i < nr_chunks_per_alloc; i++, alloced++) {
+			pages[alloced] = page;
+			page += 1 << chunk_heap->order;
+		}
+	}
+
+	if (alloced < count)
+		goto err_alloc;
+
+	table = &buffer->sg_table;
+	if (sg_alloc_table(table, count, GFP_KERNEL))
+		goto err_alloc;
+
+	sg = table->sgl;
+	for (pg = 0; pg < count; pg++) {
+		sg_set_page(sg, pages[pg], chunk_size, 0);
+		sg = sg_next(sg);
+	}
+
+	exp_info.ops = &chunk_heap_buf_ops;
+	exp_info.size = buffer->len;
+	exp_info.flags = fd_flags;
+	exp_info.priv = buffer;
+	dmabuf = dma_buf_export(&exp_info);
+	if (IS_ERR(dmabuf)) {
+		ret = PTR_ERR(dmabuf);
+		goto err_export;
+	}
+	kvfree(pages);
+
+	ret = dma_buf_fd(dmabuf, fd_flags);
+	if (ret < 0) {
+		dma_buf_put(dmabuf);
+		return ret;
+	}
+
+	return 0;
+err_export:
+	sg_free_table(table);
+err_alloc:
+	for (pg = 0; pg < alloced; pg++)
+		cma_release(chunk_heap->cma, pages[pg], 1 << chunk_heap->order);
+	kvfree(pages);
+err_pages:
+	kfree(buffer);
+
+	return ret;
+}
+
+static const struct dma_heap_ops chunk_heap_ops = {
+	.allocate = chunk_heap_allocate,
+};
+
+static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
+{
+	struct dma_heap_export_info exp_info;
+
+	exp_info.name = cma_get_name(chunk_heap_info->cma);
+	exp_info.ops = &chunk_heap_ops;
+	exp_info.priv = chunk_heap_info;
+
+	chunk_heap_info->heap = dma_heap_add(&exp_info);
+	if (IS_ERR(chunk_heap_info->heap))
+		return PTR_ERR(chunk_heap_info->heap);
+
+	return 0;
+}
+
+static int __init chunk_heap_init(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < chunk_heap_count; i++)
+		register_chunk_heap(&chunk_heaps[i]);
+
+	return 0;
+}
+module_init(chunk_heap_init);
+
+#ifdef CONFIG_OF_EARLY_FLATTREE
+
+static int __init dmabuf_chunk_heap_area_init(struct reserved_mem *rmem)
+{
+	int ret;
+	struct cma *cma;
+	struct chunk_heap *chunk_heap_info;
+	const __be32 *chunk_order;
+
+	phys_addr_t align = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
+	phys_addr_t mask = align - 1;
+
+	if ((rmem->base & mask) || (rmem->size & mask)) {
+		pr_err("Incorrect alignment for CMA region\n");
+		return -EINVAL;
+	}
+
+	ret = cma_init_reserved_mem(rmem->base, rmem->size, 0, rmem->name, &cma);
+	if (ret) {
+		pr_err("Reserved memory: unable to setup CMA region\n");
+		return ret;
+	}
+
+	/* Architecture specific contiguous memory fixup. */
+	dma_contiguous_early_fixup(rmem->base, rmem->size);
+
+	chunk_heap_info = &chunk_heaps[chunk_heap_count];
+	chunk_heap_info->cma = cma;
+
+	chunk_order = of_get_flat_dt_prop(rmem->fdt_node, "chunk-order", NULL);
+
+	if (chunk_order)
+		chunk_heap_info->order = be32_to_cpu(*chunk_order);
+	else
+		chunk_heap_info->order = 4;
+
+	chunk_heap_count++;
+
+	return 0;
+}
+RESERVEDMEM_OF_DECLARE(dmabuf_chunk_heap, "dma_heap,chunk",
+		       dmabuf_chunk_heap_area_init);
+#endif
+
+MODULE_DESCRIPTION("DMA-BUF Chunk Heap");
+MODULE_LICENSE("GPL v2");
-- 
2.30.0.284.gd98b1dd5eaa7-goog




* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
@ 2021-01-13  3:11   ` kernel test robot
  2021-01-14  1:04     ` Minchan Kim
  2021-01-13  3:38   ` Randy Dunlap
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: kernel test robot @ 2021-01-13  3:11 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: kbuild-all, Linux Memory Management List, LKML, hyesoo.yu, david,
	mhocko, surenb, pullip.cho, joaodias, hridya

[-- Attachment #1: Type: text/plain, Size: 4207 bytes --]

Hi Minchan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on next-20210112]
[cannot apply to s390/features robh/for-next linux/master linus/master hnaz-linux-mm/master v5.11-rc3 v5.11-rc2 v5.11-rc1 v5.11-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
base:    df869cab4b3519d603806234861aa0a39df479c0
config: mips-allyesconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/531ebc21d3c2584784d44714e3b4f1df46b80eee
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
        git checkout 531ebc21d3c2584784d44714e3b4f1df46b80eee
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_do_vmap':
   drivers/dma-buf/heaps/chunk_heap.c:215:24: error: implicit declaration of function 'vmalloc'; did you mean 'kvmalloc'? [-Werror=implicit-function-declaration]
     215 |  struct page **pages = vmalloc(sizeof(struct page *) * npages);
         |                        ^~~~~~~
         |                        kvmalloc
>> drivers/dma-buf/heaps/chunk_heap.c:215:24: warning: initialization of 'struct page **' from 'int' makes pointer from integer without a cast [-Wint-conversion]
   drivers/dma-buf/heaps/chunk_heap.c:228:10: error: implicit declaration of function 'vmap'; did you mean 'kmap'? [-Werror=implicit-function-declaration]
     228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
         |          ^~~~
         |          kmap
   drivers/dma-buf/heaps/chunk_heap.c:228:30: error: 'VM_MAP' undeclared (first use in this function); did you mean 'VM_MTE'?
     228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
         |                              ^~~~~~
         |                              VM_MTE
   drivers/dma-buf/heaps/chunk_heap.c:228:30: note: each undeclared identifier is reported only once for each function it appears in
   drivers/dma-buf/heaps/chunk_heap.c:229:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]
     229 |  vfree(pages);
         |  ^~~~~
         |  kvfree
   drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_vunmap':
   drivers/dma-buf/heaps/chunk_heap.c:268:3: error: implicit declaration of function 'vunmap'; did you mean 'kunmap'? [-Werror=implicit-function-declaration]
     268 |   vunmap(buffer->vaddr);
         |   ^~~~~~
         |   kunmap
   cc1: some warnings being treated as errors


vim +215 drivers/dma-buf/heaps/chunk_heap.c

   210	
   211	static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
   212	{
   213		struct sg_table *table = &buffer->sg_table;
   214		int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
 > 215		struct page **pages = vmalloc(sizeof(struct page *) * npages);
   216		struct page **tmp = pages;
   217		struct sg_page_iter piter;
   218		void *vaddr;
   219	
   220		if (!pages)
   221			return ERR_PTR(-ENOMEM);
   222	
   223		for_each_sgtable_page(table, &piter, 0) {
   224			WARN_ON(tmp - pages >= npages);
   225			*tmp++ = sg_page_iter_page(&piter);
   226		}
   227	
   228		vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
   229		vfree(pages);
   230	
   231		if (!vaddr)
   232			return ERR_PTR(-ENOMEM);
   233	
   234		return vaddr;
   235	}
   236	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 70046 bytes --]


* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
  2021-01-13  3:11   ` kernel test robot
@ 2021-01-13  3:38   ` Randy Dunlap
  2021-01-14  1:04     ` Minchan Kim
  2021-01-13  6:25   ` kernel test robot
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 22+ messages in thread
From: Randy Dunlap @ 2021-01-13  3:38 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig

On 1/12/21 5:21 PM, Minchan Kim wrote:
> +config DMABUF_HEAPS_CHUNK
> +	bool "DMA-BUF CHUNK Heap"
> +	depends on DMABUF_HEAPS && DMA_CMA
> +	help
> +	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
> +	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
> +	  arranged into a list of fixed size chunks taken from CMA.

maybe:
	  are arranged into

-- 
~Randy
You can't do anything without having to do something else first.
-- Belefant's Law



* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
  2021-01-13  3:11   ` kernel test robot
  2021-01-13  3:38   ` Randy Dunlap
@ 2021-01-13  6:25   ` kernel test robot
  2021-01-19 15:51   ` Minchan Kim
  2021-01-19 18:29   ` John Stultz
  4 siblings, 0 replies; 22+ messages in thread
From: kernel test robot @ 2021-01-13  6:25 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: kbuild-all, Linux Memory Management List, LKML, hyesoo.yu, david,
	mhocko, surenb, pullip.cho, joaodias, hridya

[-- Attachment #1: Type: text/plain, Size: 5217 bytes --]

Hi Minchan,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on next-20210112]
[cannot apply to s390/features robh/for-next linux/master linus/master hnaz-linux-mm/master v5.11-rc3 v5.11-rc2 v5.11-rc1 v5.11-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
base:    df869cab4b3519d603806234861aa0a39df479c0
config: mips-allyesconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/531ebc21d3c2584784d44714e3b4f1df46b80eee
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
        git checkout 531ebc21d3c2584784d44714e3b4f1df46b80eee
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_do_vmap':
>> drivers/dma-buf/heaps/chunk_heap.c:215:24: error: implicit declaration of function 'vmalloc'; did you mean 'kvmalloc'? [-Werror=implicit-function-declaration]
     215 |  struct page **pages = vmalloc(sizeof(struct page *) * npages);
         |                        ^~~~~~~
         |                        kvmalloc
   drivers/dma-buf/heaps/chunk_heap.c:215:24: warning: initialization of 'struct page **' from 'int' makes pointer from integer without a cast [-Wint-conversion]
>> drivers/dma-buf/heaps/chunk_heap.c:228:10: error: implicit declaration of function 'vmap'; did you mean 'kmap'? [-Werror=implicit-function-declaration]
     228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
         |          ^~~~
         |          kmap
>> drivers/dma-buf/heaps/chunk_heap.c:228:30: error: 'VM_MAP' undeclared (first use in this function); did you mean 'VM_MTE'?
     228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
         |                              ^~~~~~
         |                              VM_MTE
   drivers/dma-buf/heaps/chunk_heap.c:228:30: note: each undeclared identifier is reported only once for each function it appears in
>> drivers/dma-buf/heaps/chunk_heap.c:229:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]
     229 |  vfree(pages);
         |  ^~~~~
         |  kvfree
   drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_vunmap':
>> drivers/dma-buf/heaps/chunk_heap.c:268:3: error: implicit declaration of function 'vunmap'; did you mean 'kunmap'? [-Werror=implicit-function-declaration]
     268 |   vunmap(buffer->vaddr);
         |   ^~~~~~
         |   kunmap
   cc1: some warnings being treated as errors


vim +215 drivers/dma-buf/heaps/chunk_heap.c

   210	
   211	static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
   212	{
   213		struct sg_table *table = &buffer->sg_table;
   214		int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
 > 215		struct page **pages = vmalloc(sizeof(struct page *) * npages);
   216		struct page **tmp = pages;
   217		struct sg_page_iter piter;
   218		void *vaddr;
   219	
   220		if (!pages)
   221			return ERR_PTR(-ENOMEM);
   222	
   223		for_each_sgtable_page(table, &piter, 0) {
   224			WARN_ON(tmp - pages >= npages);
   225			*tmp++ = sg_page_iter_page(&piter);
   226		}
   227	
 > 228		vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
 > 229		vfree(pages);
   230	
   231		if (!vaddr)
   232			return ERR_PTR(-ENOMEM);
   233	
   234		return vaddr;
   235	}
   236	
   237	static int chunk_heap_vmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
   238	{
   239		struct chunk_heap_buffer *buffer = dmabuf->priv;
   240		void *vaddr;
   241	
   242		mutex_lock(&buffer->lock);
   243		if (buffer->vmap_cnt) {
   244			vaddr = buffer->vaddr;
   245		} else {
   246			vaddr = chunk_heap_do_vmap(buffer);
   247			if (IS_ERR(vaddr)) {
   248				mutex_unlock(&buffer->lock);
   249	
   250				return PTR_ERR(vaddr);
   251			}
   252			buffer->vaddr = vaddr;
   253		}
   254		buffer->vmap_cnt++;
   255		dma_buf_map_set_vaddr(map, vaddr);
   256	
   257		mutex_unlock(&buffer->lock);
   258	
   259		return 0;
   260	}
   261	
   262	static void chunk_heap_vunmap(struct dma_buf *dmabuf, struct dma_buf_map *map)
   263	{
   264		struct chunk_heap_buffer *buffer = dmabuf->priv;
   265	
   266		mutex_lock(&buffer->lock);
   267		if (!--buffer->vmap_cnt) {
 > 268			vunmap(buffer->vaddr);
   269			buffer->vaddr = NULL;
   270		}
   271		mutex_unlock(&buffer->lock);
   272	}
   273	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 70046 bytes --]


* Re: [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
  2021-01-13  1:21 ` [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range Minchan Kim
@ 2021-01-13  8:39   ` David Hildenbrand
  2021-01-14 18:04     ` Minchan Kim
  0 siblings, 1 reply; 22+ messages in thread
From: David Hildenbrand @ 2021-01-13  8:39 UTC (permalink / raw)
  To: Minchan Kim, Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, mhocko, surenb, pullip.cho, joaodias,
	hridya, john.stultz, sumit.semwal, linux-media, devicetree, hch,
	robh+dt, linaro-mm-sig

On 13.01.21 02:21, Minchan Kim wrote:
> Contiguous memory allocation can be stalled due to waiting
> on page writeback and/or page lock which causes unpredictable
> delay. It's a unavoidable cost for the requestor to get *big*
> contiguous memory but it's expensive for *small* contiguous
> memory(e.g., order-4) because caller could retry the request
> in diffrent range where would have easy migratable pages
> without stalling.

s/diffrent/different/

> 
> This patch introduce __GFP_NORETRY as compaction gfp_mask in
> alloc_contig_range so it will fail fast without blocking
> when it encounters pages needed waitting.

s/waitting/waiting/

> 
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
>  mm/page_alloc.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 5b3923db9158..ff41ceb4db51 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>  	unsigned int nr_reclaimed;
>  	unsigned long pfn = start;
>  	unsigned int tries = 0;
> +	unsigned int max_tries = 5;
>  	int ret = 0;
>  	struct migration_target_control mtc = {
>  		.nid = zone_to_nid(cc->zone),
>  		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
>  	};
>  
> +	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
> +		max_tries = 1;
> +
>  	migrate_prep();
>  
>  	while (pfn < end || !list_empty(&cc->migratepages)) {
> @@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
>  				break;
>  			}
>  			tries = 0;
> -		} else if (++tries == 5) {
> +		} else if (++tries == max_tries) {
>  			ret = ret < 0 ? ret : -EBUSY;
>  			break;
>  		}
> @@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
>  		.nr_migratepages = 0,
>  		.order = -1,
>  		.zone = page_zone(pfn_to_page(start)),
> -		.mode = MIGRATE_SYNC,
> +		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
>  		.ignore_skip_hint = true,
>  		.no_set_skip_hint = true,
>  		.gfp_mask = current_gfp_context(gfp_mask),
> 

I'm fine with using gfp flags (e.g., __GFP_NORETRY) as long as they
don't enable other implicit behavior (e.g., move draining X to the
caller) that's hard to get from the flag name.

IMHO, if we ever want to move draining to the caller, or change the
behavior of alloc_contig_range() in different ways (e.g., disable PCP),
we won't get around introducing a separate set of flags for
alloc_contig_range().
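
Purely as a hypothetical sketch of that direction (none of these
names exist today, just to illustrate):

	/* hypothetical flags, decoupled from gfp semantics */
	typedef unsigned int acr_flags_t;
	#define ACR_FAIL_FAST	(1 << 0) /* don't stall on busy pages */
	#define ACR_NO_DRAIN	(1 << 1) /* caller drains PCP lists */

	int alloc_contig_range(unsigned long start, unsigned long end,
			       unsigned int migratetype, gfp_t gfp_mask,
			       acr_flags_t flags);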

Let's see what Michal thinks. Thanks!

-- 
Thanks,

David / dhildenb




* Re: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  2021-01-13  1:21 ` [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable Minchan Kim
@ 2021-01-13 15:45   ` Rob Herring
  2021-01-13 17:30     ` Hridya Valsaraju
  2021-01-14 14:01   ` Rob Herring
  1 sibling, 1 reply; 22+ messages in thread
From: Rob Herring @ 2021-01-13 15:45 UTC (permalink / raw)
  To: Minchan Kim
  Cc: hyesoo.yu, hch, joaodias, hridya, pullip.cho, LKML, mhocko,
	robh+dt, linaro-mm-sig, surenb, Andrew Morton, devicetree,
	linux-mm, sumit.semwal, john.stultz, linux-media, david

On Tue, 12 Jan 2021 17:21:42 -0800, Minchan Kim wrote:
> From: Hyesoo Yu <hyesoo.yu@samsung.com>
> 
> Document devicetree binding for chunk cma heap on dma heap framework.
> 
> The DMA chunk heap supports the bulk allocation of higher order pages.
> 
> Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Hridya Valsaraju <hridya@google.com>
> Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
> ---
>  .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
>  1 file changed, 58 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> 

My bot found errors running 'make dt_binding_check' on your patch:

yamllint warnings/errors:
./Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml:58:1: [warning] too many blank lines (2 > 1) (empty-lines)

dtschema/dtc warnings/errors:

See https://patchwork.ozlabs.org/patch/1425577

This check can fail if there are any dependencies. The base for a patch
series is generally the most recent rc1.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit.




* Re: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  2021-01-13 15:45   ` Rob Herring
@ 2021-01-13 17:30     ` Hridya Valsaraju
  0 siblings, 0 replies; 22+ messages in thread
From: Hridya Valsaraju @ 2021-01-13 17:30 UTC (permalink / raw)
  To: Rob Herring
  Cc: Minchan Kim, Hyesoo Yu, Christoph Hellwig, John Dias, pullip.cho,
	LKML, mhocko, robh+dt, linaro-mm-sig, Suren Baghdasaryan,
	Andrew Morton, devicetree, linux-mm, Sumit Semwal, John Stultz,
	linux-media, david

On Wed, Jan 13, 2021 at 7:45 AM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, 12 Jan 2021 17:21:42 -0800, Minchan Kim wrote:
> > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> >
> > Document devicetree binding for chunk cma heap on dma heap framework.
> >
> > The DMA chunk heap supports the bulk allocation of higher order pages.
> >
> > Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > Signed-off-by: Hridya Valsaraju <hridya@google.com>
> > Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
> > ---
> >  .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
> >  1 file changed, 58 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> >
>
> My bot found errors running 'make dt_binding_check' on your patch:
>
> yamllint warnings/errors:
> ./Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml:58:1: [warning] too many blank lines (2 > 1) (empty-lines)
>
> dtschema/dtc warnings/errors:
>
> See https://patchwork.ozlabs.org/patch/1425577
>
> This check can fail if there are any dependencies. The base for a patch
> series is generally the most recent rc1.
>
> If you already ran 'make dt_binding_check' and didn't see the above
> error(s), then make sure 'yamllint' is installed and dt-schema is up to
> date:
>
> pip3 install dtschema --upgrade
>
> Please check and re-submit.
>

Hi Rob,

Sorry about that, I can see the warning after installing yamllint.
Will fix it in the next version!

Thanks,
Hridya



* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  3:11   ` kernel test robot
@ 2021-01-14  1:04     ` Minchan Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-14  1:04 UTC (permalink / raw)
  To: kernel test robot
  Cc: Andrew Morton, kbuild-all, Linux Memory Management List, LKML,
	hyesoo.yu, david, mhocko, surenb, pullip.cho, joaodias, hridya

On Wed, Jan 13, 2021 at 11:11:56AM +0800, kernel test robot wrote:
> Hi Minchan,
> 
> Thank you for the patch! Perhaps something to improve:
> 
> [auto build test WARNING on next-20210112]
> [cannot apply to s390/features robh/for-next linux/master linus/master hnaz-linux-mm/master v5.11-rc3 v5.11-rc2 v5.11-rc1 v5.11-rc3]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
> 
> url:    https://github.com/0day-ci/linux/commits/Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
> base:    df869cab4b3519d603806234861aa0a39df479c0
> config: mips-allyesconfig (attached as .config)
> compiler: mips-linux-gcc (GCC) 9.3.0
> reproduce (this is a W=1 build):
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # https://github.com/0day-ci/linux/commit/531ebc21d3c2584784d44714e3b4f1df46b80eee
>         git remote add linux-review https://github.com/0day-ci/linux
>         git fetch --no-tags linux-review Minchan-Kim/Chunk-Heap-Support-on-DMA-HEAP/20210113-092747
>         git checkout 531ebc21d3c2584784d44714e3b4f1df46b80eee
>         # save the attached .config to linux build tree
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=mips 
> 
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
> 
> All warnings (new ones prefixed by >>):
> 
>    drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_do_vmap':
>    drivers/dma-buf/heaps/chunk_heap.c:215:24: error: implicit declaration of function 'vmalloc'; did you mean 'kvmalloc'? [-Werror=implicit-function-declaration]
>      215 |  struct page **pages = vmalloc(sizeof(struct page *) * npages);
>          |                        ^~~~~~~
>          |                        kvmalloc

Looks like we need vmalloc.h.


> >> drivers/dma-buf/heaps/chunk_heap.c:215:24: warning: initialization of 'struct page **' from 'int' makes pointer from integer without a cast [-Wint-conversion]
>    drivers/dma-buf/heaps/chunk_heap.c:228:10: error: implicit declaration of function 'vmap'; did you mean 'kmap'? [-Werror=implicit-function-declaration]
>      228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
>          |          ^~~~
>          |          kmap

We need vmap, not kmap.

>    drivers/dma-buf/heaps/chunk_heap.c:228:30: error: 'VM_MAP' undeclared (first use in this function); did you mean 'VM_MTE'?
>      228 |  vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
>          |                              ^~~~~~
>          |                              VM_MTE

Looks like the bot was confused since we have missed vmalloc.h.
In next spin, let's fix it.
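
Presumably the fix is just the missing include; vmalloc(), vmap(),
vfree(), vunmap() and VM_MAP are all declared there:

--- a/drivers/dma-buf/heaps/chunk_heap.c
+++ b/drivers/dma-buf/heaps/chunk_heap.c
@@
 #include <linux/sched/signal.h>
 #include <linux/slab.h>
+#include <linux/vmalloc.h>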

>    drivers/dma-buf/heaps/chunk_heap.c:228:30: note: each undeclared identifier is reported only once for each function it appears in
>    drivers/dma-buf/heaps/chunk_heap.c:229:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]
>      229 |  vfree(pages);
>          |  ^~~~~
>          |  kvfree
>    drivers/dma-buf/heaps/chunk_heap.c: In function 'chunk_heap_vunmap':
>    drivers/dma-buf/heaps/chunk_heap.c:268:3: error: implicit declaration of function 'vunmap'; did you mean 'kunmap'? [-Werror=implicit-function-declaration]
>      268 |   vunmap(buffer->vaddr);
>          |   ^~~~~~
>          |   kunmap
>    cc1: some warnings being treated as errors
> 
> 
> vim +215 drivers/dma-buf/heaps/chunk_heap.c
> 
>    210	
>    211	static void *chunk_heap_do_vmap(struct chunk_heap_buffer *buffer)
>    212	{
>    213		struct sg_table *table = &buffer->sg_table;
>    214		int npages = PAGE_ALIGN(buffer->len) / PAGE_SIZE;
>  > 215		struct page **pages = vmalloc(sizeof(struct page *) * npages);
>    216		struct page **tmp = pages;
>    217		struct sg_page_iter piter;
>    218		void *vaddr;
>    219	
>    220		if (!pages)
>    221			return ERR_PTR(-ENOMEM);
>    222	
>    223		for_each_sgtable_page(table, &piter, 0) {
>    224			WARN_ON(tmp - pages >= npages);
>    225			*tmp++ = sg_page_iter_page(&piter);
>    226		}
>    227	
>    228		vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
>    229		vfree(pages);
>    230	
>    231		if (!vaddr)
>    232			return ERR_PTR(-ENOMEM);
>    233	
>    234		return vaddr;
>    235	}
>    236	
> 
> ---
> 0-DAY CI Kernel Test Service, Intel Corporation
> https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org





* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  3:38   ` Randy Dunlap
@ 2021-01-14  1:04     ` Minchan Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-14  1:04 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Andrew Morton, linux-mm, LKML, hyesoo.yu, david, mhocko, surenb,
	pullip.cho, joaodias, hridya, john.stultz, sumit.semwal,
	linux-media, devicetree, hch, robh+dt, linaro-mm-sig

On Tue, Jan 12, 2021 at 07:38:40PM -0800, Randy Dunlap wrote:
> On 1/12/21 5:21 PM, Minchan Kim wrote:
> > +config DMABUF_HEAPS_CHUNK
> > +	bool "DMA-BUF CHUNK Heap"
> > +	depends on DMABUF_HEAPS && DMA_CMA
> > +	help
> > +	  Choose this option to enable dma-buf CHUNK heap. This heap is backed
> > +	  by the Contiguous Memory Allocator (CMA) and allocates the buffers that
> > +	  arranged into a list of fixed size chunks taken from CMA.
> 
> maybe:
> 	  are arranged into

Let me fix it.

Thanks, Randy. 



* Re: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  2021-01-13  1:21 ` [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable Minchan Kim
  2021-01-13 15:45   ` Rob Herring
@ 2021-01-14 14:01   ` Rob Herring
  2021-01-14 19:49     ` Hridya Valsaraju
  1 sibling, 1 reply; 22+ messages in thread
From: Rob Herring @ 2021-01-14 14:01 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, hyesoo.yu, david, mhocko, surenb,
	pullip.cho, joaodias, hridya, john.stultz, sumit.semwal,
	linux-media, devicetree, hch, linaro-mm-sig

On Tue, Jan 12, 2021 at 05:21:42PM -0800, Minchan Kim wrote:
> From: Hyesoo Yu <hyesoo.yu@samsung.com>
> 
> Document devicetree binding for chunk cma heap on dma heap framework.
> 
> The DMA chunk heap supports the bulk allocation of higher order pages.

Why do we need this? What does this do that CMA doesn't?

With a CMA area I can believe a carve out is a common, OS independent 
thing. This looks too closely tied to some Linux thing to go into DT.

> 
> Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Hridya Valsaraju <hridya@google.com>
> Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01

Drop this.

> ---
>  .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
>  1 file changed, 58 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> 
> diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> new file mode 100644
> index 000000000000..3e7fed5fb006
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> @@ -0,0 +1,58 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
> +
> +description: |
> +  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
> +  supports bulk allocation of fixed size pages.
> +
> +maintainers:
> +  - Hyesoo Yu <hyesoo.yu@samsung.com>
> +  - John Stultz <john.stultz@linaro.org>
> +  - Minchan Kim <minchan@kernel.org>
> +  - Hridya Valsaraju<hridya@google.com>

space                  ^

> +
> +
> +properties:
> +  compatible:
> +    enum:
> +      - dma_heap,chunk

The format is <vendor>,<something> and 'dma_heap' is not a vendor.

> +
> +  chunk-order:
> +    description: |
> +            order of pages that will get allocated from the chunk DMA heap.
> +    maxItems: 1
> +
> +  size:
> +    maxItems: 1
> +
> +  alignment:
> +    maxItems: 1
> +
> +required:
> +  - compatible
> +  - size
> +  - alignment
> +  - chunk-order
> +
> +additionalProperties: false
> +
> +examples:
> +  - |
> +    reserved-memory {
> +        #address-cells = <2>;
> +        #size-cells = <1>;
> +
> +        chunk_memory: chunk_memory {
> +            compatible = "dma_heap,chunk";
> +            size = <0x3000000>;
> +            alignment = <0x0 0x00010000>;
> +            chunk-order = <4>;
> +        };
> +    };
> +
> +
> -- 
> 2.30.0.284.gd98b1dd5eaa7-goog
> 
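
For illustration, a node that follows the <vendor>,<something> convention
might look like the sketch below; the vendor prefix here is purely a
placeholder, not a proposal for the final binding:

    chunk_memory: chunk_memory {
    	compatible = "samsung,dma-heap-chunk"; /* placeholder vendor prefix */
    	size = <0x3000000>;
    	alignment = <0x0 0x00010000>;
    	chunk-order = <4>;
    };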


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/4] mm: failfast mode with __GFP_NORETRY in alloc_contig_range
  2021-01-13  8:39   ` David Hildenbrand
@ 2021-01-14 18:04     ` Minchan Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-14 18:04 UTC (permalink / raw)
  To: David Hildenbrand, mhocko
  Cc: Andrew Morton, linux-mm, LKML, hyesoo.yu, mhocko, surenb,
	pullip.cho, joaodias, hridya, john.stultz, sumit.semwal,
	linux-media, devicetree, hch, robh+dt, linaro-mm-sig

On Wed, Jan 13, 2021 at 09:39:26AM +0100, David Hildenbrand wrote:
> On 13.01.21 02:21, Minchan Kim wrote:
> > Contiguous memory allocation can be stalled due to waiting
> > on page writeback and/or page lock which causes unpredictable
> > delay. It's a unavoidable cost for the requestor to get *big*
> > contiguous memory but it's expensive for *small* contiguous
> > memory(e.g., order-4) because caller could retry the request
> > in diffrent range where would have easy migratable pages
> > without stalling.
> 
> s/diffrent/different/
> 
> > 
> > This patch introduce __GFP_NORETRY as compaction gfp_mask in
> > alloc_contig_range so it will fail fast without blocking
> > when it encounters pages needed waitting.
> 
> s/waitting/waiting/

Fixed both. Thanks.
Let me resend once I get some review.

Michal, I'd appreciate it if you could give a review before
the next revision.

Thanks!
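
For context, the failfast-then-fallback pattern this series enables would
look roughly like the sketch below. This is illustrative only: the helper
name is made up, and the real call sites live in the chunk heap driver.

    /* Illustrative helper, not from the posted series. */
    static struct page *chunk_alloc_failfast(struct cma *cma, size_t nr_pages,
    					     unsigned int align)
    {
    	struct page *page;

    	/*
    	 * First pass: __GFP_NORETRY makes alloc_contig_range() use
    	 * MIGRATE_ASYNC and bail out after a single migration attempt,
    	 * so the allocation fails fast instead of blocking on page
    	 * writeback or page lock.
    	 */
    	page = cma_alloc(cma, nr_pages, align,
    			 GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
    	if (!page)
    		/* Last resort: the blocking MIGRATE_SYNC mode. */
    		page = cma_alloc(cma, nr_pages, align, GFP_KERNEL);

    	return page;
    }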

> 
> > 
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> >  mm/page_alloc.c | 8 ++++++--
> >  1 file changed, 6 insertions(+), 2 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 5b3923db9158..ff41ceb4db51 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -8489,12 +8489,16 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> >  	unsigned int nr_reclaimed;
> >  	unsigned long pfn = start;
> >  	unsigned int tries = 0;
> > +	unsigned int max_tries = 5;
> >  	int ret = 0;
> >  	struct migration_target_control mtc = {
> >  		.nid = zone_to_nid(cc->zone),
> >  		.gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
> >  	};
> >  
> > +	if (cc->alloc_contig && cc->mode == MIGRATE_ASYNC)
> > +		max_tries = 1;
> > +
> >  	migrate_prep();
> >  
> >  	while (pfn < end || !list_empty(&cc->migratepages)) {
> > @@ -8511,7 +8515,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
> >  				break;
> >  			}
> >  			tries = 0;
> > -		} else if (++tries == 5) {
> > +		} else if (++tries == max_tries) {
> >  			ret = ret < 0 ? ret : -EBUSY;
> >  			break;
> >  		}
> > @@ -8562,7 +8566,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
> >  		.nr_migratepages = 0,
> >  		.order = -1,
> >  		.zone = page_zone(pfn_to_page(start)),
> > -		.mode = MIGRATE_SYNC,
> > +		.mode = gfp_mask & __GFP_NORETRY ? MIGRATE_ASYNC : MIGRATE_SYNC,
> >  		.ignore_skip_hint = true,
> >  		.no_set_skip_hint = true,
> >  		.gfp_mask = current_gfp_context(gfp_mask),
> > 
> 
> I'm fine with using gfp flags (e.g., __GFP_NORETRY) as long as they
> don't enable other implicit behavior (e.g., move draining X to the
> caller) that's hard to get from the flag name.
> 
> IMHO, if we ever want to move draining to the caller, or change the
> behavior of alloc_contig_range() in different ways (e.g., disable PCP),
> we won't get around introducing a separate set of flags for
> alloc_contig_range().
> 
> Let's see what Michal thinks. Thanks!
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 3/4] dt-bindings: reserved-memory: Make DMA-BUF CMA heap DT-configurable
  2021-01-14 14:01   ` Rob Herring
@ 2021-01-14 19:49     ` Hridya Valsaraju
  0 siblings, 0 replies; 22+ messages in thread
From: Hridya Valsaraju @ 2021-01-14 19:49 UTC (permalink / raw)
  To: Rob Herring
  Cc: Minchan Kim, Andrew Morton, linux-mm, LKML, Hyesoo Yu, david,
	mhocko, Suren Baghdasaryan, pullip.cho, John Dias, John Stultz,
	Sumit Semwal, linux-media, devicetree, Christoph Hellwig,
	linaro-mm-sig

On Thu, Jan 14, 2021 at 6:01 AM Rob Herring <robh@kernel.org> wrote:
>
> On Tue, Jan 12, 2021 at 05:21:42PM -0800, Minchan Kim wrote:
> > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> >
> > Document devicetree binding for chunk cma heap on dma heap framework.
> >
> > The DMA chunk heap supports the bulk allocation of higher order pages.
>
> Why do we need this? What does this do that CMA doesn't?
>
> With a CMA area I can believe a carve out is a common, OS independent
> thing. This looks too closely tied to some Linux thing to go into DT.

Hello Rob,

Thank you for the review!

The chunk heap's allocator also allocates from the CMA area. It is,
however, optimized to perform bulk allocation of higher order pages in
an efficient manner. For this purpose, the heap needs an exclusive CMA
area that will only be used for allocation by the heap. This is the
reason why we need to use the DT to create and configure a reserved
memory region for use by the chunk CMA heap driver. Since all
allocations from DMA-BUF heaps happen from user-space, there is no
other appropriate device driver that we could use to register the chunk
CMA heap and configure the reserved memory region for its use.

We have been following your guidance in [1] to bind the chunk CMA heap
driver directly to the reserved_memory region it will allocate from.
Is there an alternative that we are missing, Rob?

[1]: https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d

Our current use-case for the heap allocates memory from it in userspace
and uses the allocated memory to optimize 4K/8K HDR video playback with
a secure DRM HW pipeline.

Thank you for all the help and review :)

Regards,
Hridya

>
> >
> > Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > Signed-off-by: Hridya Valsaraju <hridya@google.com>
> > Change-Id: I8fb231e5a8360e2d8f65947e155b12aa664dde01
>
> Drop this.
>
> > ---
> >  .../reserved-memory/dma_heap_chunk.yaml       | 58 +++++++++++++++++++
> >  1 file changed, 58 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> >
> > diff --git a/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> > new file mode 100644
> > index 000000000000..3e7fed5fb006
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/reserved-memory/dma_heap_chunk.yaml
> > @@ -0,0 +1,58 @@
> > +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> > +%YAML 1.2
> > +---
> > +$id: http://devicetree.org/schemas/reserved-memory/dma_heap_chunk.yaml#
> > +$schema: http://devicetree.org/meta-schemas/core.yaml#
> > +
> > +title: Device tree binding for chunk heap on DMA HEAP FRAMEWORK
> > +
> > +description: |
> > +  The DMA chunk heap is backed by the Contiguous Memory Allocator (CMA) and
> > +  supports bulk allocation of fixed size pages.
> > +
> > +maintainers:
> > +  - Hyesoo Yu <hyesoo.yu@samsung.com>
> > +  - John Stultz <john.stultz@linaro.org>
> > +  - Minchan Kim <minchan@kernel.org>
> > +  - Hridya Valsaraju<hridya@google.com>
>
> space                  ^
>
> > +
> > +
> > +properties:
> > +  compatible:
> > +    enum:
> > +      - dma_heap,chunk
>
> The format is <vendor>,<something> and 'dma_heap' is not a vendor.
>
> > +
> > +  chunk-order:
> > +    description: |
> > +            order of pages that will get allocated from the chunk DMA heap.
> > +    maxItems: 1
> > +
> > +  size:
> > +    maxItems: 1
> > +
> > +  alignment:
> > +    maxItems: 1
> > +
> > +required:
> > +  - compatible
> > +  - size
> > +  - alignment
> > +  - chunk-order
> > +
> > +additionalProperties: false
> > +
> > +examples:
> > +  - |
> > +    reserved-memory {
> > +        #address-cells = <2>;
> > +        #size-cells = <1>;
> > +
> > +        chunk_memory: chunk_memory {
> > +            compatible = "dma_heap,chunk";
> > +            size = <0x3000000>;
> > +            alignment = <0x0 0x00010000>;
> > +            chunk-order = <4>;
> > +        };
> > +    };
> > +
> > +
> > --
> > 2.30.0.284.gd98b1dd5eaa7-goog
> >


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
                     ` (2 preceding siblings ...)
  2021-01-13  6:25   ` kernel test robot
@ 2021-01-19 15:51   ` Minchan Kim
  2021-01-19 18:29   ` John Stultz
  4 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2021-01-19 15:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, hyesoo.yu, david, mhocko, surenb, pullip.cho,
	joaodias, hridya, john.stultz, sumit.semwal, linux-media,
	devicetree, hch, robh+dt, linaro-mm-sig

On Tue, Jan 12, 2021 at 05:21:43PM -0800, Minchan Kim wrote:
> From: Hyesoo Yu <hyesoo.yu@samsung.com>
> 
> This patch supports chunk heap that allocates the buffers that
> arranged into a list a fixed size chunks taken from CMA.
> 
> The chunk heap driver is bound directly to a reserved_memory
> node by following Rob Herring's suggestion in [1].
> 
> [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
> 
> Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> Signed-off-by: Hridya Valsaraju <hridya@google.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

DMABUF folks,

It would be great if you guys could give any comments.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-13  1:21 ` [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps Minchan Kim
                     ` (3 preceding siblings ...)
  2021-01-19 15:51   ` Minchan Kim
@ 2021-01-19 18:29   ` John Stultz
  2021-01-19 20:36     ` Minchan Kim
  4 siblings, 1 reply; 22+ messages in thread
From: John Stultz @ 2021-01-19 18:29 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, Hyesoo Yu, david, Michal Hocko,
	Suren Baghdasaryan, KyongHo Cho, John Dias, Hridya Valsaraju,
	Sumit Semwal, linux-media,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Christoph Hellwig, Rob Herring,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On Tue, Jan 12, 2021 at 5:22 PM Minchan Kim <minchan@kernel.org> wrote:
>
> From: Hyesoo Yu <hyesoo.yu@samsung.com>
>
> This patch supports chunk heap that allocates the buffers that
> arranged into a list a fixed size chunks taken from CMA.
>
> The chunk heap driver is bound directly to a reserved_memory
> node by following Rob Herring's suggestion in [1].
>
> [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
>
> Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> Signed-off-by: Hridya Valsaraju <hridya@google.com>
> Signed-off-by: Minchan Kim <minchan@kernel.org>
> ---
...
> +static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
> +{
> +       struct dma_heap_export_info exp_info;
> +
> +       exp_info.name = cma_get_name(chunk_heap_info->cma);

One potential issue here, you're setting the name to the same as the
CMA name. Since the CMA heap uses the CMA name, if one chunk was
registered as a chunk heap but also was the default CMA area, it might
be registered twice. But since both would have the same name it would
be an initialization race as to which one "wins".

So maybe could you postfix the CMA name with "-chunk" or something?

thanks
-john


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-19 18:29   ` John Stultz
@ 2021-01-19 20:36     ` Minchan Kim
  2021-01-20  3:32       ` Hyesoo Yu
  0 siblings, 1 reply; 22+ messages in thread
From: Minchan Kim @ 2021-01-19 20:36 UTC (permalink / raw)
  To: John Stultz
  Cc: Andrew Morton, linux-mm, LKML, Hyesoo Yu, david, Michal Hocko,
	Suren Baghdasaryan, KyongHo Cho, John Dias, Hridya Valsaraju,
	Sumit Semwal, linux-media,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Christoph Hellwig, Rob Herring,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On Tue, Jan 19, 2021 at 10:29:29AM -0800, John Stultz wrote:
> On Tue, Jan 12, 2021 at 5:22 PM Minchan Kim <minchan@kernel.org> wrote:
> >
> > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> >
> > This patch supports chunk heap that allocates the buffers that
> > arranged into a list a fixed size chunks taken from CMA.
> >
> > The chunk heap driver is bound directly to a reserved_memory
> > node by following Rob Herring's suggestion in [1].
> >
> > [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
> >
> > Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> > Signed-off-by: Hridya Valsaraju <hridya@google.com>
> > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > ---
> ...
> > +static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
> > +{
> > +       struct dma_heap_export_info exp_info;
> > +
> > +       exp_info.name = cma_get_name(chunk_heap_info->cma);
> 
> One potential issue here, you're setting the name to the same as the
> CMA name. Since the CMA heap uses the CMA name, if one chunk was
> registered as a chunk heap but also was the default CMA area, it might
> be registered twice. But since both would have the same name it would
> be an initialization race as to which one "wins".

Good point. Maybe someone might want to use the default CMA area for
both cma_heap and chunk_heap. I cannot come up with a reason why we
should prohibit it atm.

> 
> So maybe could you postfix the CMA name with "-chunk" or something?

Hyesoo, any opinion?
Unless you have some other idea, let's fix it in the next version.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-19 20:36     ` Minchan Kim
@ 2021-01-20  3:32       ` Hyesoo Yu
  2021-01-20 20:53         ` Suren Baghdasaryan
  0 siblings, 1 reply; 22+ messages in thread
From: Hyesoo Yu @ 2021-01-20  3:32 UTC (permalink / raw)
  To: Minchan Kim
  Cc: John Stultz, Andrew Morton, linux-mm, LKML, david, Michal Hocko,
	Suren Baghdasaryan, KyongHo Cho, John Dias, Hridya Valsaraju,
	Sumit Semwal, linux-media,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Christoph Hellwig, Rob Herring,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On Tue, Jan 19, 2021 at 12:36:40PM -0800, Minchan Kim wrote:
> On Tue, Jan 19, 2021 at 10:29:29AM -0800, John Stultz wrote:
> > On Tue, Jan 12, 2021 at 5:22 PM Minchan Kim <minchan@kernel.org> wrote:
> > >
> > > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> > >
> > > This patch supports chunk heap that allocates the buffers that
> > > arranged into a list a fixed size chunks taken from CMA.
> > >
> > > The chunk heap driver is bound directly to a reserved_memory
> > > node by following Rob Herring's suggestion in [1].
> > >
> > > [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
> > >
> > > Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> > > Signed-off-by: Hridya Valsaraju <hridya@google.com>
> > > Signed-off-by: Minchan Kim <minchan@kernel.org>
> > > ---
> > ...
> > > +static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
> > > +{
> > > +       struct dma_heap_export_info exp_info;
> > > +
> > > +       exp_info.name = cma_get_name(chunk_heap_info->cma);
> > 
> > One potential issue here, you're setting the name to the same as the
> > CMA name. Since the CMA heap uses the CMA name, if one chunk was
> > registered as a chunk heap but also was the default CMA area, it might
> > be registered twice. But since both would have the same name it would
> > be an initialization race as to which one "wins".
> 
> Good point. Maybe someone might want to use default CMA area for
> both cma_heap and chunk_heap. I cannot come up with ideas why we
> should prohibit it atm.
> 
> > 
> > So maybe could you postfix the CMA name with "-chunk" or something?
> 
> Hyesoo, Any opinion?
> Unless you have something other idea, let's fix it in next version.
>

I agree with that. It is not good to use the CMA name directly as the
heap name. Let's postfix the name with '-chunk'.

Thanks,
Regards.
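
A minimal sketch of the agreed change; the kasprintf()-based naming below
is an assumption about the next spin, not the posted code:

    static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
    {
    	struct dma_heap_export_info exp_info;
    	const char *name;

    	/*
    	 * Suffix the CMA name so a chunk heap registered on the default
    	 * CMA area cannot collide with the CMA heap's name.
    	 */
    	name = kasprintf(GFP_KERNEL, "%s-chunk",
    			 cma_get_name(chunk_heap_info->cma));
    	if (!name)
    		return -ENOMEM;
    	exp_info.name = name;

    	/* ... rest of the registration as in the posted patch ... */

    	return 0;
    }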


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 4/4] dma-buf: heaps: add chunk heap to dmabuf heaps
  2021-01-20  3:32       ` Hyesoo Yu
@ 2021-01-20 20:53         ` Suren Baghdasaryan
  0 siblings, 0 replies; 22+ messages in thread
From: Suren Baghdasaryan @ 2021-01-20 20:53 UTC (permalink / raw)
  To: Hyesoo Yu
  Cc: Minchan Kim, John Stultz, Andrew Morton, linux-mm, LKML, david,
	Michal Hocko, KyongHo Cho, John Dias, Hridya Valsaraju,
	Sumit Semwal, linux-media,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Christoph Hellwig, Rob Herring,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On Tue, Jan 19, 2021 at 7:39 PM Hyesoo Yu <hyesoo.yu@samsung.com> wrote:
>
> On Tue, Jan 19, 2021 at 12:36:40PM -0800, Minchan Kim wrote:
> > On Tue, Jan 19, 2021 at 10:29:29AM -0800, John Stultz wrote:
> > > On Tue, Jan 12, 2021 at 5:22 PM Minchan Kim <minchan@kernel.org> wrote:
> > > >
> > > > From: Hyesoo Yu <hyesoo.yu@samsung.com>
> > > >
> > > > This patch supports chunk heap that allocates the buffers that
> > > > arranged into a list a fixed size chunks taken from CMA.
> > > >
> > > > The chunk heap driver is bound directly to a reserved_memory
> > > > node by following Rob Herring's suggestion in [1].
> > > >
> > > > [1] https://lore.kernel.org/lkml/20191025225009.50305-2-john.stultz@linaro.org/T/#m3dc63acd33fea269a584f43bb799a876f0b2b45d
> > > >
> > > > Signed-off-by: Hyesoo Yu <hyesoo.yu@samsung.com>
> > > > Signed-off-by: Hridya Valsaraju <hridya@google.com>
> > > > Signed-off-by: Minchan Kim <minchan@kernel.org>

After addressing John's comments, feel free to add Reviewed-by: Suren
Baghdasaryan <surenb@google.com>

> > > > ---
> > > ...
> > > > +static int register_chunk_heap(struct chunk_heap *chunk_heap_info)
> > > > +{
> > > > +       struct dma_heap_export_info exp_info;
> > > > +
> > > > +       exp_info.name = cma_get_name(chunk_heap_info->cma);
> > >
> > > One potential issue here, you're setting the name to the same as the
> > > CMA name. Since the CMA heap uses the CMA name, if one chunk was
> > > registered as a chunk heap but also was the default CMA area, it might
> > > be registered twice. But since both would have the same name it would
> > > be an initialization race as to which one "wins".
> >
> > Good point. Maybe someone might want to use default CMA area for
> > both cma_heap and chunk_heap. I cannot come up with ideas why we
> > should prohibit it atm.
> >
> > >
> > > So maybe could you postfix the CMA name with "-chunk" or something?
> >
> > Hyesoo, Any opinion?
> > Unless you have something other idea, let's fix it in next version.
> >
>
> I agree that. It is not good to use heap name directly as cma name.
> Let's postfix the name with '-chunk'
>
> Thanks,
> Regards.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn
  2021-01-13  1:21 ` [PATCH v3 1/4] mm: cma: introduce gfp flag in cma_alloc instead of no_warn Minchan Kim
@ 2021-01-20 21:08   ` Suren Baghdasaryan
  0 siblings, 0 replies; 22+ messages in thread
From: Suren Baghdasaryan @ 2021-01-20 21:08 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Andrew Morton, linux-mm, LKML, Hyesoo Yu, david, Michal Hocko,
	조경호,
	John Dias, Hridya Valsaraju, John Stultz, Sumit Semwal,
	linux-media,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Christoph Hellwig, Rob Herring,
	moderated list:DMA BUFFER SHARING FRAMEWORK

On Tue, Jan 12, 2021 at 5:21 PM Minchan Kim <minchan@kernel.org> wrote:
>
> The upcoming patch will introduce __GFP_NORETRY semantic
> in alloc_contig_range which is a failfast mode of the API.
> Instead of adding a additional parameter for gfp, replace
> no_warn with gfp flag.
>
> To keep old behaviors, it follows the rule below.
>
>   no_warn                       gfp_flags
>
>   false                         GFP_KERNEL
>   true                          GFP_KERNEL|__GFP_NOWARN
>   gfp & __GFP_NOWARN            GFP_KERNEL | (gfp & __GFP_NOWARN)
>
> Signed-off-by: Minchan Kim <minchan@kernel.org>

Reviewed-by: Suren Baghdasaryan <surenb@google.com>

> ---
>  drivers/dma-buf/heaps/cma_heap.c |  2 +-
>  drivers/s390/char/vmcp.c         |  2 +-
>  include/linux/cma.h              |  2 +-
>  kernel/dma/contiguous.c          |  3 ++-
>  mm/cma.c                         | 12 ++++++------
>  mm/cma_debug.c                   |  2 +-
>  mm/hugetlb.c                     |  6 ++++--
>  mm/secretmem.c                   |  3 ++-
>  8 files changed, 18 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> index 364fc2f3e499..0afc1907887a 100644
> --- a/drivers/dma-buf/heaps/cma_heap.c
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -298,7 +298,7 @@ static int cma_heap_allocate(struct dma_heap *heap,
>         if (align > CONFIG_CMA_ALIGNMENT)
>                 align = CONFIG_CMA_ALIGNMENT;
>
> -       cma_pages = cma_alloc(cma_heap->cma, pagecount, align, false);
> +       cma_pages = cma_alloc(cma_heap->cma, pagecount, align, GFP_KERNEL);
>         if (!cma_pages)
>                 goto free_buffer;
>
> diff --git a/drivers/s390/char/vmcp.c b/drivers/s390/char/vmcp.c
> index 9e066281e2d0..78f9adf56456 100644
> --- a/drivers/s390/char/vmcp.c
> +++ b/drivers/s390/char/vmcp.c
> @@ -70,7 +70,7 @@ static void vmcp_response_alloc(struct vmcp_session *session)
>          * anymore the system won't work anyway.
>          */
>         if (order > 2)
> -               page = cma_alloc(vmcp_cma, nr_pages, 0, false);
> +               page = cma_alloc(vmcp_cma, nr_pages, 0, GFP_KERNEL);
>         if (page) {
>                 session->response = (char *)page_to_phys(page);
>                 session->cma_alloc = 1;
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> index 217999c8a762..d6c02d08ddbc 100644
> --- a/include/linux/cma.h
> +++ b/include/linux/cma.h
> @@ -45,7 +45,7 @@ extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
>                                         const char *name,
>                                         struct cma **res_cma);
>  extern struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> -                             bool no_warn);
> +                             gfp_t gfp_mask);
>  extern bool cma_release(struct cma *cma, const struct page *pages, unsigned int count);
>
>  extern int cma_for_each_area(int (*it)(struct cma *cma, void *data), void *data);
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index 3d63d91cba5c..552ed531c018 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -260,7 +260,8 @@ struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
>         if (align > CONFIG_CMA_ALIGNMENT)
>                 align = CONFIG_CMA_ALIGNMENT;
>
> -       return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
> +       return cma_alloc(dev_get_cma_area(dev), count, align, GFP_KERNEL |
> +                       (no_warn ? __GFP_NOWARN : 0));
>  }
>
>  /**
> diff --git a/mm/cma.c b/mm/cma.c
> index 0ba69cd16aeb..35053b82aedc 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -419,13 +419,13 @@ static inline void cma_debug_show_areas(struct cma *cma) { }
>   * @cma:   Contiguous memory region for which the allocation is performed.
>   * @count: Requested number of pages.
>   * @align: Requested alignment of pages (in PAGE_SIZE order).
> - * @no_warn: Avoid printing message about failed allocation
> + * @gfp_mask: GFP mask to use during during the cma allocation.
>   *
>   * This function allocates part of contiguous memory on specific
>   * contiguous memory area.
>   */
>  struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
> -                      bool no_warn)
> +                      gfp_t gfp_mask)
>  {
>         unsigned long mask, offset;
>         unsigned long pfn = -1;
> @@ -438,8 +438,8 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>         if (!cma || !cma->count || !cma->bitmap)
>                 return NULL;
>
> -       pr_debug("%s(cma %p, count %zu, align %d)\n", __func__, (void *)cma,
> -                count, align);
> +       pr_debug("%s(cma %p, count %zu, align %d gfp_mask 0x%x)\n", __func__,
> +                       (void *)cma, count, align, gfp_mask);
>
>         if (!count)
>                 return NULL;
> @@ -471,7 +471,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>
>                 pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
>                 ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
> -                                    GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> +                                               gfp_mask);
>
>                 if (ret == 0) {
>                         page = pfn_to_page(pfn);
> @@ -500,7 +500,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
>                         page_kasan_tag_reset(page + i);
>         }
>
> -       if (ret && !no_warn) {
> +       if (ret && !(gfp_mask & __GFP_NOWARN)) {
>                 pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
>                         __func__, count, ret);
>                 cma_debug_show_areas(cma);
> diff --git a/mm/cma_debug.c b/mm/cma_debug.c
> index d5bf8aa34fdc..00170c41cf81 100644
> --- a/mm/cma_debug.c
> +++ b/mm/cma_debug.c
> @@ -137,7 +137,7 @@ static int cma_alloc_mem(struct cma *cma, int count)
>         if (!mem)
>                 return -ENOMEM;
>
> -       p = cma_alloc(cma, count, 0, false);
> +       p = cma_alloc(cma, count, 0, GFP_KERNEL);
>         if (!p) {
>                 kfree(mem);
>                 return -ENOMEM;
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 737b2dce19e6..695af33aa66c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1266,7 +1266,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>
>                 if (hugetlb_cma[nid]) {
>                         page = cma_alloc(hugetlb_cma[nid], nr_pages,
> -                                       huge_page_order(h), true);
> +                                       huge_page_order(h),
> +                                       GFP_KERNEL | __GFP_NOWARN);
>                         if (page)
>                                 return page;
>                 }
> @@ -1277,7 +1278,8 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>                                         continue;
>
>                                 page = cma_alloc(hugetlb_cma[node], nr_pages,
> -                                               huge_page_order(h), true);
> +                                               huge_page_order(h),
> +                                               GFP_KERNEL | __GFP_NOWARN);
>                                 if (page)
>                                         return page;
>                         }
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index b8a32954ac68..585d55b9f9d8 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -86,7 +86,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>         struct page *page;
>         int err;
>
> -       page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
> +       page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE,
> +                               GFP_KERNEL | (gfp & __GFP_NOWARN));
>         if (!page)
>                 return -ENOMEM;
>
> --
> 2.30.0.284.gd98b1dd5eaa7-goog
>


^ permalink raw reply	[flat|nested] 22+ messages in thread
