* [RFC PATCH 0/3] CMA: generalize CMA reserved area management code
@ 2014-06-03  1:11 ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Currently, there are two users of CMA functionality: the DMA subsystem
and KVM on powerpc. Each has its own code to manage its CMA reserved
area, even though the two implementations look very similar. My guess
is that this duplication comes from differing needs in bitmap
management: the KVM side wants one bitmap bit to cover more than a
single page, and it ends up using a bitmap where one bit represents
64 pages.

Whenever I implement CMA-related patches, I have to apply the change
in both of those places, which is painful. I want to change this
situation and reduce the future maintenance overhead with this
patchset.

This change could also help developers who want to use CMA for new
features, since they can use CMA easily without copying and pasting
this reserved area management code.
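
As an illustration, here is a minimal sketch of how a new user could
drive the interface proposed in patch 1/3. The function signatures
match the include/linux/cma.h added there; my_cma, my_cma_reserve()
and the sizes are made-up placeholders, error handling is omitted,
and the reservation call has to come from early arch/boot code once
memblock is up:

	#include <linux/cma.h>
	#include <linux/mm.h>
	#include <linux/sizes.h>

	static struct cma *my_cma;

	/* Boot time: reserve 16 MiB anywhere (base/limit/alignment 0),
	 * with bitmap_shift 0, i.e. one bitmap bit per page. */
	static int __init my_cma_reserve(void)
	{
		return cma_declare_contiguous(SZ_16M, 0, 0, 0, 0, false,
					      &my_cma);
	}

	/* Run time: allocate and release 64 contiguous pages, with no
	 * particular alignment (order 0). */
	static void my_cma_demo(void)
	{
		struct page *pages = cma_alloc(my_cma, 64, 0);

		if (pages)
			cma_release(my_cma, pages, 64);
	}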

We are currently in the merge window, so this is not intended for
merging yet. I'd like to hear opinions from the people involved with
this code before actually trying to merge the patchset. If everyone
agrees with the change, I will resend it after rc1.

Thanks.

Joonsoo Kim (3):
  CMA: generalize CMA reserved area management functionality
  DMA, CMA: use general CMA reserved area management framework
  PPC, KVM, CMA: use general CMA reserved area management framework

 arch/powerpc/kvm/book3s_hv_builtin.c |   17 +-
 arch/powerpc/kvm/book3s_hv_cma.c     |  240 -------------------------
 arch/powerpc/kvm/book3s_hv_cma.h     |   27 ---
 drivers/base/Kconfig                 |   10 --
 drivers/base/dma-contiguous.c        |  230 ++----------------------
 include/linux/cma.h                  |   28 +++
 include/linux/dma-contiguous.h       |    7 +-
 mm/Kconfig                           |   11 ++
 mm/Makefile                          |    1 +
 mm/cma.c                             |  329 ++++++++++++++++++++++++++++++++++
 10 files changed, 396 insertions(+), 504 deletions(-)
 delete mode 100644 arch/powerpc/kvm/book3s_hv_cma.c
 delete mode 100644 arch/powerpc/kvm/book3s_hv_cma.h
 create mode 100644 include/linux/cma.h
 create mode 100644 mm/cma.c

-- 
1.7.9.5


* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Currently, there are two users of CMA functionality: the DMA subsystem
and KVM on powerpc. Each has its own code to manage its CMA reserved
area, even though the two implementations look very similar. My guess
is that this duplication comes from differing needs in bitmap
management: the KVM side wants one bitmap bit to cover more than a
single page, and it ends up using a bitmap where one bit represents
64 pages.

Whenever I implement CMA-related patches, I have to apply the change
in both of those places, which is painful. I want to change this
situation and reduce the future maintenance overhead with this
patch.

This change could also help developers who want to use CMA for new
features, since they can use CMA easily without copying and pasting
this reserved area management code.
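
To make the granularity concrete: with the bitmap_shift field
introduced here, one bitmap bit covers 2^bitmap_shift pages, so the
existing DMA user keeps a shift of 0 (one bit per page) while the
64-pages-per-bit KVM case corresponds to a shift of 6 (1 << 6 == 64).
A tiny standalone sketch of the pages-to-bits mapping (it mirrors
cma_bitmap_pages_to_bits() below; names and numbers are only for
illustration):

	#include <stdio.h>

	/* Number of bitmap bits needed for 'pages' pages when one bit
	 * represents (1 << shift) pages, rounding up like ALIGN(). */
	static unsigned long pages_to_bits(unsigned long pages,
					   unsigned int shift)
	{
		unsigned long pages_per_bit = 1UL << shift;

		return (pages + pages_per_bit - 1) / pages_per_bit;
	}

	int main(void)
	{
		printf("%lu\n", pages_to_bits(128, 0)); /* 128 bits, DMA case */
		printf("%lu\n", pages_to_bits(128, 6)); /*   2 bits, KVM case */
		return 0;
	}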

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 00e13ce..b3fe1cc 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,7 +283,7 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config CMA_AREAS
+config DMA_CMA_AREAS
 	int "Maximum count of the CMA device-private areas"
 	default 7
 	help
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..48cdac8 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
 	return 0;
 }
 
-static struct cma cma_areas[MAX_CMA_AREAS];
+static struct cma cma_areas[MAX_DMA_CMA_AREAS];
 static unsigned cma_area_count;
 
 static int __init cma_init_reserved_areas(void)
diff --git a/include/linux/cma.h b/include/linux/cma.h
new file mode 100644
index 0000000..60ba06f
--- /dev/null
+++ b/include/linux/cma.h
@@ -0,0 +1,28 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version of the License.
+ *
+ */
+
+#ifndef __CMA_H__
+#define __CMA_H__
+
+struct cma;
+
+extern struct page *cma_alloc(struct cma *cma, unsigned long count,
+				unsigned long align);
+extern bool cma_release(struct cma *cma, struct page *pages,
+				unsigned long count);
+extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma);
+#endif
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 772eab5..dfb1dc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -63,7 +63,7 @@ struct device;
  * There is always at least global CMA area and a few optional device
  * private areas configured in kernel .config.
  */
-#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
 
 extern struct cma *dma_contiguous_default_area;
 
@@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 
 #else
 
-#define MAX_CMA_AREAS	(0)
+#define MAX_DMA_CMA_AREAS	(0)
 
 static inline struct cma *dev_get_cma_area(struct device *dev)
 {
diff --git a/mm/Kconfig b/mm/Kconfig
index 7511b4a..0877ddc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -515,6 +515,17 @@ config CMA_DEBUG
 	  processing calls such as dma_alloc_from_contiguous().
 	  This option does not affect warning and error messages.
 
+config CMA_AREAS
+	int "Maximum count of the CMA areas"
+	depends on CMA
+	default 7
+	help
+	  CMA allows the creation of CMA areas for particular purposes,
+	  mainly for use as device-private areas. This parameter sets the
+	  maximum number of CMA areas in the system.
+
+	  If unsure, leave the default value "7".
+
 config ZBUD
 	tristate
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 1eaa70b..bc0422b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
 obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
+obj-$(CONFIG_CMA)	+= cma.o
diff --git a/mm/cma.c b/mm/cma.c
new file mode 100644
index 0000000..0dae88d
--- /dev/null
+++ b/mm/cma.c
@@ -0,0 +1,329 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Copyright IBM Corporation, 2013
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Marek Szyprowski <m.szyprowski@samsung.com>
+ *	Michal Nazarewicz <mina86@mina86.com>
+ *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version of the License.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+#  define DEBUG
+#endif
+#endif
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+
+struct cma {
+	unsigned long	base_pfn;
+	unsigned long	count;
+	unsigned long	*bitmap;
+	unsigned long	bitmap_shift;
+	struct mutex	lock;
+};
+
+/*
+ * There is always at least global CMA area and a few optional
+ * areas configured in kernel .config.
+ */
+#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+
+static struct cma cma_areas[MAX_CMA_AREAS];
+static unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
+
+static unsigned long cma_bitmap_mask(struct cma *cma,
+				unsigned long align_order)
+{
+	return (1 << (align_order >> cma->bitmap_shift)) - 1;
+}
+
+static unsigned long cma_bitmap_max_no(struct cma *cma)
+{
+	return cma->count >> cma->bitmap_shift;
+}
+
+static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
+						unsigned long pages)
+{
+	return ALIGN(pages, 1 << cma->bitmap_shift) >> cma->bitmap_shift;
+}
+
+static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
+{
+	unsigned long bitmapno, nr_bits;
+
+	bitmapno = (pfn - cma->base_pfn) >> cma->bitmap_shift;
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	mutex_lock(&cma->lock);
+	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
+	mutex_unlock(&cma->lock);
+}
+
+static int __init cma_activate_area(struct cma *cma)
+{
+	int max_bitmapno = cma_bitmap_max_no(cma);
+	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
+	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+	unsigned i = cma->count >> pageblock_order;
+	struct zone *zone;
+
+	pr_debug("%s()\n", __func__);
+	if (!cma->count)
+		return 0;
+
+	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!cma->bitmap)
+		return -ENOMEM;
+
+	WARN_ON_ONCE(!pfn_valid(pfn));
+	zone = page_zone(pfn_to_page(pfn));
+
+	do {
+		unsigned j;
+
+		base_pfn = pfn;
+		for (j = pageblock_nr_pages; j; --j, pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			/*
+			 * alloc_contig_range requires the pfn range
+			 * specified to be in the same zone. Make this
+			 * simple by forcing the entire CMA resv range
+			 * to be in the same zone.
+			 */
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto err;
+		}
+		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+	} while (--i);
+
+	mutex_init(&cma->lock);
+	return 0;
+
+err:
+	kfree(cma->bitmap);
+	return -EINVAL;
+}
+
+static int __init cma_init_reserved_areas(void)
+{
+	int i;
+
+	for (i = 0; i < cma_area_count; i++) {
+		int ret = cma_activate_area(&cma_areas[i]);
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * cma_declare_contiguous() - reserve custom contiguous area
+ * @size: Size of the reserved area (in bytes).
+ * @base: Base address of the reserved area (optional, use 0 for any).
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ * @bitmap_shift: Order of pages represented by one bit on bitmap.
+ * @fixed: if true, reserve the area at exactly @base (see below).
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function reserves memory from early allocator. It should be
+ * called by arch specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory. This function allows to create custom reserved areas.
+ *
+ * If @fixed is true, reserve contiguous area at exactly @base.  If false,
+ * reserve in range from @base to @limit.
+ */
+int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma)
+{
+	struct cma *cma = &cma_areas[cma_area_count];
+	int ret = 0;
+
+	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
+			__func__, (unsigned long)size, (unsigned long)base,
+			(unsigned long)limit, (unsigned long)alignment);
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
+	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
+	 * and CMA property will be broken.
+	 */
+	alignment >>= PAGE_SHIFT;
+	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
+						(int)alignment);
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+	/* size should be aligned with bitmap_shift */
+	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
+
+	/* Reserve memory */
+	if (base && fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			ret = -EBUSY;
+			goto err;
+		}
+	} else {
+		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
+							limit);
+		if (!addr) {
+			ret = -ENOMEM;
+			goto err;
+		} else {
+			base = addr;
+		}
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->bitmap_shift = bitmap_shift;
+	*res_cma = cma;
+	cma_area_count++;
+
+	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
+		(unsigned long)base);
+
+	return 0;
+
+err:
+	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+	return ret;
+}
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates part of contiguous memory on specific
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+				       unsigned long align)
+{
+	unsigned long mask, pfn, start = 0;
+	unsigned long max_bitmapno, bitmapno, nr_bits;
+	struct page *page = NULL;
+	int ret;
+
+	if (!cma || !cma->count)
+		return NULL;
+
+	pr_debug("%s(cma %p, count %ld, align %ld)\n", __func__, (void *)cma,
+		 count, align);
+
+	if (!count)
+		return NULL;
+
+	mask = cma_bitmap_mask(cma, align);
+	max_bitmapno = cma_bitmap_max_no(cma);
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	for (;;) {
+		mutex_lock(&cma->lock);
+		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
+					max_bitmapno, start, nr_bits, mask);
+		if (bitmapno >= max_bitmapno) {
+			mutex_unlock(&cma->lock);
+			break;
+		}
+		bitmap_set(cma->bitmap, bitmapno, nr_bits);
+		/*
+		 * It's safe to drop the lock here. We've marked this region for
+		 * our exclusive use. If the migration fails we will take the
+		 * lock again and unmark it.
+		 */
+		mutex_unlock(&cma->lock);
+
+		pfn = cma->base_pfn + (bitmapno << cma->bitmap_shift);
+		mutex_lock(&cma_mutex);
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+		mutex_unlock(&cma_mutex);
+		if (ret == 0) {
+			page = pfn_to_page(pfn);
+			break;
+		}
+		clear_cma_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
+		pr_debug("%s(): memory range at %p is busy, retrying\n",
+			 __func__, pfn_to_page(pfn));
+		/* try again with a bit different memory target */
+		start = bitmapno + mask + 1;
+	}
+
+	pr_debug("%s(): returned %p\n", __func__, page);
+	return page;
+}
+
+/**
+ * cma_release() - release allocated pages
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by cma_alloc().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool cma_release(struct cma *cma, struct page *pages, unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	free_contig_range(pfn, count);
+	clear_cma_bitmap(cma, pfn, count);
+
+	return true;
+}
-- 
1.7.9.5
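
Patches 2/3 and 3/3 (not included above) convert the two existing
users onto this interface. Purely as an illustration of the shape such
a conversion could take (this is a guess based on the existing
dma-contiguous interface, not the actual patch 2/3), the DMA wrappers
could reduce to roughly:

	#include <linux/cma.h>
	#include <linux/dma-contiguous.h>

	/* Allocate 'count' pages from the device's CMA area, capping the
	 * requested alignment order at CONFIG_CMA_ALIGNMENT. */
	struct page *dma_alloc_from_contiguous(struct device *dev, int count,
					       unsigned int align)
	{
		if (align > CONFIG_CMA_ALIGNMENT)
			align = CONFIG_CMA_ALIGNMENT;

		return cma_alloc(dev_get_cma_area(dev), count, align);
	}

	/* Return pages to the device's CMA area. */
	bool dma_release_from_contiguous(struct device *dev,
					 struct page *pages, int count)
	{
		return cma_release(dev_get_cma_area(dev), pages, count);
	}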


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even if they looks really similar.
>From my guess, it is caused by some needs on bitmap management. Kvm side
wants to maintain bitmap not for 1 page, but for more size. Eventually it
use bitmap where one bit represents 64 pages.

When I implement CMA related patches, I should change those two places
to apply my change and it seem to be painful to me. I want to change
this situation and reduce future code management overhead through
this patch.

This change could also help developer who want to use CMA in their
new feature development, since they can use CMA easily without
copying & pasting this reserved area management code.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 00e13ce..b3fe1cc 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,7 +283,7 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config CMA_AREAS
+config DMA_CMA_AREAS
 	int "Maximum count of the CMA device-private areas"
 	default 7
 	help
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..48cdac8 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
 	return 0;
 }
 
-static struct cma cma_areas[MAX_CMA_AREAS];
+static struct cma cma_areas[MAX_DMA_CMA_AREAS];
 static unsigned cma_area_count;
 
 static int __init cma_init_reserved_areas(void)
diff --git a/include/linux/cma.h b/include/linux/cma.h
new file mode 100644
index 0000000..60ba06f
--- /dev/null
+++ b/include/linux/cma.h
@@ -0,0 +1,28 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ *
+ */
+
+#ifndef __CMA_H__
+#define __CMA_H__
+
+struct cma;
+
+extern struct page *cma_alloc(struct cma *cma, unsigned long count,
+				unsigned long align);
+extern bool cma_release(struct cma *cma, struct page *pages,
+				unsigned long count);
+extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma);
+#endif
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 772eab5..dfb1dc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -63,7 +63,7 @@ struct device;
  * There is always at least global CMA area and a few optional device
  * private areas configured in kernel .config.
  */
-#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
 
 extern struct cma *dma_contiguous_default_area;
 
@@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 
 #else
 
-#define MAX_CMA_AREAS	(0)
+#define MAX_DMA_CMA_AREAS	(0)
 
 static inline struct cma *dev_get_cma_area(struct device *dev)
 {
diff --git a/mm/Kconfig b/mm/Kconfig
index 7511b4a..0877ddc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -515,6 +515,17 @@ config CMA_DEBUG
 	  processing calls such as dma_alloc_from_contiguous().
 	  This option does not affect warning and error messages.
 
+config CMA_AREAS
+	int "Maximum count of the CMA areas"
+	depends on CMA
+	default 7
+	help
+	  CMA allows to create CMA areas for particular purpose, mainly,
+	  used as device private area. This parameter sets the maximum
+	  number of CMA area in the system.
+
+	  If unsure, leave the default value "7".
+
 config ZBUD
 	tristate
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 1eaa70b..bc0422b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
 obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
+obj-$(CONFIG_CMA)	+= cma.o
diff --git a/mm/cma.c b/mm/cma.c
new file mode 100644
index 0000000..0dae88d
--- /dev/null
+++ b/mm/cma.c
@@ -0,0 +1,329 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Copyright IBM Corporation, 2013
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Marek Szyprowski <m.szyprowski@samsung.com>
+ *	Michal Nazarewicz <mina86@mina86.com>
+ *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+#  define DEBUG
+#endif
+#endif
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+
+struct cma {
+	unsigned long	base_pfn;
+	unsigned long	count;
+	unsigned long	*bitmap;
+	unsigned long	bitmap_shift;
+	struct mutex	lock;
+};
+
+/*
+ * There is always at least global CMA area and a few optional
+ * areas configured in kernel .config.
+ */
+#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+
+static struct cma cma_areas[MAX_CMA_AREAS];
+static unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
+
+static unsigned long cma_bitmap_mask(struct cma *cma,
+				unsigned long align_order)
+{
+	return (1 << (align_order >> cma->bitmap_shift)) - 1;
+}
+
+static unsigned long cma_bitmap_max_no(struct cma *cma)
+{
+	return cma->count >> cma->bitmap_shift;
+}
+
+static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
+						unsigned long pages)
+{
+	return ALIGN(pages, 1 << cma->bitmap_shift) >> cma->bitmap_shift;
+}
+
+static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
+{
+	unsigned long bitmapno, nr_bits;
+
+	bitmapno = (pfn - cma->base_pfn) >> cma->bitmap_shift;
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	mutex_lock(&cma->lock);
+	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
+	mutex_unlock(&cma->lock);
+}
+
+static int __init cma_activate_area(struct cma *cma)
+{
+	int max_bitmapno = cma_bitmap_max_no(cma);
+	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
+	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+	unsigned i = cma->count >> pageblock_order;
+	struct zone *zone;
+
+	pr_debug("%s()\n", __func__);
+	if (!cma->count)
+		return 0;
+
+	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!cma->bitmap)
+		return -ENOMEM;
+
+	WARN_ON_ONCE(!pfn_valid(pfn));
+	zone = page_zone(pfn_to_page(pfn));
+
+	do {
+		unsigned j;
+
+		base_pfn = pfn;
+		for (j = pageblock_nr_pages; j; --j, pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			/*
+			 * alloc_contig_range requires the pfn range
+			 * specified to be in the same zone. Make this
+			 * simple by forcing the entire CMA resv range
+			 * to be in the same zone.
+			 */
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto err;
+		}
+		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+	} while (--i);
+
+	mutex_init(&cma->lock);
+	return 0;
+
+err:
+	kfree(cma->bitmap);
+	return -EINVAL;
+}
+
+static int __init cma_init_reserved_areas(void)
+{
+	int i;
+
+	for (i = 0; i < cma_area_count; i++) {
+		int ret = cma_activate_area(&cma_areas[i]);
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * cma_declare_contiguous() - reserve custom contiguous area
+ * @size: Size of the reserved area (in bytes),
+ * @base: Base address of the reserved area optional, use 0 for any
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ * @bitmap_shift: Order of pages represented by one bit on bitmap.
+ * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function reserves memory from early allocator. It should be
+ * called by arch specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory. This function allows to create custom reserved areas.
+ *
+ * If @fixed is true, reserve contiguous area at exactly @base.  If false,
+ * reserve in range from @base to @limit.
+ */
+int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma)
+{
+	struct cma *cma = &cma_areas[cma_area_count];
+	int ret = 0;
+
+	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
+			__func__, (unsigned long)size, (unsigned long)base,
+			(unsigned long)limit, (unsigned long)alignment);
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
+	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
+	 * and CMA property will be broken.
+	 */
+	alignment >>= PAGE_SHIFT;
+	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
+						(int)alignment);
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+	/* size should be aligned with bitmap_shift */
+	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
+
+	/* Reserve memory */
+	if (base && fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			ret = -EBUSY;
+			goto err;
+		}
+	} else {
+		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
+							limit);
+		if (!addr) {
+			ret = -ENOMEM;
+			goto err;
+		} else {
+			base = addr;
+		}
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->bitmap_shift = bitmap_shift;
+	*res_cma = cma;
+	cma_area_count++;
+
+	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
+		(unsigned long)base);
+
+	return 0;
+
+err:
+	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+	return ret;
+}
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates part of contiguous memory on specific
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+				       unsigned long align)
+{
+	unsigned long mask, pfn, start = 0;
+	unsigned long max_bitmapno, bitmapno, nr_bits;
+	struct page *page = NULL;
+	int ret;
+
+	if (!cma || !cma->count)
+		return NULL;
+
+	pr_debug("%s(cma %p, count %ld, align %ld)\n", __func__, (void *)cma,
+		 count, align);
+
+	if (!count)
+		return NULL;
+
+	mask = cma_bitmap_mask(cma, align);
+	max_bitmapno = cma_bitmap_max_no(cma);
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	for (;;) {
+		mutex_lock(&cma->lock);
+		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
+					max_bitmapno, start, nr_bits, mask);
+		if (bitmapno >= max_bitmapno) {
+			mutex_unlock(&cma->lock);
+			break;
+		}
+		bitmap_set(cma->bitmap, bitmapno, nr_bits);
+		/*
+		 * It's safe to drop the lock here. We've marked this region for
+		 * our exclusive use. If the migration fails we will take the
+		 * lock again and unmark it.
+		 */
+		mutex_unlock(&cma->lock);
+
+		pfn = cma->base_pfn + (bitmapno << cma->bitmap_shift);
+		mutex_lock(&cma_mutex);
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+		mutex_unlock(&cma_mutex);
+		if (ret == 0) {
+			page = pfn_to_page(pfn);
+			break;
+		}
+		clear_cma_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
+		pr_debug("%s(): memory range at %p is busy, retrying\n",
+			 __func__, pfn_to_page(pfn));
+		/* try again with a bit different memory target */
+		start = bitmapno + mask + 1;
+	}
+
+	pr_debug("%s(): returned %p\n", __func__, page);
+	return page;
+}
+
+/**
+ * cma_release() - release allocated pages
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by alloc_cma().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool cma_release(struct cma *cma, struct page *pages, unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	free_contig_range(pfn, count);
+	clear_cma_bitmap(cma, pfn, count);
+
+	return true;
+}
-- 
1.7.9.5

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Russell King - ARM Linux, kvm, linux-mm, Gleb Natapov,
	Greg Kroah-Hartman, Alexander Graf, kvm-ppc, linux-kernel,
	Minchan Kim, Paul Mackerras, Paolo Bonzini, Joonsoo Kim,
	linuxppc-dev, linux-arm-kernel

Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even if they looks really similar.
>From my guess, it is caused by some needs on bitmap management. Kvm side
wants to maintain bitmap not for 1 page, but for more size. Eventually it
use bitmap where one bit represents 64 pages.

When I implement CMA related patches, I should change those two places
to apply my change and it seem to be painful to me. I want to change
this situation and reduce future code management overhead through
this patch.

This change could also help developer who want to use CMA in their
new feature development, since they can use CMA easily without
copying & pasting this reserved area management code.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 00e13ce..b3fe1cc 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,7 +283,7 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config CMA_AREAS
+config DMA_CMA_AREAS
 	int "Maximum count of the CMA device-private areas"
 	default 7
 	help
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..48cdac8 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
 	return 0;
 }
 
-static struct cma cma_areas[MAX_CMA_AREAS];
+static struct cma cma_areas[MAX_DMA_CMA_AREAS];
 static unsigned cma_area_count;
 
 static int __init cma_init_reserved_areas(void)
diff --git a/include/linux/cma.h b/include/linux/cma.h
new file mode 100644
index 0000000..60ba06f
--- /dev/null
+++ b/include/linux/cma.h
@@ -0,0 +1,28 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ *
+ */
+
+#ifndef __CMA_H__
+#define __CMA_H__
+
+struct cma;
+
+extern struct page *cma_alloc(struct cma *cma, unsigned long count,
+				unsigned long align);
+extern bool cma_release(struct cma *cma, struct page *pages,
+				unsigned long count);
+extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma);
+#endif
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 772eab5..dfb1dc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -63,7 +63,7 @@ struct device;
  * There is always at least global CMA area and a few optional device
  * private areas configured in kernel .config.
  */
-#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
 
 extern struct cma *dma_contiguous_default_area;
 
@@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 
 #else
 
-#define MAX_CMA_AREAS	(0)
+#define MAX_DMA_CMA_AREAS	(0)
 
 static inline struct cma *dev_get_cma_area(struct device *dev)
 {
diff --git a/mm/Kconfig b/mm/Kconfig
index 7511b4a..0877ddc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -515,6 +515,17 @@ config CMA_DEBUG
 	  processing calls such as dma_alloc_from_contiguous().
 	  This option does not affect warning and error messages.
 
+config CMA_AREAS
+	int "Maximum count of the CMA areas"
+	depends on CMA
+	default 7
+	help
+	  CMA allows to create CMA areas for particular purpose, mainly,
+	  used as device private area. This parameter sets the maximum
+	  number of CMA area in the system.
+
+	  If unsure, leave the default value "7".
+
 config ZBUD
 	tristate
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 1eaa70b..bc0422b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
 obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
+obj-$(CONFIG_CMA)	+= cma.o
diff --git a/mm/cma.c b/mm/cma.c
new file mode 100644
index 0000000..0dae88d
--- /dev/null
+++ b/mm/cma.c
@@ -0,0 +1,329 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Copyright IBM Corporation, 2013
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Marek Szyprowski <m.szyprowski@samsung.com>
+ *	Michal Nazarewicz <mina86@mina86.com>
+ *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+#  define DEBUG
+#endif
+#endif
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+
+struct cma {
+	unsigned long	base_pfn;
+	unsigned long	count;
+	unsigned long	*bitmap;
+	unsigned long	bitmap_shift;
+	struct mutex	lock;
+};
+
+/*
+ * There is always at least global CMA area and a few optional
+ * areas configured in kernel .config.
+ */
+#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+
+static struct cma cma_areas[MAX_CMA_AREAS];
+static unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
+
+static unsigned long cma_bitmap_mask(struct cma *cma,
+				unsigned long align_order)
+{
+	return (1 << (align_order >> cma->bitmap_shift)) - 1;
+}
+
+static unsigned long cma_bitmap_max_no(struct cma *cma)
+{
+	return cma->count >> cma->bitmap_shift;
+}
+
+static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
+						unsigned long pages)
+{
+	return ALIGN(pages, 1 << cma->bitmap_shift) >> cma->bitmap_shift;
+}
+
+static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
+{
+	unsigned long bitmapno, nr_bits;
+
+	bitmapno = (pfn - cma->base_pfn) >> cma->bitmap_shift;
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	mutex_lock(&cma->lock);
+	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
+	mutex_unlock(&cma->lock);
+}
+
+static int __init cma_activate_area(struct cma *cma)
+{
+	int max_bitmapno = cma_bitmap_max_no(cma);
+	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
+	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+	unsigned i = cma->count >> pageblock_order;
+	struct zone *zone;
+
+	pr_debug("%s()\n", __func__);
+	if (!cma->count)
+		return 0;
+
+	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!cma->bitmap)
+		return -ENOMEM;
+
+	WARN_ON_ONCE(!pfn_valid(pfn));
+	zone = page_zone(pfn_to_page(pfn));
+
+	do {
+		unsigned j;
+
+		base_pfn = pfn;
+		for (j = pageblock_nr_pages; j; --j, pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			/*
+			 * alloc_contig_range requires the pfn range
+			 * specified to be in the same zone. Make this
+			 * simple by forcing the entire CMA resv range
+			 * to be in the same zone.
+			 */
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto err;
+		}
+		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+	} while (--i);
+
+	mutex_init(&cma->lock);
+	return 0;
+
+err:
+	kfree(cma->bitmap);
+	return -EINVAL;
+}
+
+static int __init cma_init_reserved_areas(void)
+{
+	int i;
+
+	for (i = 0; i < cma_area_count; i++) {
+		int ret = cma_activate_area(&cma_areas[i]);
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * cma_declare_contiguous() - reserve custom contiguous area
+ * @size: Size of the reserved area (in bytes),
+ * @base: Base address of the reserved area optional, use 0 for any
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ * @bitmap_shift: Order of pages represented by one bit on bitmap.
+ * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function reserves memory from early allocator. It should be
+ * called by arch specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory. This function allows to create custom reserved areas.
+ *
+ * If @fixed is true, reserve contiguous area at exactly @base.  If false,
+ * reserve in range from @base to @limit.
+ */
+int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma)
+{
+	struct cma *cma = &cma_areas[cma_area_count];
+	int ret = 0;
+
+	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
+			__func__, (unsigned long)size, (unsigned long)base,
+			(unsigned long)limit, (unsigned long)alignment);
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
+	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
+	 * and CMA property will be broken.
+	 */
+	alignment >>= PAGE_SHIFT;
+	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
+						(int)alignment);
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+	/* size should be aligned with bitmap_shift */
+	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
+
+	/* Reserve memory */
+	if (base && fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			ret = -EBUSY;
+			goto err;
+		}
+	} else {
+		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
+							limit);
+		if (!addr) {
+			ret = -ENOMEM;
+			goto err;
+		} else {
+			base = addr;
+		}
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->bitmap_shift = bitmap_shift;
+	*res_cma = cma;
+	cma_area_count++;
+
+	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
+		(unsigned long)base);
+
+	return 0;
+
+err:
+	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+	return ret;
+}
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates part of contiguous memory on specific
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+				       unsigned long align)
+{
+	unsigned long mask, pfn, start = 0;
+	unsigned long max_bitmapno, bitmapno, nr_bits;
+	struct page *page = NULL;
+	int ret;
+
+	if (!cma || !cma->count)
+		return NULL;
+
+	pr_debug("%s(cma %p, count %ld, align %ld)\n", __func__, (void *)cma,
+		 count, align);
+
+	if (!count)
+		return NULL;
+
+	mask = cma_bitmap_mask(cma, align);
+	max_bitmapno = cma_bitmap_max_no(cma);
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	for (;;) {
+		mutex_lock(&cma->lock);
+		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
+					max_bitmapno, start, nr_bits, mask);
+		if (bitmapno >= max_bitmapno) {
+			mutex_unlock(&cma->lock);
+			break;
+		}
+		bitmap_set(cma->bitmap, bitmapno, nr_bits);
+		/*
+		 * It's safe to drop the lock here. We've marked this region for
+		 * our exclusive use. If the migration fails we will take the
+		 * lock again and unmark it.
+		 */
+		mutex_unlock(&cma->lock);
+
+		pfn = cma->base_pfn + (bitmapno << cma->bitmap_shift);
+		mutex_lock(&cma_mutex);
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+		mutex_unlock(&cma_mutex);
+		if (ret == 0) {
+			page = pfn_to_page(pfn);
+			break;
+		}
+		clear_cma_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
+		pr_debug("%s(): memory range at %p is busy, retrying\n",
+			 __func__, pfn_to_page(pfn));
+		/* try again with a bit different memory target */
+		start = bitmapno + mask + 1;
+	}
+
+	pr_debug("%s(): returned %p\n", __func__, page);
+	return page;
+}
+
+/**
+ * cma_release() - release allocated pages
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by alloc_cma().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool cma_release(struct cma *cma, struct page *pages, unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	free_contig_range(pfn, count);
+	clear_cma_bitmap(cma, pfn, count);
+
+	return true;
+}
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even if they looks really similar.
>From my guess, it is caused by some needs on bitmap management. Kvm side
wants to maintain bitmap not for 1 page, but for more size. Eventually it
use bitmap where one bit represents 64 pages.

When I implement CMA related patches, I should change those two places
to apply my change and it seem to be painful to me. I want to change
this situation and reduce future code management overhead through
this patch.

This change could also help developer who want to use CMA in their
new feature development, since they can use CMA easily without
copying & pasting this reserved area management code.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 00e13ce..b3fe1cc 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,7 +283,7 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config CMA_AREAS
+config DMA_CMA_AREAS
 	int "Maximum count of the CMA device-private areas"
 	default 7
 	help
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..48cdac8 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
 	return 0;
 }
 
-static struct cma cma_areas[MAX_CMA_AREAS];
+static struct cma cma_areas[MAX_DMA_CMA_AREAS];
 static unsigned cma_area_count;
 
 static int __init cma_init_reserved_areas(void)
diff --git a/include/linux/cma.h b/include/linux/cma.h
new file mode 100644
index 0000000..60ba06f
--- /dev/null
+++ b/include/linux/cma.h
@@ -0,0 +1,28 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ *
+ */
+
+#ifndef __CMA_H__
+#define __CMA_H__
+
+struct cma;
+
+extern struct page *cma_alloc(struct cma *cma, unsigned long count,
+				unsigned long align);
+extern bool cma_release(struct cma *cma, struct page *pages,
+				unsigned long count);
+extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma);
+#endif
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 772eab5..dfb1dc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -63,7 +63,7 @@ struct device;
  * There is always at least global CMA area and a few optional device
  * private areas configured in kernel .config.
  */
-#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
 
 extern struct cma *dma_contiguous_default_area;
 
@@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 
 #else
 
-#define MAX_CMA_AREAS	(0)
+#define MAX_DMA_CMA_AREAS	(0)
 
 static inline struct cma *dev_get_cma_area(struct device *dev)
 {
diff --git a/mm/Kconfig b/mm/Kconfig
index 7511b4a..0877ddc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -515,6 +515,17 @@ config CMA_DEBUG
 	  processing calls such as dma_alloc_from_contiguous().
 	  This option does not affect warning and error messages.
 
+config CMA_AREAS
+	int "Maximum count of the CMA areas"
+	depends on CMA
+	default 7
+	help
+	  CMA allows creation of CMA areas for a particular purpose, mainly
+	  for use as device-private areas. This parameter sets the maximum
+	  number of CMA areas in the system.
+
+	  If unsure, leave the default value "7".
+
 config ZBUD
 	tristate
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 1eaa70b..bc0422b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
 obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
+obj-$(CONFIG_CMA)	+= cma.o
diff --git a/mm/cma.c b/mm/cma.c
new file mode 100644
index 0000000..0dae88d
--- /dev/null
+++ b/mm/cma.c
@@ -0,0 +1,329 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Copyright IBM Corporation, 2013
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Marek Szyprowski <m.szyprowski@samsung.com>
+ *	Michal Nazarewicz <mina86@mina86.com>
+ *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+#  define DEBUG
+#endif
+#endif
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+
+struct cma {
+	unsigned long	base_pfn;
+	unsigned long	count;
+	unsigned long	*bitmap;
+	unsigned long	bitmap_shift;
+	struct mutex	lock;
+};
+
+/*
+ * There is always at least global CMA area and a few optional
+ * areas configured in kernel .config.
+ */
+#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+
+static struct cma cma_areas[MAX_CMA_AREAS];
+static unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
+
+static unsigned long cma_bitmap_mask(struct cma *cma,
+				unsigned long align_order)
+{
+	return (1 << (align_order >> cma->bitmap_shift)) - 1;
+}
+
+static unsigned long cma_bitmap_max_no(struct cma *cma)
+{
+	return cma->count >> cma->bitmap_shift;
+}
+
+static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
+						unsigned long pages)
+{
+	return ALIGN(pages, 1 << cma->bitmap_shift) >> cma->bitmap_shift;
+}
+
+static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
+{
+	unsigned long bitmapno, nr_bits;
+
+	bitmapno = (pfn - cma->base_pfn) >> cma->bitmap_shift;
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	mutex_lock(&cma->lock);
+	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
+	mutex_unlock(&cma->lock);
+}
+
+static int __init cma_activate_area(struct cma *cma)
+{
+	int max_bitmapno = cma_bitmap_max_no(cma);
+	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
+	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+	unsigned i = cma->count >> pageblock_order;
+	struct zone *zone;
+
+	pr_debug("%s()\n", __func__);
+	if (!cma->count)
+		return 0;
+
+	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!cma->bitmap)
+		return -ENOMEM;
+
+	WARN_ON_ONCE(!pfn_valid(pfn));
+	zone = page_zone(pfn_to_page(pfn));
+
+	do {
+		unsigned j;
+
+		base_pfn = pfn;
+		for (j = pageblock_nr_pages; j; --j, pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			/*
+			 * alloc_contig_range requires the pfn range
+			 * specified to be in the same zone. Make this
+			 * simple by forcing the entire CMA resv range
+			 * to be in the same zone.
+			 */
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto err;
+		}
+		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+	} while (--i);
+
+	mutex_init(&cma->lock);
+	return 0;
+
+err:
+	kfree(cma->bitmap);
+	return -EINVAL;
+}
+
+static int __init cma_init_reserved_areas(void)
+{
+	int i;
+
+	for (i = 0; i < cma_area_count; i++) {
+		int ret = cma_activate_area(&cma_areas[i]);
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * cma_declare_contiguous() - reserve custom contiguous area
+ * @size: Size of the reserved area (in bytes).
+ * @base: Base address of the reserved area (optional, use 0 for any).
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ * @bitmap_shift: Order of pages represented by one bit on bitmap.
+ * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function reserves memory from the early allocator. It should be
+ * called by arch-specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory. This function allows creation of custom reserved areas.
+ *
+ * If @fixed is true, reserve contiguous area at exactly @base.  If false,
+ * reserve in range from @base to @limit.
+ */
+int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma)
+{
+	struct cma *cma = &cma_areas[cma_area_count];
+	int ret = 0;
+
+	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
+			__func__, (unsigned long)size, (unsigned long)base,
+			(unsigned long)limit, (unsigned long)alignment);
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
+	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
+	 * and CMA property will be broken.
+	 */
+	alignment >>= PAGE_SHIFT;
+	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
+						(int)alignment);
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+	/* size should be aligned with bitmap_shift */
+	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
+
+	/* Reserve memory */
+	if (base && fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			ret = -EBUSY;
+			goto err;
+		}
+	} else {
+		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
+							limit);
+		if (!addr) {
+			ret = -ENOMEM;
+			goto err;
+		} else {
+			base = addr;
+		}
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->bitmap_shift = bitmap_shift;
+	*res_cma = cma;
+	cma_area_count++;
+
+	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
+		(unsigned long)base);
+
+	return 0;
+
+err:
+	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+	return ret;
+}
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates part of the contiguous memory from the specified
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+				       unsigned long align)
+{
+	unsigned long mask, pfn, start = 0;
+	unsigned long max_bitmapno, bitmapno, nr_bits;
+	struct page *page = NULL;
+	int ret;
+
+	if (!cma || !cma->count)
+		return NULL;
+
+	pr_debug("%s(cma %p, count %ld, align %ld)\n", __func__, (void *)cma,
+		 count, align);
+
+	if (!count)
+		return NULL;
+
+	mask = cma_bitmap_mask(cma, align);
+	max_bitmapno = cma_bitmap_max_no(cma);
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	for (;;) {
+		mutex_lock(&cma->lock);
+		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
+					max_bitmapno, start, nr_bits, mask);
+		if (bitmapno >= max_bitmapno) {
+			mutex_unlock(&cma->lock);
+			break;
+		}
+		bitmap_set(cma->bitmap, bitmapno, nr_bits);
+		/*
+		 * It's safe to drop the lock here. We've marked this region for
+		 * our exclusive use. If the migration fails we will take the
+		 * lock again and unmark it.
+		 */
+		mutex_unlock(&cma->lock);
+
+		pfn = cma->base_pfn + (bitmapno << cma->bitmap_shift);
+		mutex_lock(&cma_mutex);
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+		mutex_unlock(&cma_mutex);
+		if (ret == 0) {
+			page = pfn_to_page(pfn);
+			break;
+		}
+		clear_cma_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
+		pr_debug("%s(): memory range at %p is busy, retrying\n",
+			 __func__, pfn_to_page(pfn));
+		/* try again with a bit different memory target */
+		start = bitmapno + mask + 1;
+	}
+
+	pr_debug("%s(): returned %p\n", __func__, page);
+	return page;
+}
+
+/**
+ * cma_release() - release allocated pages
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by cma_alloc().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool cma_release(struct cma *cma, struct page *pages, unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	free_contig_range(pfn, count);
+	clear_cma_bitmap(cma, pfn, count);
+
+	return true;
+}
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread
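
A worked example of the input sanitisation in cma_declare_contiguous() above
may help. The numbers below are only illustrative: they assume 4 KiB pages,
MAX_ORDER == 11 and pageblock_order no larger than MAX_ORDER - 1, and the
alignment == 0 case that the dma-contiguous caller in patch 2/3 passes.

	alignment >>= PAGE_SHIFT;                  /* 0                       */
	alignment = PAGE_SIZE << max3(10, 10, 0);  /* 4 KiB << 10 == 4 MiB    */
	base   = ALIGN(base, SZ_4M);
	size   = ALIGN(size, SZ_4M);
	limit &= ~(SZ_4M - 1);

So with these defaults every CMA area ends up placed and sized on a 4 MiB
boundary, which is what keeps the reservation MAX_ORDER-aligned for the
buddy allocator.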

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Currently, there are two users on CMA functionality, one is the DMA
subsystem and the other is the kvm on powerpc. They have their own code
to manage CMA reserved area even if they looks really similar.
From my guess, it is caused by some needs on bitmap management. Kvm side
wants to maintain bitmap not for 1 page, but for more size. Eventually it
use bitmap where one bit represents 64 pages.

When I implement CMA related patches, I should change those two places
to apply my change and it seem to be painful to me. I want to change
this situation and reduce future code management overhead through
this patch.

This change could also help developer who want to use CMA in their
new feature development, since they can use CMA easily without
copying & pasting this reserved area management code.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index 00e13ce..b3fe1cc 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,7 +283,7 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config CMA_AREAS
+config DMA_CMA_AREAS
 	int "Maximum count of the CMA device-private areas"
 	default 7
 	help
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 83969f8..48cdac8 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
 	return 0;
 }
 
-static struct cma cma_areas[MAX_CMA_AREAS];
+static struct cma cma_areas[MAX_DMA_CMA_AREAS];
 static unsigned cma_area_count;
 
 static int __init cma_init_reserved_areas(void)
diff --git a/include/linux/cma.h b/include/linux/cma.h
new file mode 100644
index 0000000..60ba06f
--- /dev/null
+++ b/include/linux/cma.h
@@ -0,0 +1,28 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ *
+ */
+
+#ifndef __CMA_H__
+#define __CMA_H__
+
+struct cma;
+
+extern struct page *cma_alloc(struct cma *cma, unsigned long count,
+				unsigned long align);
+extern bool cma_release(struct cma *cma, struct page *pages,
+				unsigned long count);
+extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma);
+#endif
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index 772eab5..dfb1dc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -63,7 +63,7 @@ struct device;
  * There is always at least global CMA area and a few optional device
  * private areas configured in kernel .config.
  */
-#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
 
 extern struct cma *dma_contiguous_default_area;
 
@@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 
 #else
 
-#define MAX_CMA_AREAS	(0)
+#define MAX_DMA_CMA_AREAS	(0)
 
 static inline struct cma *dev_get_cma_area(struct device *dev)
 {
diff --git a/mm/Kconfig b/mm/Kconfig
index 7511b4a..0877ddc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -515,6 +515,17 @@ config CMA_DEBUG
 	  processing calls such as dma_alloc_from_contiguous().
 	  This option does not affect warning and error messages.
 
+config CMA_AREAS
+	int "Maximum count of the CMA areas"
+	depends on CMA
+	default 7
+	help
+	  CMA allows creation of CMA areas for a particular purpose, mainly
+	  for use as device-private areas. This parameter sets the maximum
+	  number of CMA areas in the system.
+
+	  If unsure, leave the default value "7".
+
 config ZBUD
 	tristate
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 1eaa70b..bc0422b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
 obj-$(CONFIG_ZBUD)	+= zbud.o
 obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
 obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
+obj-$(CONFIG_CMA)	+= cma.o
diff --git a/mm/cma.c b/mm/cma.c
new file mode 100644
index 0000000..0dae88d
--- /dev/null
+++ b/mm/cma.c
@@ -0,0 +1,329 @@
+/*
+ * Contiguous Memory Allocator
+ *
+ * Copyright (c) 2010-2011 by Samsung Electronics.
+ * Copyright IBM Corporation, 2013
+ * Copyright LG Electronics Inc., 2014
+ * Written by:
+ *	Marek Szyprowski <m.szyprowski@samsung.com>
+ *	Michal Nazarewicz <mina86@mina86.com>
+ *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License or (at your optional) any later version of the license.
+ */
+
+#define pr_fmt(fmt) "cma: " fmt
+
+#ifdef CONFIG_CMA_DEBUG
+#ifndef DEBUG
+#  define DEBUG
+#endif
+#endif
+
+#include <linux/memblock.h>
+#include <linux/err.h>
+#include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/sizes.h>
+#include <linux/slab.h>
+
+struct cma {
+	unsigned long	base_pfn;
+	unsigned long	count;
+	unsigned long	*bitmap;
+	unsigned long	bitmap_shift;
+	struct mutex	lock;
+};
+
+/*
+ * There is always at least global CMA area and a few optional
+ * areas configured in kernel .config.
+ */
+#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
+
+static struct cma cma_areas[MAX_CMA_AREAS];
+static unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);
+
+static unsigned long cma_bitmap_mask(struct cma *cma,
+				unsigned long align_order)
+{
+	return (1 << (align_order >> cma->bitmap_shift)) - 1;
+}
+
+static unsigned long cma_bitmap_max_no(struct cma *cma)
+{
+	return cma->count >> cma->bitmap_shift;
+}
+
+static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
+						unsigned long pages)
+{
+	return ALIGN(pages, 1 << cma->bitmap_shift) >> cma->bitmap_shift;
+}
+
+static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
+{
+	unsigned long bitmapno, nr_bits;
+
+	bitmapno = (pfn - cma->base_pfn) >> cma->bitmap_shift;
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	mutex_lock(&cma->lock);
+	bitmap_clear(cma->bitmap, bitmapno, nr_bits);
+	mutex_unlock(&cma->lock);
+}
+
+static int __init cma_activate_area(struct cma *cma)
+{
+	int max_bitmapno = cma_bitmap_max_no(cma);
+	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
+	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
+	unsigned i = cma->count >> pageblock_order;
+	struct zone *zone;
+
+	pr_debug("%s()\n", __func__);
+	if (!cma->count)
+		return 0;
+
+	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+	if (!cma->bitmap)
+		return -ENOMEM;
+
+	WARN_ON_ONCE(!pfn_valid(pfn));
+	zone = page_zone(pfn_to_page(pfn));
+
+	do {
+		unsigned j;
+
+		base_pfn = pfn;
+		for (j = pageblock_nr_pages; j; --j, pfn++) {
+			WARN_ON_ONCE(!pfn_valid(pfn));
+			/*
+			 * alloc_contig_range requires the pfn range
+			 * specified to be in the same zone. Make this
+			 * simple by forcing the entire CMA resv range
+			 * to be in the same zone.
+			 */
+			if (page_zone(pfn_to_page(pfn)) != zone)
+				goto err;
+		}
+		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
+	} while (--i);
+
+	mutex_init(&cma->lock);
+	return 0;
+
+err:
+	kfree(cma->bitmap);
+	return -EINVAL;
+}
+
+static int __init cma_init_reserved_areas(void)
+{
+	int i;
+
+	for (i = 0; i < cma_area_count; i++) {
+		int ret = cma_activate_area(&cma_areas[i]);
+
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+core_initcall(cma_init_reserved_areas);
+
+/**
+ * cma_declare_contiguous() - reserve custom contiguous area
+ * @size: Size of the reserved area (in bytes).
+ * @base: Base address of the reserved area (optional, use 0 for any).
+ * @limit: End address of the reserved memory (optional, 0 for any).
+ * @bitmap_shift: Order of pages represented by one bit on bitmap.
+ * @fixed: hint about where to place the reserved area
+ * @res_cma: Pointer to store the created cma region.
+ *
+ * This function reserves memory from the early allocator. It should be
+ * called by arch-specific code once the early allocator (memblock or bootmem)
+ * has been activated and all other subsystems have already allocated/reserved
+ * memory. This function allows creation of custom reserved areas.
+ *
+ * If @fixed is true, reserve contiguous area at exactly @base.  If false,
+ * reserve in range from @base to @limit.
+ */
+int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
+				phys_addr_t limit, phys_addr_t alignment,
+				unsigned long bitmap_shift, bool fixed,
+				struct cma **res_cma)
+{
+	struct cma *cma = &cma_areas[cma_area_count];
+	int ret = 0;
+
+	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
+			__func__, (unsigned long)size, (unsigned long)base,
+			(unsigned long)limit, (unsigned long)alignment);
+
+	/* Sanity checks */
+	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
+		pr_err("Not enough slots for CMA reserved regions!\n");
+		return -ENOSPC;
+	}
+
+	if (!size)
+		return -EINVAL;
+
+	/*
+	 * Sanitise input arguments.
+	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
+	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
+	 * and CMA property will be broken.
+	 */
+	alignment >>= PAGE_SHIFT;
+	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
+						(int)alignment);
+	base = ALIGN(base, alignment);
+	size = ALIGN(size, alignment);
+	limit &= ~(alignment - 1);
+	/* size should be aligned with bitmap_shift */
+	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
+
+	/* Reserve memory */
+	if (base && fixed) {
+		if (memblock_is_region_reserved(base, size) ||
+		    memblock_reserve(base, size) < 0) {
+			ret = -EBUSY;
+			goto err;
+		}
+	} else {
+		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
+							limit);
+		if (!addr) {
+			ret = -ENOMEM;
+			goto err;
+		} else {
+			base = addr;
+		}
+	}
+
+	/*
+	 * Each reserved area must be initialised later, when more kernel
+	 * subsystems (like slab allocator) are available.
+	 */
+	cma->base_pfn = PFN_DOWN(base);
+	cma->count = size >> PAGE_SHIFT;
+	cma->bitmap_shift = bitmap_shift;
+	*res_cma = cma;
+	cma_area_count++;
+
+	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
+		(unsigned long)base);
+
+	return 0;
+
+err:
+	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
+	return ret;
+}
+
+/**
+ * cma_alloc() - allocate pages from contiguous area
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @count: Requested number of pages.
+ * @align: Requested alignment of pages (in PAGE_SIZE order).
+ *
+ * This function allocates part of the contiguous memory from the specified
+ * contiguous memory area.
+ */
+struct page *cma_alloc(struct cma *cma, unsigned long count,
+				       unsigned long align)
+{
+	unsigned long mask, pfn, start = 0;
+	unsigned long max_bitmapno, bitmapno, nr_bits;
+	struct page *page = NULL;
+	int ret;
+
+	if (!cma || !cma->count)
+		return NULL;
+
+	pr_debug("%s(cma %p, count %ld, align %ld)\n", __func__, (void *)cma,
+		 count, align);
+
+	if (!count)
+		return NULL;
+
+	mask = cma_bitmap_mask(cma, align);
+	max_bitmapno = cma_bitmap_max_no(cma);
+	nr_bits = cma_bitmap_pages_to_bits(cma, count);
+
+	for (;;) {
+		mutex_lock(&cma->lock);
+		bitmapno = bitmap_find_next_zero_area(cma->bitmap,
+					max_bitmapno, start, nr_bits, mask);
+		if (bitmapno >= max_bitmapno) {
+			mutex_unlock(&cma->lock);
+			break;
+		}
+		bitmap_set(cma->bitmap, bitmapno, nr_bits);
+		/*
+		 * It's safe to drop the lock here. We've marked this region for
+		 * our exclusive use. If the migration fails we will take the
+		 * lock again and unmark it.
+		 */
+		mutex_unlock(&cma->lock);
+
+		pfn = cma->base_pfn + (bitmapno << cma->bitmap_shift);
+		mutex_lock(&cma_mutex);
+		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
+		mutex_unlock(&cma_mutex);
+		if (ret == 0) {
+			page = pfn_to_page(pfn);
+			break;
+		}
+		clear_cma_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
+		pr_debug("%s(): memory range at %p is busy, retrying\n",
+			 __func__, pfn_to_page(pfn));
+		/* try again with a bit different memory target */
+		start = bitmapno + mask + 1;
+	}
+
+	pr_debug("%s(): returned %p\n", __func__, page);
+	return page;
+}
+
+/**
+ * cma_release() - release allocated pages
+ * @cma:   Contiguous memory region for which the allocation is performed.
+ * @pages: Allocated pages.
+ * @count: Number of allocated pages.
+ *
+ * This function releases memory allocated by cma_alloc().
+ * It returns false when provided pages do not belong to contiguous area and
+ * true otherwise.
+ */
+bool cma_release(struct cma *cma, struct page *pages, unsigned long count)
+{
+	unsigned long pfn;
+
+	if (!cma || !pages)
+		return false;
+
+	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+
+	pfn = page_to_pfn(pages);
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+		return false;
+
+	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
+
+	free_contig_range(pfn, count);
+	clear_cma_bitmap(cma, pfn, count);
+
+	return true;
+}
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread
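
To make the bitmap granularity concrete: with bitmap_shift == 6 — one bit
per 64 pages, the layout the kvm/powerpc user mentioned in the changelog
effectively wants — the conversion done by cma_bitmap_pages_to_bits() works
out as in this sketch. pages_to_bits() is a hypothetical stand-in used only
for illustration, not part of the patch.

#include <linux/kernel.h>	/* ALIGN() */

/* same arithmetic as cma_bitmap_pages_to_bits(): round the page count up
 * to a whole bit, then convert pages to bits */
static unsigned long pages_to_bits(unsigned long pages, unsigned long shift)
{
	return ALIGN(pages, 1UL << shift) >> shift;
}

/*
 * pages_to_bits(100, 6) == 2: a 100-page request occupies two 64-page bits,
 * so up to 28 pages of the area are unusable by other allocations at this
 * granularity; with bitmap_shift == 0 (the DMA case) there is no rounding.
 */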

* [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
  2014-06-03  1:11 ` Joonsoo Kim
                     ` (2 preceding siblings ...)
  (?)
@ 2014-06-03  1:11   ` Joonsoo Kim
  -1 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,16 +283,6 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config DMA_CMA_AREAS
-	int "Maximum count of the CMA device-private areas"
-	default 7
-	help
-	  CMA allows to create CMA areas for particular devices. This parameter
-	  sets the maximum number of such device private CMA areas in the
-	  system.
-
-	  If unsure, leave the default value "7".
-
 endif
 
 endmenu
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 48cdac8..4bce4e1 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -24,23 +24,9 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/mutex.h>
-#include <linux/page-isolation.h>
 #include <linux/sizes.h>
-#include <linux/slab.h>
-#include <linux/swap.h>
-#include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
-
-struct cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-	struct mutex	lock;
-};
-
-struct cma *dma_contiguous_default_area;
+#include <linux/cma.h>
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -48,6 +34,8 @@ struct cma *dma_contiguous_default_area;
 #define CMA_SIZE_MBYTES 0
 #endif
 
+struct cma *dma_contiguous_default_area;
+
 /*
  * Default global CMA area size can be defined in kernel's .config.
  * This is useful mainly for distro maintainers to create a kernel
@@ -154,55 +142,6 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
 	}
 }
 
-static DEFINE_MUTEX(cma_mutex);
-
-static int __init cma_activate_area(struct cma *cma)
-{
-	int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
-	struct zone *zone;
-
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-
-	mutex_init(&cma->lock);
-	return 0;
-}
-
-static struct cma cma_areas[MAX_DMA_CMA_AREAS];
-static unsigned cma_area_count;
-
-static int __init cma_init_reserved_areas(void)
-{
-	int i;
-
-	for (i = 0; i < cma_area_count; i++) {
-		int ret = cma_activate_area(&cma_areas[i]);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-core_initcall(cma_init_reserved_areas);
-
 /**
  * dma_contiguous_reserve_area() - reserve custom contiguous area
  * @size: Size of the reserved area (in bytes),
@@ -224,176 +163,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
-	int ret = 0;
-
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
-
-	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	/* Reserve memory */
-	if (base && fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
-		}
-	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
-		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
-		}
-	}
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	*res_cma = cma;
-	cma_area_count++;
+	int ret;
+	struct cma *cma;
 
-	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
-		(unsigned long)base);
+	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, &cma);
+	if (ret)
+		return ret;
 
 	/* Architecture specific contiguous memory fixup. */
 	dma_contiguous_early_fixup(base, size);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return ret;
-}
+	*res_cma = cma;
 
-static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
-{
-	mutex_lock(&cma->lock);
-	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
-	mutex_unlock(&cma->lock);
+	return 0;
 }
 
-/**
- * dma_alloc_from_contiguous() - allocate pages from contiguous area
- * @dev:   Pointer to device for which the allocation is performed.
- * @count: Requested number of pages.
- * @align: Requested alignment of pages (in PAGE_SIZE order).
- *
- * This function allocates memory buffer for specified device. It uses
- * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
- */
 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
 				       unsigned int align)
 {
-	unsigned long mask, pfn, pageno, start = 0;
-	struct cma *cma = dev_get_cma_area(dev);
-	struct page *page = NULL;
-	int ret;
-
-	if (!cma || !cma->count)
-		return NULL;
-
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
-		 count, align);
-
-	if (!count)
-		return NULL;
-
-	mask = (1 << align) - 1;
-
-
-	for (;;) {
-		mutex_lock(&cma->lock);
-		pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
-						    start, count, mask);
-		if (pageno >= cma->count) {
-			mutex_unlock(&cma->lock);
-			break;
-		}
-		bitmap_set(cma->bitmap, pageno, count);
-		/*
-		 * It's safe to drop the lock here. We've marked this region for
-		 * our exclusive use. If the migration fails we will take the
-		 * lock again and unmark it.
-		 */
-		mutex_unlock(&cma->lock);
-
-		pfn = cma->base_pfn + pageno;
-		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
-			break;
-		} else if (ret != -EBUSY) {
-			clear_cma_bitmap(cma, pfn, count);
-			break;
-		}
-		clear_cma_bitmap(cma, pfn, count);
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
+	return cma_alloc(dev_get_cma_area(dev), count, align);
 }
 
-/**
- * dma_release_from_contiguous() - release allocated pages
- * @dev:   Pointer to device for which the pages were allocated.
- * @pages: Allocated pages.
- * @count: Number of allocated pages.
- *
- * This function releases memory allocated by dma_alloc_from_contiguous().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count)
 {
-	struct cma *cma = dev_get_cma_area(dev);
-	unsigned long pfn;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
-
-	free_contig_range(pfn, count);
-	clear_cma_bitmap(cma, pfn, count);
-
-	return true;
+	return cma_release(dev_get_cma_area(dev), pages, count);
 }
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index dfb1dc9..ecb85ac 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -53,9 +53,10 @@
 
 #ifdef __KERNEL__
 
+#include <linux/device.h>
+
 struct cma;
 struct page;
-struct device;
 
 #ifdef CONFIG_DMA_CMA
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread
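
Since the point of this patch is that callers see no functional change, a
sketch of an existing consumer is worth keeping in mind: a driver keeps
calling the dma-contiguous wrappers exactly as before, and only the backing
implementation now goes through cma_alloc()/cma_release(). The my_* names
below are hypothetical.

#include <linux/printk.h>
#include <linux/dma-contiguous.h>

/* allocate a 64-page buffer from the device (or default global) CMA area,
 * aligned to 2^4 pages */
static struct page *my_grab_buffer(struct device *my_dev)
{
	return dma_alloc_from_contiguous(my_dev, 64, 4);
}

static void my_put_buffer(struct device *my_dev, struct page *page)
{
	if (!dma_release_from_contiguous(my_dev, page, 64))
		pr_warn("pages were not allocated from the CMA area\n");
}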

* [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,16 +283,6 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config DMA_CMA_AREAS
-	int "Maximum count of the CMA device-private areas"
-	default 7
-	help
-	  CMA allows to create CMA areas for particular devices. This parameter
-	  sets the maximum number of such device private CMA areas in the
-	  system.
-
-	  If unsure, leave the default value "7".
-
 endif
 
 endmenu
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 48cdac8..4bce4e1 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -24,23 +24,9 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/mutex.h>
-#include <linux/page-isolation.h>
 #include <linux/sizes.h>
-#include <linux/slab.h>
-#include <linux/swap.h>
-#include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
-
-struct cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-	struct mutex	lock;
-};
-
-struct cma *dma_contiguous_default_area;
+#include <linux/cma.h>
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -48,6 +34,8 @@ struct cma *dma_contiguous_default_area;
 #define CMA_SIZE_MBYTES 0
 #endif
 
+struct cma *dma_contiguous_default_area;
+
 /*
  * Default global CMA area size can be defined in kernel's .config.
  * This is useful mainly for distro maintainers to create a kernel
@@ -154,55 +142,6 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
 	}
 }
 
-static DEFINE_MUTEX(cma_mutex);
-
-static int __init cma_activate_area(struct cma *cma)
-{
-	int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
-	struct zone *zone;
-
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-
-	mutex_init(&cma->lock);
-	return 0;
-}
-
-static struct cma cma_areas[MAX_DMA_CMA_AREAS];
-static unsigned cma_area_count;
-
-static int __init cma_init_reserved_areas(void)
-{
-	int i;
-
-	for (i = 0; i < cma_area_count; i++) {
-		int ret = cma_activate_area(&cma_areas[i]);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-core_initcall(cma_init_reserved_areas);
-
 /**
  * dma_contiguous_reserve_area() - reserve custom contiguous area
  * @size: Size of the reserved area (in bytes),
@@ -224,176 +163,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
-	int ret = 0;
-
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
-
-	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	/* Reserve memory */
-	if (base && fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
-		}
-	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
-		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
-		}
-	}
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	*res_cma = cma;
-	cma_area_count++;
+	int ret;
+	struct cma *cma;
 
-	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
-		(unsigned long)base);
+	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, &cma);
+	if (ret)
+		return ret;
 
 	/* Architecture specific contiguous memory fixup. */
 	dma_contiguous_early_fixup(base, size);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return ret;
-}
+	*res_cma = cma;
 
-static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
-{
-	mutex_lock(&cma->lock);
-	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
-	mutex_unlock(&cma->lock);
+	return 0;
 }
 
-/**
- * dma_alloc_from_contiguous() - allocate pages from contiguous area
- * @dev:   Pointer to device for which the allocation is performed.
- * @count: Requested number of pages.
- * @align: Requested alignment of pages (in PAGE_SIZE order).
- *
- * This function allocates memory buffer for specified device. It uses
- * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
- */
 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
 				       unsigned int align)
 {
-	unsigned long mask, pfn, pageno, start = 0;
-	struct cma *cma = dev_get_cma_area(dev);
-	struct page *page = NULL;
-	int ret;
-
-	if (!cma || !cma->count)
-		return NULL;
-
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
-		 count, align);
-
-	if (!count)
-		return NULL;
-
-	mask = (1 << align) - 1;
-
-
-	for (;;) {
-		mutex_lock(&cma->lock);
-		pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
-						    start, count, mask);
-		if (pageno >= cma->count) {
-			mutex_unlock(&cma->lock);
-			break;
-		}
-		bitmap_set(cma->bitmap, pageno, count);
-		/*
-		 * It's safe to drop the lock here. We've marked this region for
-		 * our exclusive use. If the migration fails we will take the
-		 * lock again and unmark it.
-		 */
-		mutex_unlock(&cma->lock);
-
-		pfn = cma->base_pfn + pageno;
-		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
-			break;
-		} else if (ret != -EBUSY) {
-			clear_cma_bitmap(cma, pfn, count);
-			break;
-		}
-		clear_cma_bitmap(cma, pfn, count);
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
+	return cma_alloc(dev_get_cma_area(dev), count, align);
 }
 
-/**
- * dma_release_from_contiguous() - release allocated pages
- * @dev:   Pointer to device for which the pages were allocated.
- * @pages: Allocated pages.
- * @count: Number of allocated pages.
- *
- * This function releases memory allocated by dma_alloc_from_contiguous().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count)
 {
-	struct cma *cma = dev_get_cma_area(dev);
-	unsigned long pfn;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
-
-	free_contig_range(pfn, count);
-	clear_cma_bitmap(cma, pfn, count);
-
-	return true;
+	return cma_release(dev_get_cma_area(dev), pages, count);
 }
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index dfb1dc9..ecb85ac 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -53,9 +53,10 @@
 
 #ifdef __KERNEL__
 
+#include <linux/device.h>
+
 struct cma;
 struct page;
-struct device;
 
 #ifdef CONFIG_DMA_CMA
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Russell King - ARM Linux, kvm, linux-mm, Gleb Natapov,
	Greg Kroah-Hartman, Alexander Graf, kvm-ppc, linux-kernel,
	Minchan Kim, Paul Mackerras, Paolo Bonzini, Joonsoo Kim,
	linuxppc-dev, linux-arm-kernel

Now that we have a general CMA reserved area management framework,
use it for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,16 +283,6 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config DMA_CMA_AREAS
-	int "Maximum count of the CMA device-private areas"
-	default 7
-	help
-	  CMA allows to create CMA areas for particular devices. This parameter
-	  sets the maximum number of such device private CMA areas in the
-	  system.
-
-	  If unsure, leave the default value "7".
-
 endif
 
 endmenu
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 48cdac8..4bce4e1 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -24,23 +24,9 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/mutex.h>
-#include <linux/page-isolation.h>
 #include <linux/sizes.h>
-#include <linux/slab.h>
-#include <linux/swap.h>
-#include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
-
-struct cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-	struct mutex	lock;
-};
-
-struct cma *dma_contiguous_default_area;
+#include <linux/cma.h>
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -48,6 +34,8 @@ struct cma *dma_contiguous_default_area;
 #define CMA_SIZE_MBYTES 0
 #endif
 
+struct cma *dma_contiguous_default_area;
+
 /*
  * Default global CMA area size can be defined in kernel's .config.
  * This is useful mainly for distro maintainers to create a kernel
@@ -154,55 +142,6 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
 	}
 }
 
-static DEFINE_MUTEX(cma_mutex);
-
-static int __init cma_activate_area(struct cma *cma)
-{
-	int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
-	struct zone *zone;
-
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-
-	mutex_init(&cma->lock);
-	return 0;
-}
-
-static struct cma cma_areas[MAX_DMA_CMA_AREAS];
-static unsigned cma_area_count;
-
-static int __init cma_init_reserved_areas(void)
-{
-	int i;
-
-	for (i = 0; i < cma_area_count; i++) {
-		int ret = cma_activate_area(&cma_areas[i]);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-core_initcall(cma_init_reserved_areas);
-
 /**
  * dma_contiguous_reserve_area() - reserve custom contiguous area
  * @size: Size of the reserved area (in bytes),
@@ -224,176 +163,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
-	int ret = 0;
-
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
-
-	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	/* Reserve memory */
-	if (base && fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
-		}
-	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
-		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
-		}
-	}
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	*res_cma = cma;
-	cma_area_count++;
+	int ret;
+	struct cma *cma;
 
-	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
-		(unsigned long)base);
+	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, &cma);
+	if (ret)
+		return ret;
 
 	/* Architecture specific contiguous memory fixup. */
 	dma_contiguous_early_fixup(base, size);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return ret;
-}
+	*res_cma = cma;
 
-static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
-{
-	mutex_lock(&cma->lock);
-	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
-	mutex_unlock(&cma->lock);
+	return 0;
 }
 
-/**
- * dma_alloc_from_contiguous() - allocate pages from contiguous area
- * @dev:   Pointer to device for which the allocation is performed.
- * @count: Requested number of pages.
- * @align: Requested alignment of pages (in PAGE_SIZE order).
- *
- * This function allocates memory buffer for specified device. It uses
- * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
- */
 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
 				       unsigned int align)
 {
-	unsigned long mask, pfn, pageno, start = 0;
-	struct cma *cma = dev_get_cma_area(dev);
-	struct page *page = NULL;
-	int ret;
-
-	if (!cma || !cma->count)
-		return NULL;
-
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
-		 count, align);
-
-	if (!count)
-		return NULL;
-
-	mask = (1 << align) - 1;
-
-
-	for (;;) {
-		mutex_lock(&cma->lock);
-		pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
-						    start, count, mask);
-		if (pageno >= cma->count) {
-			mutex_unlock(&cma->lock);
-			break;
-		}
-		bitmap_set(cma->bitmap, pageno, count);
-		/*
-		 * It's safe to drop the lock here. We've marked this region for
-		 * our exclusive use. If the migration fails we will take the
-		 * lock again and unmark it.
-		 */
-		mutex_unlock(&cma->lock);
-
-		pfn = cma->base_pfn + pageno;
-		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
-			break;
-		} else if (ret != -EBUSY) {
-			clear_cma_bitmap(cma, pfn, count);
-			break;
-		}
-		clear_cma_bitmap(cma, pfn, count);
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
+	return cma_alloc(dev_get_cma_area(dev), count, align);
 }
 
-/**
- * dma_release_from_contiguous() - release allocated pages
- * @dev:   Pointer to device for which the pages were allocated.
- * @pages: Allocated pages.
- * @count: Number of allocated pages.
- *
- * This function releases memory allocated by dma_alloc_from_contiguous().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count)
 {
-	struct cma *cma = dev_get_cma_area(dev);
-	unsigned long pfn;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
-
-	free_contig_range(pfn, count);
-	clear_cma_bitmap(cma, pfn, count);
-
-	return true;
+	return cma_release(dev_get_cma_area(dev), pages, count);
 }
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index dfb1dc9..ecb85ac 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -53,9 +53,10 @@
 
 #ifdef __KERNEL__
 
+#include <linux/device.h>
+
 struct cma;
 struct page;
-struct device;
 
 #ifdef CONFIG_DMA_CMA
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have a general CMA reserved area management framework,
use it for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,16 +283,6 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config DMA_CMA_AREAS
-	int "Maximum count of the CMA device-private areas"
-	default 7
-	help
-	  CMA allows to create CMA areas for particular devices. This parameter
-	  sets the maximum number of such device private CMA areas in the
-	  system.
-
-	  If unsure, leave the default value "7".
-
 endif
 
 endmenu
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 48cdac8..4bce4e1 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -24,23 +24,9 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/mutex.h>
-#include <linux/page-isolation.h>
 #include <linux/sizes.h>
-#include <linux/slab.h>
-#include <linux/swap.h>
-#include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
-
-struct cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-	struct mutex	lock;
-};
-
-struct cma *dma_contiguous_default_area;
+#include <linux/cma.h>
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -48,6 +34,8 @@ struct cma *dma_contiguous_default_area;
 #define CMA_SIZE_MBYTES 0
 #endif
 
+struct cma *dma_contiguous_default_area;
+
 /*
  * Default global CMA area size can be defined in kernel's .config.
  * This is useful mainly for distro maintainers to create a kernel
@@ -154,55 +142,6 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
 	}
 }
 
-static DEFINE_MUTEX(cma_mutex);
-
-static int __init cma_activate_area(struct cma *cma)
-{
-	int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
-	struct zone *zone;
-
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-
-	mutex_init(&cma->lock);
-	return 0;
-}
-
-static struct cma cma_areas[MAX_DMA_CMA_AREAS];
-static unsigned cma_area_count;
-
-static int __init cma_init_reserved_areas(void)
-{
-	int i;
-
-	for (i = 0; i < cma_area_count; i++) {
-		int ret = cma_activate_area(&cma_areas[i]);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-core_initcall(cma_init_reserved_areas);
-
 /**
  * dma_contiguous_reserve_area() - reserve custom contiguous area
  * @size: Size of the reserved area (in bytes),
@@ -224,176 +163,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
-	int ret = 0;
-
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
-
-	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	/* Reserve memory */
-	if (base && fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
-		}
-	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
-		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
-		}
-	}
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	*res_cma = cma;
-	cma_area_count++;
+	int ret;
+	struct cma *cma;
 
-	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
-		(unsigned long)base);
+	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, &cma);
+	if (ret)
+		return ret;
 
 	/* Architecture specific contiguous memory fixup. */
 	dma_contiguous_early_fixup(base, size);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return ret;
-}
+	*res_cma = cma;
 
-static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
-{
-	mutex_lock(&cma->lock);
-	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
-	mutex_unlock(&cma->lock);
+	return 0;
 }
 
-/**
- * dma_alloc_from_contiguous() - allocate pages from contiguous area
- * @dev:   Pointer to device for which the allocation is performed.
- * @count: Requested number of pages.
- * @align: Requested alignment of pages (in PAGE_SIZE order).
- *
- * This function allocates memory buffer for specified device. It uses
- * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
- */
 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
 				       unsigned int align)
 {
-	unsigned long mask, pfn, pageno, start = 0;
-	struct cma *cma = dev_get_cma_area(dev);
-	struct page *page = NULL;
-	int ret;
-
-	if (!cma || !cma->count)
-		return NULL;
-
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
-		 count, align);
-
-	if (!count)
-		return NULL;
-
-	mask = (1 << align) - 1;
-
-
-	for (;;) {
-		mutex_lock(&cma->lock);
-		pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
-						    start, count, mask);
-		if (pageno >= cma->count) {
-			mutex_unlock(&cma->lock);
-			break;
-		}
-		bitmap_set(cma->bitmap, pageno, count);
-		/*
-		 * It's safe to drop the lock here. We've marked this region for
-		 * our exclusive use. If the migration fails we will take the
-		 * lock again and unmark it.
-		 */
-		mutex_unlock(&cma->lock);
-
-		pfn = cma->base_pfn + pageno;
-		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
-			break;
-		} else if (ret != -EBUSY) {
-			clear_cma_bitmap(cma, pfn, count);
-			break;
-		}
-		clear_cma_bitmap(cma, pfn, count);
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
+	return cma_alloc(dev_get_cma_area(dev), count, align);
 }
 
-/**
- * dma_release_from_contiguous() - release allocated pages
- * @dev:   Pointer to device for which the pages were allocated.
- * @pages: Allocated pages.
- * @count: Number of allocated pages.
- *
- * This function releases memory allocated by dma_alloc_from_contiguous().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count)
 {
-	struct cma *cma = dev_get_cma_area(dev);
-	unsigned long pfn;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
-
-	free_contig_range(pfn, count);
-	clear_cma_bitmap(cma, pfn, count);
-
-	return true;
+	return cma_release(dev_get_cma_area(dev), pages, count);
 }
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index dfb1dc9..ecb85ac 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -53,9 +53,10 @@
 
 #ifdef __KERNEL__
 
+#include <linux/device.h>
+
 struct cma;
 struct page;
-struct device;
 
 #ifdef CONFIG_DMA_CMA
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it here for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
index b3fe1cc..4eac559 100644
--- a/drivers/base/Kconfig
+++ b/drivers/base/Kconfig
@@ -283,16 +283,6 @@ config CMA_ALIGNMENT
 
 	  If unsure, leave the default value "8".
 
-config DMA_CMA_AREAS
-	int "Maximum count of the CMA device-private areas"
-	default 7
-	help
-	  CMA allows to create CMA areas for particular devices. This parameter
-	  sets the maximum number of such device private CMA areas in the
-	  system.
-
-	  If unsure, leave the default value "7".
-
 endif
 
 endmenu
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index 48cdac8..4bce4e1 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -24,23 +24,9 @@
 
 #include <linux/memblock.h>
 #include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/mutex.h>
-#include <linux/page-isolation.h>
 #include <linux/sizes.h>
-#include <linux/slab.h>
-#include <linux/swap.h>
-#include <linux/mm_types.h>
 #include <linux/dma-contiguous.h>
-
-struct cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-	struct mutex	lock;
-};
-
-struct cma *dma_contiguous_default_area;
+#include <linux/cma.h>
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -48,6 +34,8 @@ struct cma *dma_contiguous_default_area;
 #define CMA_SIZE_MBYTES 0
 #endif
 
+struct cma *dma_contiguous_default_area;
+
 /*
  * Default global CMA area size can be defined in kernel's .config.
  * This is useful mainly for distro maintainers to create a kernel
@@ -154,55 +142,6 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
 	}
 }
 
-static DEFINE_MUTEX(cma_mutex);
-
-static int __init cma_activate_area(struct cma *cma)
-{
-	int bitmap_size = BITS_TO_LONGS(cma->count) * sizeof(long);
-	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
-	unsigned i = cma->count >> pageblock_order;
-	struct zone *zone;
-
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-
-	mutex_init(&cma->lock);
-	return 0;
-}
-
-static struct cma cma_areas[MAX_DMA_CMA_AREAS];
-static unsigned cma_area_count;
-
-static int __init cma_init_reserved_areas(void)
-{
-	int i;
-
-	for (i = 0; i < cma_area_count; i++) {
-		int ret = cma_activate_area(&cma_areas[i]);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-core_initcall(cma_init_reserved_areas);
-
 /**
  * dma_contiguous_reserve_area() - reserve custom contiguous area
  * @size: Size of the reserved area (in bytes),
@@ -224,176 +163,31 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
 				       phys_addr_t limit, struct cma **res_cma,
 				       bool fixed)
 {
-	struct cma *cma = &cma_areas[cma_area_count];
-	phys_addr_t alignment;
-	int ret = 0;
-
-	pr_debug("%s(size %lx, base %08lx, limit %08lx)\n", __func__,
-		 (unsigned long)size, (unsigned long)base,
-		 (unsigned long)limit);
-
-	/* Sanity checks */
-	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
-		pr_err("Not enough slots for CMA reserved regions!\n");
-		return -ENOSPC;
-	}
-
-	if (!size)
-		return -EINVAL;
-
-	/* Sanitise input arguments */
-	alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);
-	base = ALIGN(base, alignment);
-	size = ALIGN(size, alignment);
-	limit &= ~(alignment - 1);
-
-	/* Reserve memory */
-	if (base && fixed) {
-		if (memblock_is_region_reserved(base, size) ||
-		    memblock_reserve(base, size) < 0) {
-			ret = -EBUSY;
-			goto err;
-		}
-	} else {
-		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
-							limit);
-		if (!addr) {
-			ret = -ENOMEM;
-			goto err;
-		} else {
-			base = addr;
-		}
-	}
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = PFN_DOWN(base);
-	cma->count = size >> PAGE_SHIFT;
-	*res_cma = cma;
-	cma_area_count++;
+	int ret;
+	struct cma *cma;
 
-	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
-		(unsigned long)base);
+	ret = cma_declare_contiguous(size, base, limit, 0, 0, fixed, &cma);
+	if (ret)
+		return ret;
 
 	/* Architecture specific contiguous memory fixup. */
 	dma_contiguous_early_fixup(base, size);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return ret;
-}
+	*res_cma = cma;
 
-static void clear_cma_bitmap(struct cma *cma, unsigned long pfn, int count)
-{
-	mutex_lock(&cma->lock);
-	bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
-	mutex_unlock(&cma->lock);
+	return 0;
 }
 
-/**
- * dma_alloc_from_contiguous() - allocate pages from contiguous area
- * @dev:   Pointer to device for which the allocation is performed.
- * @count: Requested number of pages.
- * @align: Requested alignment of pages (in PAGE_SIZE order).
- *
- * This function allocates memory buffer for specified device. It uses
- * device specific contiguous memory area if available or the default
- * global one. Requires architecture specific dev_get_cma_area() helper
- * function.
- */
 struct page *dma_alloc_from_contiguous(struct device *dev, int count,
 				       unsigned int align)
 {
-	unsigned long mask, pfn, pageno, start = 0;
-	struct cma *cma = dev_get_cma_area(dev);
-	struct page *page = NULL;
-	int ret;
-
-	if (!cma || !cma->count)
-		return NULL;
-
 	if (align > CONFIG_CMA_ALIGNMENT)
 		align = CONFIG_CMA_ALIGNMENT;
 
-	pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
-		 count, align);
-
-	if (!count)
-		return NULL;
-
-	mask = (1 << align) - 1;
-
-
-	for (;;) {
-		mutex_lock(&cma->lock);
-		pageno = bitmap_find_next_zero_area(cma->bitmap, cma->count,
-						    start, count, mask);
-		if (pageno >= cma->count) {
-			mutex_unlock(&cma->lock);
-			break;
-		}
-		bitmap_set(cma->bitmap, pageno, count);
-		/*
-		 * It's safe to drop the lock here. We've marked this region for
-		 * our exclusive use. If the migration fails we will take the
-		 * lock again and unmark it.
-		 */
-		mutex_unlock(&cma->lock);
-
-		pfn = cma->base_pfn + pageno;
-		mutex_lock(&cma_mutex);
-		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
-		if (ret == 0) {
-			page = pfn_to_page(pfn);
-			break;
-		} else if (ret != -EBUSY) {
-			clear_cma_bitmap(cma, pfn, count);
-			break;
-		}
-		clear_cma_bitmap(cma, pfn, count);
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
+	return cma_alloc(dev_get_cma_area(dev), count, align);
 }
 
-/**
- * dma_release_from_contiguous() - release allocated pages
- * @dev:   Pointer to device for which the pages were allocated.
- * @pages: Allocated pages.
- * @count: Number of allocated pages.
- *
- * This function releases memory allocated by dma_alloc_from_contiguous().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
 bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 				 int count)
 {
-	struct cma *cma = dev_get_cma_area(dev);
-	unsigned long pfn;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
-
-	free_contig_range(pfn, count);
-	clear_cma_bitmap(cma, pfn, count);
-
-	return true;
+	return cma_release(dev_get_cma_area(dev), pages, count);
 }
diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index dfb1dc9..ecb85ac 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -53,9 +53,10 @@
 
 #ifdef __KERNEL__
 
+#include <linux/device.h>
+
 struct cma;
 struct page;
-struct device;
 
 #ifdef CONFIG_DMA_CMA
 
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
  2014-06-03  1:11 ` Joonsoo Kim
                     ` (2 preceding siblings ...)
  (?)
@ 2014-06-03  1:11   ` Joonsoo Kim
  -1 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it here for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
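
A short note on the chunk-size mapping used below, assuming 4 KiB pages
(PAGE_SHIFT == 12 is an assumption here, not something this patch states):

/*
 * Illustrative arithmetic only:
 *
 *	order_per_bit = KVM_CMA_CHUNK_ORDER - PAGE_SHIFT = 18 - 12 = 6
 *
 * so one bitmap bit in the generic allocator covers 1 << 6 = 64 pages,
 * i.e. one 256 KiB chunk, the same granularity the removed
 * book3s_hv_cma.c bitmap tracked per bit.
 */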

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -15,12 +15,14 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/sizes.h>
+#include <linux/cma.h>
 
 #include <asm/cputable.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
 
-#include "book3s_hv_cma.h"
+#define KVM_CMA_CHUNK_ORDER	18
+
 /*
  * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206)
  * should be power of 2.
@@ -42,6 +44,8 @@ static unsigned long kvm_cma_resv_ratio = 5;
 unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT;	/* 128MB */
 EXPORT_SYMBOL_GPL(kvm_rma_pages);
 
+static struct cma *kvm_cma;
+
 /* Work out RMLS (real mode limit selector) field value for a given RMA size.
    Assumes POWER7 or PPC970. */
 static inline int lpcr_rmls(unsigned long rma_size)
@@ -96,7 +100,7 @@ struct kvm_rma_info *kvm_alloc_rma()
 	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
 	if (!ri)
 		return NULL;
-	page = kvm_alloc_cma(kvm_rma_pages, kvm_rma_pages);
+	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
 	if (!page)
 		goto err_out;
 	atomic_set(&ri->use_count, 1);
@@ -111,7 +115,7 @@ EXPORT_SYMBOL_GPL(kvm_alloc_rma);
 void kvm_release_rma(struct kvm_rma_info *ri)
 {
 	if (atomic_dec_and_test(&ri->use_count)) {
-		kvm_release_cma(pfn_to_page(ri->base_pfn), kvm_rma_pages);
+		cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
 		kfree(ri);
 	}
 }
@@ -133,13 +137,13 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;
-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
 EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
 
 void kvm_release_hpt(struct page *page, unsigned long nr_pages)
 {
-	kvm_release_cma(page, nr_pages);
+	cma_release(kvm_cma, page, nr_pages);
 }
 EXPORT_SYMBOL_GPL(kvm_release_hpt);
 
@@ -178,6 +182,7 @@ void __init kvm_cma_reserve(void)
 			align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		kvm_cma_declare_contiguous(selected_size, align_size);
+		cma_declare_contiguous(selected_size, 0, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
diff --git a/arch/powerpc/kvm/book3s_hv_cma.c b/arch/powerpc/kvm/book3s_hv_cma.c
deleted file mode 100644
index d9d3d85..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.c
+++ /dev/null
@@ -1,240 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-#define pr_fmt(fmt) "kvm_cma: " fmt
-
-#ifdef CONFIG_CMA_DEBUG
-#ifndef DEBUG
-#  define DEBUG
-#endif
-#endif
-
-#include <linux/memblock.h>
-#include <linux/mutex.h>
-#include <linux/sizes.h>
-#include <linux/slab.h>
-
-#include "book3s_hv_cma.h"
-
-struct kvm_cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-};
-
-static DEFINE_MUTEX(kvm_cma_mutex);
-static struct kvm_cma kvm_cma_area;
-
-/**
- * kvm_cma_declare_contiguous() - reserve area for contiguous memory handling
- *			          for kvm hash pagetable
- * @size:  Size of the reserved memory.
- * @alignment:  Alignment for the contiguous memory area
- *
- * This function reserves memory for kvm cma area. It should be
- * called by arch code when early allocator (memblock or bootmem)
- * is still activate.
- */
-long __init kvm_cma_declare_contiguous(phys_addr_t size, phys_addr_t alignment)
-{
-	long base_pfn;
-	phys_addr_t addr;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s(size %lx)\n", __func__, (unsigned long)size);
-
-	if (!size)
-		return -EINVAL;
-	/*
-	 * Sanitise input arguments.
-	 * We should be pageblock aligned for CMA.
-	 */
-	alignment = max(alignment, (phys_addr_t)(PAGE_SIZE << pageblock_order));
-	size = ALIGN(size, alignment);
-	/*
-	 * Reserve memory
-	 * Use __memblock_alloc_base() since
-	 * memblock_alloc_base() panic()s.
-	 */
-	addr = __memblock_alloc_base(size, alignment, 0);
-	if (!addr) {
-		base_pfn = -ENOMEM;
-		goto err;
-	} else
-		base_pfn = PFN_DOWN(addr);
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = base_pfn;
-	cma->count    = size >> PAGE_SHIFT;
-	pr_info("CMA: reserved %ld MiB\n", (unsigned long)size / SZ_1M);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return base_pfn;
-}
-
-/**
- * kvm_alloc_cma() - allocate pages from contiguous area
- * @nr_pages: Requested number of pages.
- * @align_pages: Requested alignment in number of pages
- *
- * This function allocates memory buffer for hash pagetable.
- */
-struct page *kvm_alloc_cma(unsigned long nr_pages, unsigned long align_pages)
-{
-	int ret;
-	struct page *page = NULL;
-	struct kvm_cma *cma = &kvm_cma_area;
-	unsigned long chunk_count, nr_chunk;
-	unsigned long mask, pfn, pageno, start = 0;
-
-
-	if (!cma || !cma->count)
-		return NULL;
-
-	pr_debug("%s(cma %p, count %lu, align pages %lu)\n", __func__,
-		 (void *)cma, nr_pages, align_pages);
-
-	if (!nr_pages)
-		return NULL;
-	/*
-	 * align mask with chunk size. The bit tracks pages in chunk size
-	 */
-	VM_BUG_ON(!is_power_of_2(align_pages));
-	mask = (align_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT)) - 1;
-	BUILD_BUG_ON(PAGE_SHIFT > KVM_CMA_CHUNK_ORDER);
-
-	chunk_count = cma->count >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	nr_chunk = nr_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	for (;;) {
-		pageno = bitmap_find_next_zero_area(cma->bitmap, chunk_count,
-						    start, nr_chunk, mask);
-		if (pageno >= chunk_count)
-			break;
-
-		pfn = cma->base_pfn + (pageno << (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT));
-		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA);
-		if (ret == 0) {
-			bitmap_set(cma->bitmap, pageno, nr_chunk);
-			page = pfn_to_page(pfn);
-			memset(pfn_to_kaddr(pfn), 0, nr_pages << PAGE_SHIFT);
-			break;
-		} else if (ret != -EBUSY) {
-			break;
-		}
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-	mutex_unlock(&kvm_cma_mutex);
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
-}
-
-/**
- * kvm_release_cma() - release allocated pages for hash pagetable
- * @pages: Allocated pages.
- * @nr_pages: Number of allocated pages.
- *
- * This function releases memory allocated by kvm_alloc_cma().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
-bool kvm_release_cma(struct page *pages, unsigned long nr_pages)
-{
-	unsigned long pfn;
-	unsigned long nr_chunk;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p count %lu)\n", __func__, (void *)pages, nr_pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + nr_pages > cma->base_pfn + cma->count);
-	nr_chunk = nr_pages >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	bitmap_clear(cma->bitmap,
-		     (pfn - cma->base_pfn) >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT),
-		     nr_chunk);
-	free_contig_range(pfn, nr_pages);
-	mutex_unlock(&kvm_cma_mutex);
-
-	return true;
-}
-
-static int __init kvm_cma_activate_area(unsigned long base_pfn,
-					unsigned long count)
-{
-	unsigned long pfn = base_pfn;
-	unsigned i = count >> pageblock_order;
-	struct zone *zone;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-	return 0;
-}
-
-static int __init kvm_cma_init_reserved_areas(void)
-{
-	int bitmap_size, ret;
-	unsigned long chunk_count;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s()\n", __func__);
-	if (!cma->count)
-		return 0;
-	chunk_count = cma->count >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	bitmap_size = BITS_TO_LONGS(chunk_count) * sizeof(long);
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	ret = kvm_cma_activate_area(cma->base_pfn, cma->count);
-	if (ret)
-		goto error;
-	return 0;
-
-error:
-	kfree(cma->bitmap);
-	return ret;
-}
-core_initcall(kvm_cma_init_reserved_areas);
diff --git a/arch/powerpc/kvm/book3s_hv_cma.h b/arch/powerpc/kvm/book3s_hv_cma.h
deleted file mode 100644
index 655144f..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-
-#ifndef __POWERPC_KVM_CMA_ALLOC_H__
-#define __POWERPC_KVM_CMA_ALLOC_H__
-/*
- * Both RMA and Hash page allocation will be multiple of 256K.
- */
-#define KVM_CMA_CHUNK_ORDER	18
-
-extern struct page *kvm_alloc_cma(unsigned long nr_pages,
-				  unsigned long align_pages);
-extern bool kvm_release_cma(struct page *pages, unsigned long nr_pages);
-extern long kvm_cma_declare_contiguous(phys_addr_t size,
-				       phys_addr_t alignment) __init;
-#endif
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it here for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -15,12 +15,14 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/sizes.h>
+#include <linux/cma.h>
 
 #include <asm/cputable.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
 
-#include "book3s_hv_cma.h"
+#define KVM_CMA_CHUNK_ORDER	18
+
 /*
  * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206)
  * should be power of 2.
@@ -42,6 +44,8 @@ static unsigned long kvm_cma_resv_ratio = 5;
 unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT;	/* 128MB */
 EXPORT_SYMBOL_GPL(kvm_rma_pages);
 
+static struct cma *kvm_cma;
+
 /* Work out RMLS (real mode limit selector) field value for a given RMA size.
    Assumes POWER7 or PPC970. */
 static inline int lpcr_rmls(unsigned long rma_size)
@@ -96,7 +100,7 @@ struct kvm_rma_info *kvm_alloc_rma()
 	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
 	if (!ri)
 		return NULL;
-	page = kvm_alloc_cma(kvm_rma_pages, kvm_rma_pages);
+	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
 	if (!page)
 		goto err_out;
 	atomic_set(&ri->use_count, 1);
@@ -111,7 +115,7 @@ EXPORT_SYMBOL_GPL(kvm_alloc_rma);
 void kvm_release_rma(struct kvm_rma_info *ri)
 {
 	if (atomic_dec_and_test(&ri->use_count)) {
-		kvm_release_cma(pfn_to_page(ri->base_pfn), kvm_rma_pages);
+		cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
 		kfree(ri);
 	}
 }
@@ -133,13 +137,13 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;
-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
 EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
 
 void kvm_release_hpt(struct page *page, unsigned long nr_pages)
 {
-	kvm_release_cma(page, nr_pages);
+	cma_release(kvm_cma, page, nr_pages);
 }
 EXPORT_SYMBOL_GPL(kvm_release_hpt);
 
@@ -178,6 +182,7 @@ void __init kvm_cma_reserve(void)
 			align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		kvm_cma_declare_contiguous(selected_size, align_size);
+		cma_declare_contiguous(selected_size, 0, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
diff --git a/arch/powerpc/kvm/book3s_hv_cma.c b/arch/powerpc/kvm/book3s_hv_cma.c
deleted file mode 100644
index d9d3d85..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.c
+++ /dev/null
@@ -1,240 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-#define pr_fmt(fmt) "kvm_cma: " fmt
-
-#ifdef CONFIG_CMA_DEBUG
-#ifndef DEBUG
-#  define DEBUG
-#endif
-#endif
-
-#include <linux/memblock.h>
-#include <linux/mutex.h>
-#include <linux/sizes.h>
-#include <linux/slab.h>
-
-#include "book3s_hv_cma.h"
-
-struct kvm_cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-};
-
-static DEFINE_MUTEX(kvm_cma_mutex);
-static struct kvm_cma kvm_cma_area;
-
-/**
- * kvm_cma_declare_contiguous() - reserve area for contiguous memory handling
- *			          for kvm hash pagetable
- * @size:  Size of the reserved memory.
- * @alignment:  Alignment for the contiguous memory area
- *
- * This function reserves memory for kvm cma area. It should be
- * called by arch code when early allocator (memblock or bootmem)
- * is still activate.
- */
-long __init kvm_cma_declare_contiguous(phys_addr_t size, phys_addr_t alignment)
-{
-	long base_pfn;
-	phys_addr_t addr;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s(size %lx)\n", __func__, (unsigned long)size);
-
-	if (!size)
-		return -EINVAL;
-	/*
-	 * Sanitise input arguments.
-	 * We should be pageblock aligned for CMA.
-	 */
-	alignment = max(alignment, (phys_addr_t)(PAGE_SIZE << pageblock_order));
-	size = ALIGN(size, alignment);
-	/*
-	 * Reserve memory
-	 * Use __memblock_alloc_base() since
-	 * memblock_alloc_base() panic()s.
-	 */
-	addr = __memblock_alloc_base(size, alignment, 0);
-	if (!addr) {
-		base_pfn = -ENOMEM;
-		goto err;
-	} else
-		base_pfn = PFN_DOWN(addr);
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = base_pfn;
-	cma->count    = size >> PAGE_SHIFT;
-	pr_info("CMA: reserved %ld MiB\n", (unsigned long)size / SZ_1M);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return base_pfn;
-}
-
-/**
- * kvm_alloc_cma() - allocate pages from contiguous area
- * @nr_pages: Requested number of pages.
- * @align_pages: Requested alignment in number of pages
- *
- * This function allocates memory buffer for hash pagetable.
- */
-struct page *kvm_alloc_cma(unsigned long nr_pages, unsigned long align_pages)
-{
-	int ret;
-	struct page *page = NULL;
-	struct kvm_cma *cma = &kvm_cma_area;
-	unsigned long chunk_count, nr_chunk;
-	unsigned long mask, pfn, pageno, start = 0;
-
-
-	if (!cma || !cma->count)
-		return NULL;
-
-	pr_debug("%s(cma %p, count %lu, align pages %lu)\n", __func__,
-		 (void *)cma, nr_pages, align_pages);
-
-	if (!nr_pages)
-		return NULL;
-	/*
-	 * align mask with chunk size. The bit tracks pages in chunk size
-	 */
-	VM_BUG_ON(!is_power_of_2(align_pages));
-	mask = (align_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT)) - 1;
-	BUILD_BUG_ON(PAGE_SHIFT > KVM_CMA_CHUNK_ORDER);
-
-	chunk_count = cma->count >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	nr_chunk = nr_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	for (;;) {
-		pageno = bitmap_find_next_zero_area(cma->bitmap, chunk_count,
-						    start, nr_chunk, mask);
-		if (pageno >= chunk_count)
-			break;
-
-		pfn = cma->base_pfn + (pageno << (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT));
-		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA);
-		if (ret == 0) {
-			bitmap_set(cma->bitmap, pageno, nr_chunk);
-			page = pfn_to_page(pfn);
-			memset(pfn_to_kaddr(pfn), 0, nr_pages << PAGE_SHIFT);
-			break;
-		} else if (ret != -EBUSY) {
-			break;
-		}
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-	mutex_unlock(&kvm_cma_mutex);
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
-}
-
-/**
- * kvm_release_cma() - release allocated pages for hash pagetable
- * @pages: Allocated pages.
- * @nr_pages: Number of allocated pages.
- *
- * This function releases memory allocated by kvm_alloc_cma().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
-bool kvm_release_cma(struct page *pages, unsigned long nr_pages)
-{
-	unsigned long pfn;
-	unsigned long nr_chunk;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p count %lu)\n", __func__, (void *)pages, nr_pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + nr_pages > cma->base_pfn + cma->count);
-	nr_chunk = nr_pages >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	bitmap_clear(cma->bitmap,
-		     (pfn - cma->base_pfn) >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT),
-		     nr_chunk);
-	free_contig_range(pfn, nr_pages);
-	mutex_unlock(&kvm_cma_mutex);
-
-	return true;
-}
-
-static int __init kvm_cma_activate_area(unsigned long base_pfn,
-					unsigned long count)
-{
-	unsigned long pfn = base_pfn;
-	unsigned i = count >> pageblock_order;
-	struct zone *zone;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-	return 0;
-}
-
-static int __init kvm_cma_init_reserved_areas(void)
-{
-	int bitmap_size, ret;
-	unsigned long chunk_count;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s()\n", __func__);
-	if (!cma->count)
-		return 0;
-	chunk_count = cma->count >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	bitmap_size = BITS_TO_LONGS(chunk_count) * sizeof(long);
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	ret = kvm_cma_activate_area(cma->base_pfn, cma->count);
-	if (ret)
-		goto error;
-	return 0;
-
-error:
-	kfree(cma->bitmap);
-	return ret;
-}
-core_initcall(kvm_cma_init_reserved_areas);
diff --git a/arch/powerpc/kvm/book3s_hv_cma.h b/arch/powerpc/kvm/book3s_hv_cma.h
deleted file mode 100644
index 655144f..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-
-#ifndef __POWERPC_KVM_CMA_ALLOC_H__
-#define __POWERPC_KVM_CMA_ALLOC_H__
-/*
- * Both RMA and Hash page allocation will be multiple of 256K.
- */
-#define KVM_CMA_CHUNK_ORDER	18
-
-extern struct page *kvm_alloc_cma(unsigned long nr_pages,
-				  unsigned long align_pages);
-extern bool kvm_release_cma(struct page *pages, unsigned long nr_pages);
-extern long kvm_cma_declare_contiguous(phys_addr_t size,
-				       phys_addr_t alignment) __init;
-#endif
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Russell King - ARM Linux, kvm, linux-mm, Gleb Natapov,
	Greg Kroah-Hartman, Alexander Graf, kvm-ppc, linux-kernel,
	Minchan Kim, Paul Mackerras, Paolo Bonzini, Joonsoo Kim,
	linuxppc-dev, linux-arm-kernel

Now that we have a general CMA reserved area management framework,
use it here for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -15,12 +15,14 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/sizes.h>
+#include <linux/cma.h>
 
 #include <asm/cputable.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
 
-#include "book3s_hv_cma.h"
+#define KVM_CMA_CHUNK_ORDER	18
+
 /*
  * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206)
  * should be power of 2.
@@ -42,6 +44,8 @@ static unsigned long kvm_cma_resv_ratio = 5;
 unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT;	/* 128MB */
 EXPORT_SYMBOL_GPL(kvm_rma_pages);
 
+static struct cma *kvm_cma;
+
 /* Work out RMLS (real mode limit selector) field value for a given RMA size.
    Assumes POWER7 or PPC970. */
 static inline int lpcr_rmls(unsigned long rma_size)
@@ -96,7 +100,7 @@ struct kvm_rma_info *kvm_alloc_rma()
 	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
 	if (!ri)
 		return NULL;
-	page = kvm_alloc_cma(kvm_rma_pages, kvm_rma_pages);
+	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
 	if (!page)
 		goto err_out;
 	atomic_set(&ri->use_count, 1);
@@ -111,7 +115,7 @@ EXPORT_SYMBOL_GPL(kvm_alloc_rma);
 void kvm_release_rma(struct kvm_rma_info *ri)
 {
 	if (atomic_dec_and_test(&ri->use_count)) {
-		kvm_release_cma(pfn_to_page(ri->base_pfn), kvm_rma_pages);
+		cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
 		kfree(ri);
 	}
 }
@@ -133,13 +137,13 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;
-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
 EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
 
 void kvm_release_hpt(struct page *page, unsigned long nr_pages)
 {
-	kvm_release_cma(page, nr_pages);
+	cma_release(kvm_cma, page, nr_pages);
 }
 EXPORT_SYMBOL_GPL(kvm_release_hpt);
 
@@ -178,6 +182,7 @@ void __init kvm_cma_reserve(void)
 			align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		kvm_cma_declare_contiguous(selected_size, align_size);
+		cma_declare_contiguous(selected_size, 0, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
diff --git a/arch/powerpc/kvm/book3s_hv_cma.c b/arch/powerpc/kvm/book3s_hv_cma.c
deleted file mode 100644
index d9d3d85..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.c
+++ /dev/null
@@ -1,240 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-#define pr_fmt(fmt) "kvm_cma: " fmt
-
-#ifdef CONFIG_CMA_DEBUG
-#ifndef DEBUG
-#  define DEBUG
-#endif
-#endif
-
-#include <linux/memblock.h>
-#include <linux/mutex.h>
-#include <linux/sizes.h>
-#include <linux/slab.h>
-
-#include "book3s_hv_cma.h"
-
-struct kvm_cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-};
-
-static DEFINE_MUTEX(kvm_cma_mutex);
-static struct kvm_cma kvm_cma_area;
-
-/**
- * kvm_cma_declare_contiguous() - reserve area for contiguous memory handling
- *			          for kvm hash pagetable
- * @size:  Size of the reserved memory.
- * @alignment:  Alignment for the contiguous memory area
- *
- * This function reserves memory for kvm cma area. It should be
- * called by arch code when early allocator (memblock or bootmem)
- * is still activate.
- */
-long __init kvm_cma_declare_contiguous(phys_addr_t size, phys_addr_t alignment)
-{
-	long base_pfn;
-	phys_addr_t addr;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s(size %lx)\n", __func__, (unsigned long)size);
-
-	if (!size)
-		return -EINVAL;
-	/*
-	 * Sanitise input arguments.
-	 * We should be pageblock aligned for CMA.
-	 */
-	alignment = max(alignment, (phys_addr_t)(PAGE_SIZE << pageblock_order));
-	size = ALIGN(size, alignment);
-	/*
-	 * Reserve memory
-	 * Use __memblock_alloc_base() since
-	 * memblock_alloc_base() panic()s.
-	 */
-	addr = __memblock_alloc_base(size, alignment, 0);
-	if (!addr) {
-		base_pfn = -ENOMEM;
-		goto err;
-	} else
-		base_pfn = PFN_DOWN(addr);
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = base_pfn;
-	cma->count    = size >> PAGE_SHIFT;
-	pr_info("CMA: reserved %ld MiB\n", (unsigned long)size / SZ_1M);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return base_pfn;
-}
-
-/**
- * kvm_alloc_cma() - allocate pages from contiguous area
- * @nr_pages: Requested number of pages.
- * @align_pages: Requested alignment in number of pages
- *
- * This function allocates memory buffer for hash pagetable.
- */
-struct page *kvm_alloc_cma(unsigned long nr_pages, unsigned long align_pages)
-{
-	int ret;
-	struct page *page = NULL;
-	struct kvm_cma *cma = &kvm_cma_area;
-	unsigned long chunk_count, nr_chunk;
-	unsigned long mask, pfn, pageno, start = 0;
-
-
-	if (!cma || !cma->count)
-		return NULL;
-
-	pr_debug("%s(cma %p, count %lu, align pages %lu)\n", __func__,
-		 (void *)cma, nr_pages, align_pages);
-
-	if (!nr_pages)
-		return NULL;
-	/*
-	 * align mask with chunk size. The bit tracks pages in chunk size
-	 */
-	VM_BUG_ON(!is_power_of_2(align_pages));
-	mask = (align_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT)) - 1;
-	BUILD_BUG_ON(PAGE_SHIFT > KVM_CMA_CHUNK_ORDER);
-
-	chunk_count = cma->count >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	nr_chunk = nr_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	for (;;) {
-		pageno = bitmap_find_next_zero_area(cma->bitmap, chunk_count,
-						    start, nr_chunk, mask);
-		if (pageno >= chunk_count)
-			break;
-
-		pfn = cma->base_pfn + (pageno << (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT));
-		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA);
-		if (ret == 0) {
-			bitmap_set(cma->bitmap, pageno, nr_chunk);
-			page = pfn_to_page(pfn);
-			memset(pfn_to_kaddr(pfn), 0, nr_pages << PAGE_SHIFT);
-			break;
-		} else if (ret != -EBUSY) {
-			break;
-		}
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-	mutex_unlock(&kvm_cma_mutex);
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
-}
-
-/**
- * kvm_release_cma() - release allocated pages for hash pagetable
- * @pages: Allocated pages.
- * @nr_pages: Number of allocated pages.
- *
- * This function releases memory allocated by kvm_alloc_cma().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
-bool kvm_release_cma(struct page *pages, unsigned long nr_pages)
-{
-	unsigned long pfn;
-	unsigned long nr_chunk;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p count %lu)\n", __func__, (void *)pages, nr_pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + nr_pages > cma->base_pfn + cma->count);
-	nr_chunk = nr_pages >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	bitmap_clear(cma->bitmap,
-		     (pfn - cma->base_pfn) >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT),
-		     nr_chunk);
-	free_contig_range(pfn, nr_pages);
-	mutex_unlock(&kvm_cma_mutex);
-
-	return true;
-}
-
-static int __init kvm_cma_activate_area(unsigned long base_pfn,
-					unsigned long count)
-{
-	unsigned long pfn = base_pfn;
-	unsigned i = count >> pageblock_order;
-	struct zone *zone;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-	return 0;
-}
-
-static int __init kvm_cma_init_reserved_areas(void)
-{
-	int bitmap_size, ret;
-	unsigned long chunk_count;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s()\n", __func__);
-	if (!cma->count)
-		return 0;
-	chunk_count = cma->count >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	bitmap_size = BITS_TO_LONGS(chunk_count) * sizeof(long);
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	ret = kvm_cma_activate_area(cma->base_pfn, cma->count);
-	if (ret)
-		goto error;
-	return 0;
-
-error:
-	kfree(cma->bitmap);
-	return ret;
-}
-core_initcall(kvm_cma_init_reserved_areas);
diff --git a/arch/powerpc/kvm/book3s_hv_cma.h b/arch/powerpc/kvm/book3s_hv_cma.h
deleted file mode 100644
index 655144f..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-
-#ifndef __POWERPC_KVM_CMA_ALLOC_H__
-#define __POWERPC_KVM_CMA_ALLOC_H__
-/*
- * Both RMA and Hash page allocation will be multiple of 256K.
- */
-#define KVM_CMA_CHUNK_ORDER	18
-
-extern struct page *kvm_alloc_cma(unsigned long nr_pages,
-				  unsigned long align_pages);
-extern bool kvm_release_cma(struct page *pages, unsigned long nr_pages);
-extern long kvm_cma_declare_contiguous(phys_addr_t size,
-				       phys_addr_t alignment) __init;
-#endif
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have a general CMA reserved area management framework,
use it here for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -15,12 +15,14 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/sizes.h>
+#include <linux/cma.h>
 
 #include <asm/cputable.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
 
-#include "book3s_hv_cma.h"
+#define KVM_CMA_CHUNK_ORDER	18
+
 /*
  * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206)
  * should be power of 2.
@@ -42,6 +44,8 @@ static unsigned long kvm_cma_resv_ratio = 5;
 unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT;	/* 128MB */
 EXPORT_SYMBOL_GPL(kvm_rma_pages);
 
+static struct cma *kvm_cma;
+
 /* Work out RMLS (real mode limit selector) field value for a given RMA size.
    Assumes POWER7 or PPC970. */
 static inline int lpcr_rmls(unsigned long rma_size)
@@ -96,7 +100,7 @@ struct kvm_rma_info *kvm_alloc_rma()
 	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
 	if (!ri)
 		return NULL;
-	page = kvm_alloc_cma(kvm_rma_pages, kvm_rma_pages);
+	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
 	if (!page)
 		goto err_out;
 	atomic_set(&ri->use_count, 1);
@@ -111,7 +115,7 @@ EXPORT_SYMBOL_GPL(kvm_alloc_rma);
 void kvm_release_rma(struct kvm_rma_info *ri)
 {
 	if (atomic_dec_and_test(&ri->use_count)) {
-		kvm_release_cma(pfn_to_page(ri->base_pfn), kvm_rma_pages);
+		cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
 		kfree(ri);
 	}
 }
@@ -133,13 +137,13 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;
-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
 EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
 
 void kvm_release_hpt(struct page *page, unsigned long nr_pages)
 {
-	kvm_release_cma(page, nr_pages);
+	cma_release(kvm_cma, page, nr_pages);
 }
 EXPORT_SYMBOL_GPL(kvm_release_hpt);
 
@@ -178,6 +182,7 @@ void __init kvm_cma_reserve(void)
 			align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		kvm_cma_declare_contiguous(selected_size, align_size);
+		cma_declare_contiguous(selected_size, 0, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
diff --git a/arch/powerpc/kvm/book3s_hv_cma.c b/arch/powerpc/kvm/book3s_hv_cma.c
deleted file mode 100644
index d9d3d85..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.c
+++ /dev/null
@@ -1,240 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-#define pr_fmt(fmt) "kvm_cma: " fmt
-
-#ifdef CONFIG_CMA_DEBUG
-#ifndef DEBUG
-#  define DEBUG
-#endif
-#endif
-
-#include <linux/memblock.h>
-#include <linux/mutex.h>
-#include <linux/sizes.h>
-#include <linux/slab.h>
-
-#include "book3s_hv_cma.h"
-
-struct kvm_cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-};
-
-static DEFINE_MUTEX(kvm_cma_mutex);
-static struct kvm_cma kvm_cma_area;
-
-/**
- * kvm_cma_declare_contiguous() - reserve area for contiguous memory handling
- *			          for kvm hash pagetable
- * @size:  Size of the reserved memory.
- * @alignment:  Alignment for the contiguous memory area
- *
- * This function reserves memory for kvm cma area. It should be
- * called by arch code when early allocator (memblock or bootmem)
- * is still activate.
- */
-long __init kvm_cma_declare_contiguous(phys_addr_t size, phys_addr_t alignment)
-{
-	long base_pfn;
-	phys_addr_t addr;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s(size %lx)\n", __func__, (unsigned long)size);
-
-	if (!size)
-		return -EINVAL;
-	/*
-	 * Sanitise input arguments.
-	 * We should be pageblock aligned for CMA.
-	 */
-	alignment = max(alignment, (phys_addr_t)(PAGE_SIZE << pageblock_order));
-	size = ALIGN(size, alignment);
-	/*
-	 * Reserve memory
-	 * Use __memblock_alloc_base() since
-	 * memblock_alloc_base() panic()s.
-	 */
-	addr = __memblock_alloc_base(size, alignment, 0);
-	if (!addr) {
-		base_pfn = -ENOMEM;
-		goto err;
-	} else
-		base_pfn = PFN_DOWN(addr);
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = base_pfn;
-	cma->count    = size >> PAGE_SHIFT;
-	pr_info("CMA: reserved %ld MiB\n", (unsigned long)size / SZ_1M);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return base_pfn;
-}
-
-/**
- * kvm_alloc_cma() - allocate pages from contiguous area
- * @nr_pages: Requested number of pages.
- * @align_pages: Requested alignment in number of pages
- *
- * This function allocates memory buffer for hash pagetable.
- */
-struct page *kvm_alloc_cma(unsigned long nr_pages, unsigned long align_pages)
-{
-	int ret;
-	struct page *page = NULL;
-	struct kvm_cma *cma = &kvm_cma_area;
-	unsigned long chunk_count, nr_chunk;
-	unsigned long mask, pfn, pageno, start = 0;
-
-
-	if (!cma || !cma->count)
-		return NULL;
-
-	pr_debug("%s(cma %p, count %lu, align pages %lu)\n", __func__,
-		 (void *)cma, nr_pages, align_pages);
-
-	if (!nr_pages)
-		return NULL;
-	/*
-	 * align mask with chunk size. The bit tracks pages in chunk size
-	 */
-	VM_BUG_ON(!is_power_of_2(align_pages));
-	mask = (align_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT)) - 1;
-	BUILD_BUG_ON(PAGE_SHIFT > KVM_CMA_CHUNK_ORDER);
-
-	chunk_count = cma->count >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	nr_chunk = nr_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	for (;;) {
-		pageno = bitmap_find_next_zero_area(cma->bitmap, chunk_count,
-						    start, nr_chunk, mask);
-		if (pageno >= chunk_count)
-			break;
-
-		pfn = cma->base_pfn + (pageno << (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT));
-		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA);
-		if (ret == 0) {
-			bitmap_set(cma->bitmap, pageno, nr_chunk);
-			page = pfn_to_page(pfn);
-			memset(pfn_to_kaddr(pfn), 0, nr_pages << PAGE_SHIFT);
-			break;
-		} else if (ret != -EBUSY) {
-			break;
-		}
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-	mutex_unlock(&kvm_cma_mutex);
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
-}
-
-/**
- * kvm_release_cma() - release allocated pages for hash pagetable
- * @pages: Allocated pages.
- * @nr_pages: Number of allocated pages.
- *
- * This function releases memory allocated by kvm_alloc_cma().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
-bool kvm_release_cma(struct page *pages, unsigned long nr_pages)
-{
-	unsigned long pfn;
-	unsigned long nr_chunk;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p count %lu)\n", __func__, (void *)pages, nr_pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + nr_pages > cma->base_pfn + cma->count);
-	nr_chunk = nr_pages >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	bitmap_clear(cma->bitmap,
-		     (pfn - cma->base_pfn) >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT),
-		     nr_chunk);
-	free_contig_range(pfn, nr_pages);
-	mutex_unlock(&kvm_cma_mutex);
-
-	return true;
-}
-
-static int __init kvm_cma_activate_area(unsigned long base_pfn,
-					unsigned long count)
-{
-	unsigned long pfn = base_pfn;
-	unsigned i = count >> pageblock_order;
-	struct zone *zone;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-	return 0;
-}
-
-static int __init kvm_cma_init_reserved_areas(void)
-{
-	int bitmap_size, ret;
-	unsigned long chunk_count;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s()\n", __func__);
-	if (!cma->count)
-		return 0;
-	chunk_count = cma->count >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	bitmap_size = BITS_TO_LONGS(chunk_count) * sizeof(long);
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	ret = kvm_cma_activate_area(cma->base_pfn, cma->count);
-	if (ret)
-		goto error;
-	return 0;
-
-error:
-	kfree(cma->bitmap);
-	return ret;
-}
-core_initcall(kvm_cma_init_reserved_areas);
diff --git a/arch/powerpc/kvm/book3s_hv_cma.h b/arch/powerpc/kvm/book3s_hv_cma.h
deleted file mode 100644
index 655144f..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-
-#ifndef __POWERPC_KVM_CMA_ALLOC_H__
-#define __POWERPC_KVM_CMA_ALLOC_H__
-/*
- * Both RMA and Hash page allocation will be multiple of 256K.
- */
-#define KVM_CMA_CHUNK_ORDER	18
-
-extern struct page *kvm_alloc_cma(unsigned long nr_pages,
-				  unsigned long align_pages);
-extern bool kvm_release_cma(struct page *pages, unsigned long nr_pages);
-extern long kvm_cma_declare_contiguous(phys_addr_t size,
-				       phys_addr_t alignment) __init;
-#endif
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 74+ messages in thread

* [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
@ 2014-06-03  1:11   ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-03  1:11 UTC (permalink / raw)
  To: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Now that we have a general CMA reserved area management framework,
use it for future maintainability. There is no functional change.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
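
For readers coming to this patch first, here is a minimal usage sketch of
the generic interface from patch 1/3 that this patch converts to.  It is
illustrative only: the my_* names are made up, and only
cma_declare_contiguous(), cma_alloc() and cma_release() are taken from the
patchset, with the signatures patch 1/3 introduces.

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/printk.h>
#include <linux/sizes.h>

static struct cma *my_cma;

/* Early boot, while memblock is still active: reserve 16MB anywhere,
 * tracking the area with one bitmap bit per page (bitmap_shift = 0). */
void __init my_cma_reserve(void)
{
	if (cma_declare_contiguous(SZ_16M, 0, 0, 0, 0, false, &my_cma))
		pr_warn("my_cma: reservation failed\n");
}

/* Later, once the slab allocator is up, buffers come straight from
 * the reserved area. */
struct page *my_alloc_buf(unsigned long nr_pages)
{
	return cma_alloc(my_cma, nr_pages, 0);	/* order-0, no extra alignment */
}

void my_free_buf(struct page *page, unsigned long nr_pages)
{
	cma_release(my_cma, page, nr_pages);
}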

diff --git a/arch/powerpc/kvm/book3s_hv_builtin.c b/arch/powerpc/kvm/book3s_hv_builtin.c
index 8cd0dae..43c3f81 100644
--- a/arch/powerpc/kvm/book3s_hv_builtin.c
+++ b/arch/powerpc/kvm/book3s_hv_builtin.c
@@ -15,12 +15,14 @@
 #include <linux/init.h>
 #include <linux/memblock.h>
 #include <linux/sizes.h>
+#include <linux/cma.h>
 
 #include <asm/cputable.h>
 #include <asm/kvm_ppc.h>
 #include <asm/kvm_book3s.h>
 
-#include "book3s_hv_cma.h"
+#define KVM_CMA_CHUNK_ORDER	18
+
 /*
  * Hash page table alignment on newer cpus(CPU_FTR_ARCH_206)
  * should be power of 2.
@@ -42,6 +44,8 @@ static unsigned long kvm_cma_resv_ratio = 5;
 unsigned long kvm_rma_pages = (1 << 27) >> PAGE_SHIFT;	/* 128MB */
 EXPORT_SYMBOL_GPL(kvm_rma_pages);
 
+static struct cma *kvm_cma;
+
 /* Work out RMLS (real mode limit selector) field value for a given RMA size.
    Assumes POWER7 or PPC970. */
 static inline int lpcr_rmls(unsigned long rma_size)
@@ -96,7 +100,7 @@ struct kvm_rma_info *kvm_alloc_rma()
 	ri = kmalloc(sizeof(struct kvm_rma_info), GFP_KERNEL);
 	if (!ri)
 		return NULL;
-	page = kvm_alloc_cma(kvm_rma_pages, kvm_rma_pages);
+	page = cma_alloc(kvm_cma, kvm_rma_pages, get_order(kvm_rma_pages));
 	if (!page)
 		goto err_out;
 	atomic_set(&ri->use_count, 1);
@@ -111,7 +115,7 @@ EXPORT_SYMBOL_GPL(kvm_alloc_rma);
 void kvm_release_rma(struct kvm_rma_info *ri)
 {
 	if (atomic_dec_and_test(&ri->use_count)) {
-		kvm_release_cma(pfn_to_page(ri->base_pfn), kvm_rma_pages);
+		cma_release(kvm_cma, pfn_to_page(ri->base_pfn), kvm_rma_pages);
 		kfree(ri);
 	}
 }
@@ -133,13 +137,13 @@ struct page *kvm_alloc_hpt(unsigned long nr_pages)
 	/* Old CPUs require HPT aligned on a multiple of its size */
 	if (!cpu_has_feature(CPU_FTR_ARCH_206))
 		align_pages = nr_pages;
-	return kvm_alloc_cma(nr_pages, align_pages);
+	return cma_alloc(kvm_cma, nr_pages, get_order(align_pages));
 }
 EXPORT_SYMBOL_GPL(kvm_alloc_hpt);
 
 void kvm_release_hpt(struct page *page, unsigned long nr_pages)
 {
-	kvm_release_cma(page, nr_pages);
+	cma_release(kvm_cma, page, nr_pages);
 }
 EXPORT_SYMBOL_GPL(kvm_release_hpt);
 
@@ -178,6 +182,7 @@ void __init kvm_cma_reserve(void)
 			align_size = HPT_ALIGN_PAGES << PAGE_SHIFT;
 
 		align_size = max(kvm_rma_pages << PAGE_SHIFT, align_size);
-		kvm_cma_declare_contiguous(selected_size, align_size);
+		cma_declare_contiguous(selected_size, 0, 0, align_size,
+			KVM_CMA_CHUNK_ORDER - PAGE_SHIFT, false, &kvm_cma);
 	}
 }
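
(Side note, illustrative arithmetic only: with 4KB pages, PAGE_SHIFT = 12,
so the bitmap_shift passed here is 18 - 12 = 6 and each bitmap bit covers
2^6 = 64 pages, i.e. one 256KB chunk -- the same "one bit represents 64
pages" granularity the old KVM-specific allocator used, as described in
the cover letter.)
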
diff --git a/arch/powerpc/kvm/book3s_hv_cma.c b/arch/powerpc/kvm/book3s_hv_cma.c
deleted file mode 100644
index d9d3d85..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.c
+++ /dev/null
@@ -1,240 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-#define pr_fmt(fmt) "kvm_cma: " fmt
-
-#ifdef CONFIG_CMA_DEBUG
-#ifndef DEBUG
-#  define DEBUG
-#endif
-#endif
-
-#include <linux/memblock.h>
-#include <linux/mutex.h>
-#include <linux/sizes.h>
-#include <linux/slab.h>
-
-#include "book3s_hv_cma.h"
-
-struct kvm_cma {
-	unsigned long	base_pfn;
-	unsigned long	count;
-	unsigned long	*bitmap;
-};
-
-static DEFINE_MUTEX(kvm_cma_mutex);
-static struct kvm_cma kvm_cma_area;
-
-/**
- * kvm_cma_declare_contiguous() - reserve area for contiguous memory handling
- *			          for kvm hash pagetable
- * @size:  Size of the reserved memory.
- * @alignment:  Alignment for the contiguous memory area
- *
- * This function reserves memory for kvm cma area. It should be
- * called by arch code when early allocator (memblock or bootmem)
- * is still activate.
- */
-long __init kvm_cma_declare_contiguous(phys_addr_t size, phys_addr_t alignment)
-{
-	long base_pfn;
-	phys_addr_t addr;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s(size %lx)\n", __func__, (unsigned long)size);
-
-	if (!size)
-		return -EINVAL;
-	/*
-	 * Sanitise input arguments.
-	 * We should be pageblock aligned for CMA.
-	 */
-	alignment = max(alignment, (phys_addr_t)(PAGE_SIZE << pageblock_order));
-	size = ALIGN(size, alignment);
-	/*
-	 * Reserve memory
-	 * Use __memblock_alloc_base() since
-	 * memblock_alloc_base() panic()s.
-	 */
-	addr = __memblock_alloc_base(size, alignment, 0);
-	if (!addr) {
-		base_pfn = -ENOMEM;
-		goto err;
-	} else
-		base_pfn = PFN_DOWN(addr);
-
-	/*
-	 * Each reserved area must be initialised later, when more kernel
-	 * subsystems (like slab allocator) are available.
-	 */
-	cma->base_pfn = base_pfn;
-	cma->count    = size >> PAGE_SHIFT;
-	pr_info("CMA: reserved %ld MiB\n", (unsigned long)size / SZ_1M);
-	return 0;
-err:
-	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
-	return base_pfn;
-}
-
-/**
- * kvm_alloc_cma() - allocate pages from contiguous area
- * @nr_pages: Requested number of pages.
- * @align_pages: Requested alignment in number of pages
- *
- * This function allocates memory buffer for hash pagetable.
- */
-struct page *kvm_alloc_cma(unsigned long nr_pages, unsigned long align_pages)
-{
-	int ret;
-	struct page *page = NULL;
-	struct kvm_cma *cma = &kvm_cma_area;
-	unsigned long chunk_count, nr_chunk;
-	unsigned long mask, pfn, pageno, start = 0;
-
-
-	if (!cma || !cma->count)
-		return NULL;
-
-	pr_debug("%s(cma %p, count %lu, align pages %lu)\n", __func__,
-		 (void *)cma, nr_pages, align_pages);
-
-	if (!nr_pages)
-		return NULL;
-	/*
-	 * align mask with chunk size. The bit tracks pages in chunk size
-	 */
-	VM_BUG_ON(!is_power_of_2(align_pages));
-	mask = (align_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT)) - 1;
-	BUILD_BUG_ON(PAGE_SHIFT > KVM_CMA_CHUNK_ORDER);
-
-	chunk_count = cma->count >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	nr_chunk = nr_pages >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	for (;;) {
-		pageno = bitmap_find_next_zero_area(cma->bitmap, chunk_count,
-						    start, nr_chunk, mask);
-		if (pageno >= chunk_count)
-			break;
-
-		pfn = cma->base_pfn + (pageno << (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT));
-		ret = alloc_contig_range(pfn, pfn + nr_pages, MIGRATE_CMA);
-		if (ret == 0) {
-			bitmap_set(cma->bitmap, pageno, nr_chunk);
-			page = pfn_to_page(pfn);
-			memset(pfn_to_kaddr(pfn), 0, nr_pages << PAGE_SHIFT);
-			break;
-		} else if (ret != -EBUSY) {
-			break;
-		}
-		pr_debug("%s(): memory range at %p is busy, retrying\n",
-			 __func__, pfn_to_page(pfn));
-		/* try again with a bit different memory target */
-		start = pageno + mask + 1;
-	}
-	mutex_unlock(&kvm_cma_mutex);
-	pr_debug("%s(): returned %p\n", __func__, page);
-	return page;
-}
-
-/**
- * kvm_release_cma() - release allocated pages for hash pagetable
- * @pages: Allocated pages.
- * @nr_pages: Number of allocated pages.
- *
- * This function releases memory allocated by kvm_alloc_cma().
- * It returns false when provided pages do not belong to contiguous area and
- * true otherwise.
- */
-bool kvm_release_cma(struct page *pages, unsigned long nr_pages)
-{
-	unsigned long pfn;
-	unsigned long nr_chunk;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	if (!cma || !pages)
-		return false;
-
-	pr_debug("%s(page %p count %lu)\n", __func__, (void *)pages, nr_pages);
-
-	pfn = page_to_pfn(pages);
-
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
-		return false;
-
-	VM_BUG_ON(pfn + nr_pages > cma->base_pfn + cma->count);
-	nr_chunk = nr_pages >>  (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-
-	mutex_lock(&kvm_cma_mutex);
-	bitmap_clear(cma->bitmap,
-		     (pfn - cma->base_pfn) >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT),
-		     nr_chunk);
-	free_contig_range(pfn, nr_pages);
-	mutex_unlock(&kvm_cma_mutex);
-
-	return true;
-}
-
-static int __init kvm_cma_activate_area(unsigned long base_pfn,
-					unsigned long count)
-{
-	unsigned long pfn = base_pfn;
-	unsigned i = count >> pageblock_order;
-	struct zone *zone;
-
-	WARN_ON_ONCE(!pfn_valid(pfn));
-	zone = page_zone(pfn_to_page(pfn));
-	do {
-		unsigned j;
-		base_pfn = pfn;
-		for (j = pageblock_nr_pages; j; --j, pfn++) {
-			WARN_ON_ONCE(!pfn_valid(pfn));
-			/*
-			 * alloc_contig_range requires the pfn range
-			 * specified to be in the same zone. Make this
-			 * simple by forcing the entire CMA resv range
-			 * to be in the same zone.
-			 */
-			if (page_zone(pfn_to_page(pfn)) != zone)
-				return -EINVAL;
-		}
-		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
-	} while (--i);
-	return 0;
-}
-
-static int __init kvm_cma_init_reserved_areas(void)
-{
-	int bitmap_size, ret;
-	unsigned long chunk_count;
-	struct kvm_cma *cma = &kvm_cma_area;
-
-	pr_debug("%s()\n", __func__);
-	if (!cma->count)
-		return 0;
-	chunk_count = cma->count >> (KVM_CMA_CHUNK_ORDER - PAGE_SHIFT);
-	bitmap_size = BITS_TO_LONGS(chunk_count) * sizeof(long);
-	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-	if (!cma->bitmap)
-		return -ENOMEM;
-
-	ret = kvm_cma_activate_area(cma->base_pfn, cma->count);
-	if (ret)
-		goto error;
-	return 0;
-
-error:
-	kfree(cma->bitmap);
-	return ret;
-}
-core_initcall(kvm_cma_init_reserved_areas);
diff --git a/arch/powerpc/kvm/book3s_hv_cma.h b/arch/powerpc/kvm/book3s_hv_cma.h
deleted file mode 100644
index 655144f..0000000
--- a/arch/powerpc/kvm/book3s_hv_cma.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * Contiguous Memory Allocator for ppc KVM hash pagetable  based on CMA
- * for DMA mapping framework
- *
- * Copyright IBM Corporation, 2013
- * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- *
- */
-
-#ifndef __POWERPC_KVM_CMA_ALLOC_H__
-#define __POWERPC_KVM_CMA_ALLOC_H__
-/*
- * Both RMA and Hash page allocation will be multiple of 256K.
- */
-#define KVM_CMA_CHUNK_ORDER	18
-
-extern struct page *kvm_alloc_cma(unsigned long nr_pages,
-				  unsigned long align_pages);
-extern bool kvm_release_cma(struct page *pages, unsigned long nr_pages);
-extern long kvm_cma_declare_contiguous(phys_addr_t size,
-				       phys_addr_t alignment) __init;
-#endif
-- 
1.7.9.5


^ permalink raw reply related	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
  2014-06-03  1:11   ` Joonsoo Kim
                       ` (3 preceding siblings ...)
  (?)
@ 2014-06-03  6:56     ` Michal Nazarewicz
  -1 siblings, 0 replies; 74+ messages in thread
From: Michal Nazarewicz @ 2014-06-03  6:56 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

On Tue, Jun 03 2014, Joonsoo Kim wrote:
> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Some small comments below, but in general

Acked-by: Michal Nazarewicz <mina86@mina86.com>

>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *

Superfluous empty comment line.

Also, I'm not certain whether this copyright notice is appropriate here,
but that's another story.

> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif

> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@

> +static int __init cma_activate_area(struct cma *cma)
> +{
> +	int max_bitmapno = cma_bitmap_max_no(cma);
> +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> +	unsigned i = cma->count >> pageblock_order;
> +	struct zone *zone;
> +
> +	pr_debug("%s()\n", __func__);
> +	if (!cma->count)
> +		return 0;

Alternatively:

+	if (!i)
+		return 0;

> +
> +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> +	if (!cma->bitmap)
> +		return -ENOMEM;
> +
> +	WARN_ON_ONCE(!pfn_valid(pfn));
> +	zone = page_zone(pfn_to_page(pfn));
> +
> +	do {
> +		unsigned j;
> +
> +		base_pfn = pfn;
> +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> +			WARN_ON_ONCE(!pfn_valid(pfn));
> +			/*
> +			 * alloc_contig_range requires the pfn range
> +			 * specified to be in the same zone. Make this
> +			 * simple by forcing the entire CMA resv range
> +			 * to be in the same zone.
> +			 */
> +			if (page_zone(pfn_to_page(pfn)) != zone)
> +				goto err;
> +		}
> +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> +	} while (--i);
> +
> +	mutex_init(&cma->lock);
> +	return 0;
> +
> +err:
> +	kfree(cma->bitmap);
> +	return -EINVAL;
> +}

> +static int __init cma_init_reserved_areas(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < cma_area_count; i++) {
> +		int ret = cma_activate_area(&cma_areas[i]);
> +
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}

Or even:

static int __init cma_init_reserved_areas(void)
{
	int i, ret = 0;
	for (i = 0; !ret && i < cma_area_count; ++i)
		ret = cma_activate_area(&cma_areas[i]);
	return ret;
}

> +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma)
> +{
> +	struct cma *cma = &cma_areas[cma_area_count];

Perhaps it would make sense to move this initialisation to the far end
of this function?
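
Something along these lines perhaps (just an illustrative sketch of the
reordering, not tested):

	struct cma *cma;
	int ret = 0;

	/* ... sanity checks, alignment fix-ups and the memblock
	 * reservation exactly as in the patch ... */

	/* Only grab the slot once nothing can fail any more: */
	cma = &cma_areas[cma_area_count++];
	cma->base_pfn = PFN_DOWN(base);
	cma->count = size >> PAGE_SHIFT;
	cma->bitmap_shift = bitmap_shift;
	*res_cma = cma;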

> +	int ret = 0;
> +
> +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> +			__func__, (unsigned long)size, (unsigned long)base,
> +			(unsigned long)limit, (unsigned long)alignment);
> +
> +	/* Sanity checks */
> +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> +		pr_err("Not enough slots for CMA reserved regions!\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (!size)
> +		return -EINVAL;
> +
> +	/*
> +	 * Sanitise input arguments.
> +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> +	 * and CMA property will be broken.
> +	 */
> +	alignment >>= PAGE_SHIFT;
> +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> +						(int)alignment);
> +	base = ALIGN(base, alignment);
> +	size = ALIGN(size, alignment);
> +	limit &= ~(alignment - 1);
> +	/* size should be aligned with bitmap_shift */
> +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));

cma->bitmap_shift is not yet initialised thus the above line should be:

	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

> +
> +	/* Reserve memory */
> +	if (base && fixed) {
> +		if (memblock_is_region_reserved(base, size) ||
> +		    memblock_reserve(base, size) < 0) {
> +			ret = -EBUSY;
> +			goto err;
> +		}
> +	} else {
> +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> +							limit);
> +		if (!addr) {
> +			ret = -ENOMEM;
> +			goto err;
> +		} else {
> +			base = addr;
> +		}
> +	}
> +
> +	/*
> +	 * Each reserved area must be initialised later, when more kernel
> +	 * subsystems (like slab allocator) are available.
> +	 */
> +	cma->base_pfn = PFN_DOWN(base);
> +	cma->count = size >> PAGE_SHIFT;
> +	cma->bitmap_shift = bitmap_shift;
> +	*res_cma = cma;
> +	cma_area_count++;
> +
> +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> +		(unsigned long)base);

Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
“cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
here.
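
For illustration, if mm/cma.c starts with

	#define pr_fmt(fmt) "cma: " fmt

the way book3s_hv_cma.c used "kvm_cma: ", then pr_info() prepends that
prefix at compile time and the line above would print as
"cma: CMA: reserved 16 MiB at ..." (the size and address here are made-up
examples), so dropping the "CMA: " from the format string gives the
intended "cma: reserved 16 MiB at ...".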

> +
> +	return 0;
> +
> +err:
> +	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
> +	return ret;
> +}

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-03  6:56     ` Michal Nazarewicz
  0 siblings, 0 replies; 74+ messages in thread
From: Michal Nazarewicz @ 2014-06-03  6:56 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

On Tue, Jun 03 2014, Joonsoo Kim wrote:
> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Some small comments below, but in general

Acked-by: Michal Nazarewicz <mina86@mina86.com>

>
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *

Superfluous empty comment line.

Also, I'm not certain whether this copyright notice is appropriate here,
but that's another story.

> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif

> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@

> +static int __init cma_activate_area(struct cma *cma)
> +{
> +	int max_bitmapno = cma_bitmap_max_no(cma);
> +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> +	unsigned i = cma->count >> pageblock_order;
> +	struct zone *zone;
> +
> +	pr_debug("%s()\n", __func__);
> +	if (!cma->count)
> +		return 0;

Alternatively:

+	if (!i)
+		return 0;

> +
> +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> +	if (!cma->bitmap)
> +		return -ENOMEM;
> +
> +	WARN_ON_ONCE(!pfn_valid(pfn));
> +	zone = page_zone(pfn_to_page(pfn));
> +
> +	do {
> +		unsigned j;
> +
> +		base_pfn = pfn;
> +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> +			WARN_ON_ONCE(!pfn_valid(pfn));
> +			/*
> +			 * alloc_contig_range requires the pfn range
> +			 * specified to be in the same zone. Make this
> +			 * simple by forcing the entire CMA resv range
> +			 * to be in the same zone.
> +			 */
> +			if (page_zone(pfn_to_page(pfn)) != zone)
> +				goto err;
> +		}
> +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> +	} while (--i);
> +
> +	mutex_init(&cma->lock);
> +	return 0;
> +
> +err:
> +	kfree(cma->bitmap);
> +	return -EINVAL;
> +}

> +static int __init cma_init_reserved_areas(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < cma_area_count; i++) {
> +		int ret = cma_activate_area(&cma_areas[i]);
> +
> +		if (ret)
> +			return ret;
> +	}
> +
> +	return 0;
> +}

Or even:

static int __init cma_init_reserved_areas(void)
{
	int i, ret = 0;
	for (i = 0; !ret && i < cma_area_count; ++i)
		ret = cma_activate_area(&cma_areas[i]);
	return ret;
}

> +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma)
> +{
> +	struct cma *cma = &cma_areas[cma_area_count];

Perhaps it would make sense to move this initialisation to the far end
of this function?
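
One possible shape of that (just a sketch, reusing the assignments this
patch already makes at the end of the function -- the idea being to touch
cma_areas[] only once every check and the reservation have succeeded):

	struct cma *cma;
	int ret = 0;

	/* ... sanity checks and memblock reservation as in this patch ... */

	cma = &cma_areas[cma_area_count];
	cma->base_pfn = PFN_DOWN(base);
	cma->count = size >> PAGE_SHIFT;
	cma->bitmap_shift = bitmap_shift;
	*res_cma = cma;
	cma_area_count++;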

> +	int ret = 0;
> +
> +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> +			__func__, (unsigned long)size, (unsigned long)base,
> +			(unsigned long)limit, (unsigned long)alignment);
> +
> +	/* Sanity checks */
> +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> +		pr_err("Not enough slots for CMA reserved regions!\n");
> +		return -ENOSPC;
> +	}
> +
> +	if (!size)
> +		return -EINVAL;
> +
> +	/*
> +	 * Sanitise input arguments.
> +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> +	 * and CMA property will be broken.
> +	 */
> +	alignment >>= PAGE_SHIFT;
> +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> +						(int)alignment);
> +	base = ALIGN(base, alignment);
> +	size = ALIGN(size, alignment);
> +	limit &= ~(alignment - 1);
> +	/* size should be aligned with bitmap_shift */
> +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));

cma->bitmap_shift is not yet initialised, so the above line should be:

	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

> +
> +	/* Reserve memory */
> +	if (base && fixed) {
> +		if (memblock_is_region_reserved(base, size) ||
> +		    memblock_reserve(base, size) < 0) {
> +			ret = -EBUSY;
> +			goto err;
> +		}
> +	} else {
> +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> +							limit);
> +		if (!addr) {
> +			ret = -ENOMEM;
> +			goto err;
> +		} else {
> +			base = addr;
> +		}
> +	}
> +
> +	/*
> +	 * Each reserved area must be initialised later, when more kernel
> +	 * subsystems (like slab allocator) are available.
> +	 */
> +	cma->base_pfn = PFN_DOWN(base);
> +	cma->count = size >> PAGE_SHIFT;
> +	cma->bitmap_shift = bitmap_shift;
> +	*res_cma = cma;
> +	cma_area_count++;
> +
> +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> +		(unsigned long)base);

Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
“cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
here.
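
For reference, the mechanism being referred to (illustration only; the
define is the one this patch adds at the top of mm/cma.c):

	#define pr_fmt(fmt) "cma: " fmt

With that in effect, every pr_info()/pr_err() in the file gets the "cma: "
prefix prepended, so the message above would come out roughly as
"cma: CMA: reserved 16 MiB at ...", hence the suggestion to drop the
leading "CMA:".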

> +
> +	return 0;
> +
> +err:
> +	pr_err("CMA: failed to reserve %ld MiB\n", (unsigned long)size / SZ_1M);
> +	return ret;
> +}

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
  2014-06-03  1:11   ` Joonsoo Kim
                       ` (3 preceding siblings ...)
  (?)
@ 2014-06-03  7:00     ` Michal Nazarewicz
  -1 siblings, 0 replies; 74+ messages in thread
From: Michal Nazarewicz @ 2014-06-03  7:00 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

On Tue, Jun 03 2014, Joonsoo Kim wrote:
> Now, we have general CMA reserved area management framework,
> so use it for future maintainability. There is no functional change.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Michal Nazarewicz <mina86@mina86.com>

> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index dfb1dc9..ecb85ac 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -53,9 +53,10 @@
>  
>  #ifdef __KERNEL__
>  
> +#include <linux/device.h>
> +

Why is this suddenly required?

>  struct cma;
>  struct page;
> -struct device;
>  
>  #ifdef CONFIG_DMA_CMA

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
  2014-06-03  1:11   ` Joonsoo Kim
                       ` (3 preceding siblings ...)
  (?)
@ 2014-06-03  7:02     ` Michal Nazarewicz
  -1 siblings, 0 replies; 74+ messages in thread
From: Michal Nazarewicz @ 2014-06-03  7:02 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

On Tue, Jun 03 2014, Joonsoo Kim wrote:
> Now, we have general CMA reserved area management framework,
> so use it for future maintainability. There is no functional change.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Michal Nazarewicz <mina86@mina86.com>

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
  2014-06-03  7:02     ` Michal Nazarewicz
                         ` (2 preceding siblings ...)
  (?)
@ 2014-06-03  9:20       ` Paolo Bonzini
  -1 siblings, 0 replies; 74+ messages in thread
From: Paolo Bonzini @ 2014-06-03  9:20 UTC (permalink / raw)
  To: Michal Nazarewicz, Joonsoo Kim, Andrew Morton, Aneesh Kumar K.V,
	Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On 03/06/2014 09:02, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
>> Now, we have general CMA reserved area management framework,
>> so use it for future maintainability. There is no functional change.
>>
>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Acked-by: Michal Nazarewicz <mina86@mina86.com>
>

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Aneesh, can you test this series?

Paolo

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 3/3] PPC, KVM, CMA: use general CMA reserved area management framework
  2014-06-03  9:20       ` Paolo Bonzini
                           ` (2 preceding siblings ...)
  (?)
@ 2014-06-05 17:00         ` Aneesh Kumar K.V
  -1 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:00 UTC (permalink / raw)
  To: Paolo Bonzini, Michal Nazarewicz, Joonsoo Kim, Andrew Morton,
	Marek Szyprowski
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 03/06/2014 09:02, Michal Nazarewicz wrote:
>> On Tue, Jun 03 2014, Joonsoo Kim wrote:
>>> Now, we have general CMA reserved area management framework,
>>> so use it for future maintainability. There is no functional change.
>>>
>>> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>>
>> Acked-by: Michal Nazarewicz <mina86@mina86.com>
>>
>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Aneesh, can you test this series?

Sorry for the late reply. I will test this and update here.

-aneesh


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
  2014-06-03  1:11   ` Joonsoo Kim
                       ` (3 preceding siblings ...)
  (?)
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  -1 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:39 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way the patches are split makes them difficult to review. Would it be
possible to make the changes against one implementation and then move that
to generic code? That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

For example, the above can be a separate patch along with further DMA-related
cleanups.


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we have 7 declare contiguous requests, will a kvm cma allocation request fail?
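
(Sketch of the concern, assuming the generic cma_areas[] array is bounded
by CONFIG_CMA_AREAS: the slot check quoted earlier in the thread returns
-ENOSPC once the array is full, so any further cma_declare_contiguous()
caller -- kvm included -- would then fail:)

	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
		pr_err("Not enough slots for CMA reserved regions!\n");
		return -ENOSPC;
	}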

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm-specific alloc chunks. Maybe
you should do this as a patch against the kvm implementation and then
move the code to the generic version?
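
(Illustration only, not code taken from the patch: with bitmap_shift = 6 a
single bitmap bit would track 2^6 = 64 pages, i.e. the kvm_cma granularity
described in the cover letter, while bitmap_shift = 0 keeps the current DMA
behaviour of one bit per page.  The bit index for a given pfn would then be
something like:)

	bitmap_no = (pfn - cma->base_pfn) >> cma->bitmap_shift;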

> +	struct mutex	lock;
> +};
> +

-aneesh


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:39 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way patches are split makes it difficult to review. Will it be
possible to make changes against one implementation and them move that
to generic code. That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

for ex: The above can be a seperate patch along with further DMA related
cleanups . 


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we have 7 declare continugous request, a kvm cma allocation request will fail ?

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm specific alloc chunks. May
be you should do this as a patch against kvm implementation and then
move the code to generic ?

> +	struct mutex	lock;
> +};
> +

-aneesh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:39 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way patches are split makes it difficult to review. Will it be
possible to make changes against one implementation and them move that
to generic code. That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

for ex: The above can be a seperate patch along with further DMA related
cleanups . 


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we have 7 declare continugous request, a kvm cma allocation request will fail ?

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm specific alloc chunks. May
be you should do this as a patch against kvm implementation and then
move the code to generic ?

> +	struct mutex	lock;
> +};
> +

-aneesh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:39 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Marek Szyprowski, Michal Nazarewicz
  Cc: Russell King - ARM Linux, kvm, linux-mm, Gleb Natapov,
	Greg Kroah-Hartman, Alexander Graf, kvm-ppc, linux-kernel,
	Minchan Kim, Paul Mackerras, Paolo Bonzini, Joonsoo Kim,
	linuxppc-dev, linux-arm-kernel

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way patches are split makes it difficult to review. Will it be
possible to make changes against one implementation and them move that
to generic code. That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

for ex: The above can be a seperate patch along with further DMA related
cleanups . 


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we have 7 declare continugous request, a kvm cma allocation request will fail ?

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm-specific allocation chunks. Maybe
you should do this as a patch against the kvm implementation and then
move the code to generic?

> +	struct mutex	lock;
> +};
> +

-aneesh

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:39 UTC (permalink / raw)
  To: linux-arm-kernel

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way the patches are split makes it difficult to review. Would it be
possible to make the changes against one implementation and then move that
to generic code? That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

For example, the above can be a separate patch along with further DMA-related
cleanups.


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we already have 7 declare_contiguous requests, will a kvm CMA allocation request fail?

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm-specific allocation chunks. Maybe
you should do this as a patch against the kvm implementation and then
move the code to generic?

> +	struct mutex	lock;
> +};
> +

-aneesh

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-05 17:39     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 74+ messages in thread
From: Aneesh Kumar K.V @ 2014-06-05 17:51 UTC (permalink / raw)
  To: Joonsoo Kim, Andrew Morton, Marek Szyprowski, Michal Nazarewicz
  Cc: Minchan Kim, Russell King - ARM Linux, Greg Kroah-Hartman,
	Paolo Bonzini, Gleb Natapov, Alexander Graf,
	Benjamin Herrenschmidt, Paul Mackerras, linux-mm, linux-kernel,
	linux-arm-kernel, kvm, kvm-ppc, linuxppc-dev, Joonsoo Kim

Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:

> Currently, there are two users on CMA functionality, one is the DMA
> subsystem and the other is the kvm on powerpc. They have their own code
> to manage CMA reserved area even if they looks really similar.
> From my guess, it is caused by some needs on bitmap management. Kvm side
> wants to maintain bitmap not for 1 page, but for more size. Eventually it
> use bitmap where one bit represents 64 pages.
>
> When I implement CMA related patches, I should change those two places
> to apply my change and it seem to be painful to me. I want to change
> this situation and reduce future code management overhead through
> this patch.
>
> This change could also help developer who want to use CMA in their
> new feature development, since they can use CMA easily without
> copying & pasting this reserved area management code.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>


The way the patches are split makes it difficult to review. Would it be
possible to make the changes against one implementation and then move that
to generic code? That helps in finding out what exactly changed.

>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..b3fe1cc 100644
> --- a/drivers/base/Kconfig
> +++ b/drivers/base/Kconfig
> @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
>
>  	  If unsure, leave the default value "8".
>
> -config CMA_AREAS
> +config DMA_CMA_AREAS
>  	int "Maximum count of the CMA device-private areas"
>  	default 7
>  	help

For example, the above can be a separate patch along with further DMA-related
cleanups.


> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..48cdac8 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/base/dma-contiguous.c
> @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
>  	return 0;
>  }
>
> -static struct cma cma_areas[MAX_CMA_AREAS];
> +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
>  static unsigned cma_area_count;
>
>  static int __init cma_init_reserved_areas(void)
> diff --git a/include/linux/cma.h b/include/linux/cma.h
> new file mode 100644
> index 0000000..60ba06f
> --- /dev/null
> +++ b/include/linux/cma.h
> @@ -0,0 +1,28 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + *
> + */
> +
> +#ifndef __CMA_H__
> +#define __CMA_H__
> +
> +struct cma;
> +
> +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> +				unsigned long align);
> +extern bool cma_release(struct cma *cma, struct page *pages,
> +				unsigned long count);
> +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> +				phys_addr_t limit, phys_addr_t alignment,
> +				unsigned long bitmap_shift, bool fixed,
> +				struct cma **res_cma);
> +#endif
> diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> index 772eab5..dfb1dc9 100644
> --- a/include/linux/dma-contiguous.h
> +++ b/include/linux/dma-contiguous.h
> @@ -63,7 +63,7 @@ struct device;
>   * There is always at least global CMA area and a few optional device
>   * private areas configured in kernel .config.
>   */
> -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
>
>  extern struct cma *dma_contiguous_default_area;
>
> @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
>
>  #else
>
> -#define MAX_CMA_AREAS	(0)
> +#define MAX_DMA_CMA_AREAS	(0)
>
>  static inline struct cma *dev_get_cma_area(struct device *dev)
>  {
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 7511b4a..0877ddc 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -515,6 +515,17 @@ config CMA_DEBUG
>  	  processing calls such as dma_alloc_from_contiguous().
>  	  This option does not affect warning and error messages.
>
> +config CMA_AREAS
> +	int "Maximum count of the CMA areas"
> +	depends on CMA
> +	default 7
> +	help
> +	  CMA allows to create CMA areas for particular purpose, mainly,
> +	  used as device private area. This parameter sets the maximum
> +	  number of CMA area in the system.
> +
> +	  If unsure, leave the default value "7".
> +

If we already have 7 declare_contiguous requests, will a kvm CMA allocation request fail?

>  config ZBUD
>  	tristate
>  	default n
> diff --git a/mm/Makefile b/mm/Makefile
> index 1eaa70b..bc0422b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
>  obj-$(CONFIG_ZBUD)	+= zbud.o
>  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
>  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> +obj-$(CONFIG_CMA)	+= cma.o
> diff --git a/mm/cma.c b/mm/cma.c
> new file mode 100644
> index 0000000..0dae88d
> --- /dev/null
> +++ b/mm/cma.c
> @@ -0,0 +1,329 @@
> +/*
> + * Contiguous Memory Allocator
> + *
> + * Copyright (c) 2010-2011 by Samsung Electronics.
> + * Copyright IBM Corporation, 2013
> + * Copyright LG Electronics Inc., 2014
> + * Written by:
> + *	Marek Szyprowski <m.szyprowski@samsung.com>
> + *	Michal Nazarewicz <mina86@mina86.com>
> + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License or (at your optional) any later version of the license.
> + */
> +
> +#define pr_fmt(fmt) "cma: " fmt
> +
> +#ifdef CONFIG_CMA_DEBUG
> +#ifndef DEBUG
> +#  define DEBUG
> +#endif
> +#endif
> +
> +#include <linux/memblock.h>
> +#include <linux/err.h>
> +#include <linux/mm.h>
> +#include <linux/mutex.h>
> +#include <linux/sizes.h>
> +#include <linux/slab.h>
> +
> +struct cma {
> +	unsigned long	base_pfn;
> +	unsigned long	count;
> +	unsigned long	*bitmap;
> +	unsigned long	bitmap_shift;

I guess this is added to accommodate the kvm-specific allocation chunks. Maybe
you should do this as a patch against the kvm implementation and then
move the code to generic?

> +	struct mutex	lock;
> +};
> +

-aneesh


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
  2014-06-03  6:56     ` Michal Nazarewicz
                         ` (2 preceding siblings ...)
  (?)
@ 2014-06-10  2:41       ` Joonsoo Kim
  -1 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:41 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Some small comments below, but in general
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>

Hello, Michal.

Thanks!

> 
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> 
> Superfluous empty comment line.
> 
> Also, I'm not certain whether this copyright notice is appropriate here,
> but that's another story.

Yeah, I will remove the copyright notice from the .h file.

> 
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> 
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> 
> > +static int __init cma_activate_area(struct cma *cma)
> > +{
> > +	int max_bitmapno = cma_bitmap_max_no(cma);
> > +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> > +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> > +	unsigned i = cma->count >> pageblock_order;
> > +	struct zone *zone;
> > +
> > +	pr_debug("%s()\n", __func__);
> > +	if (!cma->count)
> > +		return 0;
> 
> Alternatively:
> 
> +	if (!i)
> +		return 0;

I prefer cma->count over i, since it conveys its meaning by itself.

> > +
> > +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > +	if (!cma->bitmap)
> > +		return -ENOMEM;
> > +
> > +	WARN_ON_ONCE(!pfn_valid(pfn));
> > +	zone = page_zone(pfn_to_page(pfn));
> > +
> > +	do {
> > +		unsigned j;
> > +
> > +		base_pfn = pfn;
> > +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > +			WARN_ON_ONCE(!pfn_valid(pfn));
> > +			/*
> > +			 * alloc_contig_range requires the pfn range
> > +			 * specified to be in the same zone. Make this
> > +			 * simple by forcing the entire CMA resv range
> > +			 * to be in the same zone.
> > +			 */
> > +			if (page_zone(pfn_to_page(pfn)) != zone)
> > +				goto err;
> > +		}
> > +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > +	} while (--i);
> > +
> > +	mutex_init(&cma->lock);
> > +	return 0;
> > +
> > +err:
> > +	kfree(cma->bitmap);
> > +	return -EINVAL;
> > +}
> 
> > +static int __init cma_init_reserved_areas(void)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < cma_area_count; i++) {
> > +		int ret = cma_activate_area(&cma_areas[i]);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Or even:
> 
> static int __init cma_init_reserved_areas(void)
> {
> 	int i, ret = 0;
> 	for (i = 0; !ret && i < cma_area_count; ++i)
> 		ret = cma_activate_area(&cma_areas[i]);
> 	return ret;
> }

I think the original implementation is better, since it seems
more readable to me.

> > +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma)
> > +{
> > +	struct cma *cma = &cma_areas[cma_area_count];
> 
> Perhaps it would make sense to move this initialisation to the far end
> of this function?

Yes, I will move it down.
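
For illustration, a rough sketch of the intended reordering (only a sketch,
not the exact next revision): look up the cma_areas[] slot only after all
sanity checks and the memblock reservation have succeeded:

	int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
				phys_addr_t limit, phys_addr_t alignment,
				unsigned long bitmap_shift, bool fixed,
				struct cma **res_cma)
	{
		struct cma *cma;

		/* ... sanity checks, argument sanitising, memblock reservation ... */

		/* Only now touch the global array and publish the area. */
		cma = &cma_areas[cma_area_count];
		cma->base_pfn = PFN_DOWN(base);
		cma->count = size >> PAGE_SHIFT;
		cma->bitmap_shift = bitmap_shift;
		*res_cma = cma;
		cma_area_count++;
		return 0;
	}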

> > +	int ret = 0;
> > +
> > +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> > +			__func__, (unsigned long)size, (unsigned long)base,
> > +			(unsigned long)limit, (unsigned long)alignment);
> > +
> > +	/* Sanity checks */
> > +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> > +		pr_err("Not enough slots for CMA reserved regions!\n");
> > +		return -ENOSPC;
> > +	}
> > +
> > +	if (!size)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Sanitise input arguments.
> > +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> > +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> > +	 * and CMA property will be broken.
> > +	 */
> > +	alignment >>= PAGE_SHIFT;
> > +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> > +						(int)alignment);
> > +	base = ALIGN(base, alignment);
> > +	size = ALIGN(size, alignment);
> > +	limit &= ~(alignment - 1);
> > +	/* size should be aligned with bitmap_shift */
> > +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
> 
> cma->bitmap_shift is not yet initialised, thus the above line should be:
> 
> 	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

Yes, I will fix it.
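
For context, a small numeric sketch of the granularity this check enforces,
assuming (as for the kvm case) that one bitmap bit covers 1 << bitmap_shift
pages:

	/* Illustration only: bitmap_shift = 6 is the kvm case, 64 pages per bit. */
	unsigned long area_pages   = SZ_1G >> PAGE_SHIFT;	/* 262144 pages with 4 KiB pages */
	unsigned long bitmap_shift = 6;
	unsigned long bitmap_bits  = area_pages >> bitmap_shift;	/* 4096 bits */

	/* The reserved size must be a multiple of the per-bit chunk: */
	BUG_ON(!IS_ALIGNED(area_pages, 1UL << bitmap_shift));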

> > +
> > +	/* Reserve memory */
> > +	if (base && fixed) {
> > +		if (memblock_is_region_reserved(base, size) ||
> > +		    memblock_reserve(base, size) < 0) {
> > +			ret = -EBUSY;
> > +			goto err;
> > +		}
> > +	} else {
> > +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> > +							limit);
> > +		if (!addr) {
> > +			ret = -ENOMEM;
> > +			goto err;
> > +		} else {
> > +			base = addr;
> > +		}
> > +	}
> > +
> > +	/*
> > +	 * Each reserved area must be initialised later, when more kernel
> > +	 * subsystems (like slab allocator) are available.
> > +	 */
> > +	cma->base_pfn = PFN_DOWN(base);
> > +	cma->count = size >> PAGE_SHIFT;
> > +	cma->bitmap_shift = bitmap_shift;
> > +	*res_cma = cma;
> > +	cma_area_count++;
> > +
> > +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> > +		(unsigned long)base);
> 
> Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
> “cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
> here.

Okay. Will do.
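
Something along these lines, I suppose (just a sketch; pr_fmt() already
prepends "cma: "):

	pr_info("reserved %ld MiB at %08lx\n",
		(unsigned long)size / SZ_1M, (unsigned long)base);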

Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-10  2:41       ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:41 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Some small comments below, but in general
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>

Hello, Michal.

Thanks!

> 
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> 
> Superfluous empty comment line.
> 
> Also, I'm not certain whether this copyright notice is appropriate here,
> but that's another story.

Yeah, I will remove the copyright notice from the .h file.

> 
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> 
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> 
> > +static int __init cma_activate_area(struct cma *cma)
> > +{
> > +	int max_bitmapno = cma_bitmap_max_no(cma);
> > +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> > +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> > +	unsigned i = cma->count >> pageblock_order;
> > +	struct zone *zone;
> > +
> > +	pr_debug("%s()\n", __func__);
> > +	if (!cma->count)
> > +		return 0;
> 
> Alternatively:
> 
> +	if (!i)
> +		return 0;

I prefer cma->count over i, since it conveys its meaning by itself.

> > +
> > +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > +	if (!cma->bitmap)
> > +		return -ENOMEM;
> > +
> > +	WARN_ON_ONCE(!pfn_valid(pfn));
> > +	zone = page_zone(pfn_to_page(pfn));
> > +
> > +	do {
> > +		unsigned j;
> > +
> > +		base_pfn = pfn;
> > +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > +			WARN_ON_ONCE(!pfn_valid(pfn));
> > +			/*
> > +			 * alloc_contig_range requires the pfn range
> > +			 * specified to be in the same zone. Make this
> > +			 * simple by forcing the entire CMA resv range
> > +			 * to be in the same zone.
> > +			 */
> > +			if (page_zone(pfn_to_page(pfn)) != zone)
> > +				goto err;
> > +		}
> > +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > +	} while (--i);
> > +
> > +	mutex_init(&cma->lock);
> > +	return 0;
> > +
> > +err:
> > +	kfree(cma->bitmap);
> > +	return -EINVAL;
> > +}
> 
> > +static int __init cma_init_reserved_areas(void)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < cma_area_count; i++) {
> > +		int ret = cma_activate_area(&cma_areas[i]);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Or even:
> 
> static int __init cma_init_reserved_areas(void)
> {
> 	int i, ret = 0;
> 	for (i = 0; !ret && i < cma_area_count; ++i)
> 		ret = cma_activate_area(&cma_areas[i]);
> 	return ret;
> }

I think the original implementation is better, since it seems
more readable to me.

> > +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma)
> > +{
> > +	struct cma *cma = &cma_areas[cma_area_count];
> 
> Perhaps it would make sense to move this initialisation to the far end
> of this function?

Yes, I will move it down.

> > +	int ret = 0;
> > +
> > +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> > +			__func__, (unsigned long)size, (unsigned long)base,
> > +			(unsigned long)limit, (unsigned long)alignment);
> > +
> > +	/* Sanity checks */
> > +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> > +		pr_err("Not enough slots for CMA reserved regions!\n");
> > +		return -ENOSPC;
> > +	}
> > +
> > +	if (!size)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Sanitise input arguments.
> > +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> > +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> > +	 * and CMA property will be broken.
> > +	 */
> > +	alignment >>= PAGE_SHIFT;
> > +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> > +						(int)alignment);
> > +	base = ALIGN(base, alignment);
> > +	size = ALIGN(size, alignment);
> > +	limit &= ~(alignment - 1);
> > +	/* size should be aligned with bitmap_shift */
> > +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
> 
> cma->bitmap_shift is not yet initialised, thus the above line should be:
> 
> 	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

Yes, I will fix it.

> > +
> > +	/* Reserve memory */
> > +	if (base && fixed) {
> > +		if (memblock_is_region_reserved(base, size) ||
> > +		    memblock_reserve(base, size) < 0) {
> > +			ret = -EBUSY;
> > +			goto err;
> > +		}
> > +	} else {
> > +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> > +							limit);
> > +		if (!addr) {
> > +			ret = -ENOMEM;
> > +			goto err;
> > +		} else {
> > +			base = addr;
> > +		}
> > +	}
> > +
> > +	/*
> > +	 * Each reserved area must be initialised later, when more kernel
> > +	 * subsystems (like slab allocator) are available.
> > +	 */
> > +	cma->base_pfn = PFN_DOWN(base);
> > +	cma->count = size >> PAGE_SHIFT;
> > +	cma->bitmap_shift = bitmap_shift;
> > +	*res_cma = cma;
> > +	cma_area_count++;
> > +
> > +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> > +		(unsigned long)base);
> 
> Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
> “cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
> here.

Okay. Will do.

Thanks.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-10  2:41       ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:41 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Russell King - ARM Linux, kvm, linux-mm, Gleb Natapov,
	Greg Kroah-Hartman, Alexander Graf, kvm-ppc, linux-kernel,
	Minchan Kim, Paul Mackerras, Aneesh Kumar K.V, Paolo Bonzini,
	Andrew Morton, linuxppc-dev, linux-arm-kernel, Marek Szyprowski

On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Some small comments below, but in general
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>

Hello, Michal.

Thanks!

> 
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> 
> Superfluous empty comment line.
> 
> Also, I'm not certain whether this copyright notice is appropriate here,
> but that's another story.

Yeah, I will remove the copyright notice from the .h file.

> 
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> 
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> 
> > +static int __init cma_activate_area(struct cma *cma)
> > +{
> > +	int max_bitmapno = cma_bitmap_max_no(cma);
> > +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> > +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> > +	unsigned i = cma->count >> pageblock_order;
> > +	struct zone *zone;
> > +
> > +	pr_debug("%s()\n", __func__);
> > +	if (!cma->count)
> > +		return 0;
> 
> Alternatively:
> 
> +	if (!i)
> +		return 0;

I prefer cma->count over i, since it conveys its meaning by itself.

> > +
> > +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > +	if (!cma->bitmap)
> > +		return -ENOMEM;
> > +
> > +	WARN_ON_ONCE(!pfn_valid(pfn));
> > +	zone = page_zone(pfn_to_page(pfn));
> > +
> > +	do {
> > +		unsigned j;
> > +
> > +		base_pfn = pfn;
> > +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > +			WARN_ON_ONCE(!pfn_valid(pfn));
> > +			/*
> > +			 * alloc_contig_range requires the pfn range
> > +			 * specified to be in the same zone. Make this
> > +			 * simple by forcing the entire CMA resv range
> > +			 * to be in the same zone.
> > +			 */
> > +			if (page_zone(pfn_to_page(pfn)) != zone)
> > +				goto err;
> > +		}
> > +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > +	} while (--i);
> > +
> > +	mutex_init(&cma->lock);
> > +	return 0;
> > +
> > +err:
> > +	kfree(cma->bitmap);
> > +	return -EINVAL;
> > +}
> 
> > +static int __init cma_init_reserved_areas(void)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < cma_area_count; i++) {
> > +		int ret = cma_activate_area(&cma_areas[i]);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Or even:
> 
> static int __init cma_init_reserved_areas(void)
> {
> 	int i, ret = 0;
> 	for (i = 0; !ret && i < cma_area_count; ++i)
> 		ret = cma_activate_area(&cma_areas[i]);
> 	return ret;
> }

I think the original implementation is better, since it seems
more readable to me.

> > +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma)
> > +{
> > +	struct cma *cma = &cma_areas[cma_area_count];
> 
> Perhaps it would make sense to move this initialisation to the far end
> of this function?

Yes, I will move it down.

> > +	int ret = 0;
> > +
> > +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> > +			__func__, (unsigned long)size, (unsigned long)base,
> > +			(unsigned long)limit, (unsigned long)alignment);
> > +
> > +	/* Sanity checks */
> > +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> > +		pr_err("Not enough slots for CMA reserved regions!\n");
> > +		return -ENOSPC;
> > +	}
> > +
> > +	if (!size)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Sanitise input arguments.
> > +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> > +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> > +	 * and CMA property will be broken.
> > +	 */
> > +	alignment >>= PAGE_SHIFT;
> > +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> > +						(int)alignment);
> > +	base = ALIGN(base, alignment);
> > +	size = ALIGN(size, alignment);
> > +	limit &= ~(alignment - 1);
> > +	/* size should be aligned with bitmap_shift */
> > +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
> 
> cma->bitmap_shift is not yet initialised, thus the above line should be:
> 
> 	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

Yes, I will fix it.

> > +
> > +	/* Reserve memory */
> > +	if (base && fixed) {
> > +		if (memblock_is_region_reserved(base, size) ||
> > +		    memblock_reserve(base, size) < 0) {
> > +			ret = -EBUSY;
> > +			goto err;
> > +		}
> > +	} else {
> > +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> > +							limit);
> > +		if (!addr) {
> > +			ret = -ENOMEM;
> > +			goto err;
> > +		} else {
> > +			base = addr;
> > +		}
> > +	}
> > +
> > +	/*
> > +	 * Each reserved area must be initialised later, when more kernel
> > +	 * subsystems (like slab allocator) are available.
> > +	 */
> > +	cma->base_pfn = PFN_DOWN(base);
> > +	cma->count = size >> PAGE_SHIFT;
> > +	cma->bitmap_shift = bitmap_shift;
> > +	*res_cma = cma;
> > +	cma_area_count++;
> > +
> > +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> > +		(unsigned long)base);
> 
> Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
> “cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
> here.

Okay. Will do.

Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-10  2:41       ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:41 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Some small comments below, but in general
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>

Hello, Michal.

Thanks!

> 
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> 
> Superfluous empty comment line.
> 
> Also, I'm not certain whether this copyright notice is appropriate here,
> but that's another story.

Yeah, I will remove the copyright notice from the .h file.

> 
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> 
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> 
> > +static int __init cma_activate_area(struct cma *cma)
> > +{
> > +	int max_bitmapno = cma_bitmap_max_no(cma);
> > +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> > +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> > +	unsigned i = cma->count >> pageblock_order;
> > +	struct zone *zone;
> > +
> > +	pr_debug("%s()\n", __func__);
> > +	if (!cma->count)
> > +		return 0;
> 
> Alternatively:
> 
> +	if (!i)
> +		return 0;

I prefer cma->count over i, since it conveys its meaning by itself.

> > +
> > +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > +	if (!cma->bitmap)
> > +		return -ENOMEM;
> > +
> > +	WARN_ON_ONCE(!pfn_valid(pfn));
> > +	zone = page_zone(pfn_to_page(pfn));
> > +
> > +	do {
> > +		unsigned j;
> > +
> > +		base_pfn = pfn;
> > +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > +			WARN_ON_ONCE(!pfn_valid(pfn));
> > +			/*
> > +			 * alloc_contig_range requires the pfn range
> > +			 * specified to be in the same zone. Make this
> > +			 * simple by forcing the entire CMA resv range
> > +			 * to be in the same zone.
> > +			 */
> > +			if (page_zone(pfn_to_page(pfn)) != zone)
> > +				goto err;
> > +		}
> > +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > +	} while (--i);
> > +
> > +	mutex_init(&cma->lock);
> > +	return 0;
> > +
> > +err:
> > +	kfree(cma->bitmap);
> > +	return -EINVAL;
> > +}
> 
> > +static int __init cma_init_reserved_areas(void)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < cma_area_count; i++) {
> > +		int ret = cma_activate_area(&cma_areas[i]);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Or even:
> 
> static int __init cma_init_reserved_areas(void)
> {
> 	int i, ret = 0;
> 	for (i = 0; !ret && i < cma_area_count; ++i)
> 		ret = cma_activate_area(&cma_areas[i]);
> 	return ret;
> }

I think the original implementation is better, since it seems
more readable to me.

> > +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma)
> > +{
> > +	struct cma *cma = &cma_areas[cma_area_count];
> 
> Perhaps it would make sense to move this initialisation to the far end
> of this function?

Yes, I will move it down.

> > +	int ret = 0;
> > +
> > +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> > +			__func__, (unsigned long)size, (unsigned long)base,
> > +			(unsigned long)limit, (unsigned long)alignment);
> > +
> > +	/* Sanity checks */
> > +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> > +		pr_err("Not enough slots for CMA reserved regions!\n");
> > +		return -ENOSPC;
> > +	}
> > +
> > +	if (!size)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Sanitise input arguments.
> > +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> > +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> > +	 * and CMA property will be broken.
> > +	 */
> > +	alignment >>= PAGE_SHIFT;
> > +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> > +						(int)alignment);
> > +	base = ALIGN(base, alignment);
> > +	size = ALIGN(size, alignment);
> > +	limit &= ~(alignment - 1);
> > +	/* size should be aligned with bitmap_shift */
> > +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
> 
> cma->bitmap_shift is not yet initialised, thus the above line should be:
> 
> 	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

Yes, I will fix it.

> > +
> > +	/* Reserve memory */
> > +	if (base && fixed) {
> > +		if (memblock_is_region_reserved(base, size) ||
> > +		    memblock_reserve(base, size) < 0) {
> > +			ret = -EBUSY;
> > +			goto err;
> > +		}
> > +	} else {
> > +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> > +							limit);
> > +		if (!addr) {
> > +			ret = -ENOMEM;
> > +			goto err;
> > +		} else {
> > +			base = addr;
> > +		}
> > +	}
> > +
> > +	/*
> > +	 * Each reserved area must be initialised later, when more kernel
> > +	 * subsystems (like slab allocator) are available.
> > +	 */
> > +	cma->base_pfn = PFN_DOWN(base);
> > +	cma->count = size >> PAGE_SHIFT;
> > +	cma->bitmap_shift = bitmap_shift;
> > +	*res_cma = cma;
> > +	cma_area_count++;
> > +
> > +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> > +		(unsigned long)base);
> 
> Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
> “cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
> here.

Okay. Will do.

Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
@ 2014-06-10  2:41       ` Joonsoo Kim
  0 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:41 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Tue, Jun 03, 2014 at 08:56:00AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Some small comments below, but in general
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>

Hello, Michal.

Thanks!

> 
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> 
> Superfluous empty comment line.
> 
> Also, I'm not certain whether this copyright notice is appropriate here,
> but that's another story.

Yeah, I will remove the copyright notice from the .h file.

> 
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> 
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> 
> > +static int __init cma_activate_area(struct cma *cma)
> > +{
> > +	int max_bitmapno = cma_bitmap_max_no(cma);
> > +	int bitmap_size = BITS_TO_LONGS(max_bitmapno) * sizeof(long);
> > +	unsigned long base_pfn = cma->base_pfn, pfn = base_pfn;
> > +	unsigned i = cma->count >> pageblock_order;
> > +	struct zone *zone;
> > +
> > +	pr_debug("%s()\n", __func__);
> > +	if (!cma->count)
> > +		return 0;
> 
> Alternatively:
> 
> +	if (!i)
> +		return 0;

I prefer cma->count over i, since it conveys its meaning by itself.

> > +
> > +	cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > +	if (!cma->bitmap)
> > +		return -ENOMEM;
> > +
> > +	WARN_ON_ONCE(!pfn_valid(pfn));
> > +	zone = page_zone(pfn_to_page(pfn));
> > +
> > +	do {
> > +		unsigned j;
> > +
> > +		base_pfn = pfn;
> > +		for (j = pageblock_nr_pages; j; --j, pfn++) {
> > +			WARN_ON_ONCE(!pfn_valid(pfn));
> > +			/*
> > +			 * alloc_contig_range requires the pfn range
> > +			 * specified to be in the same zone. Make this
> > +			 * simple by forcing the entire CMA resv range
> > +			 * to be in the same zone.
> > +			 */
> > +			if (page_zone(pfn_to_page(pfn)) != zone)
> > +				goto err;
> > +		}
> > +		init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > +	} while (--i);
> > +
> > +	mutex_init(&cma->lock);
> > +	return 0;
> > +
> > +err:
> > +	kfree(cma->bitmap);
> > +	return -EINVAL;
> > +}
> 
> > +static int __init cma_init_reserved_areas(void)
> > +{
> > +	int i;
> > +
> > +	for (i = 0; i < cma_area_count; i++) {
> > +		int ret = cma_activate_area(&cma_areas[i]);
> > +
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	return 0;
> > +}
> 
> Or even:
> 
> static int __init cma_init_reserved_areas(void)
> {
> 	int i, ret = 0;
> 	for (i = 0; !ret && i < cma_area_count; ++i)
> 		ret = cma_activate_area(&cma_areas[i]);
> 	return ret;
> }

I think the original implementation is better, since it seems
more readable to me.

> > +int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma)
> > +{
> > +	struct cma *cma = &cma_areas[cma_area_count];
> 
> Perhaps it would make sense to move this initialisation to the far end
> of this function?

Yes, I will move it down.

> > +	int ret = 0;
> > +
> > +	pr_debug("%s(size %lx, base %08lx, limit %08lx, alignment %08lx)\n",
> > +			__func__, (unsigned long)size, (unsigned long)base,
> > +			(unsigned long)limit, (unsigned long)alignment);
> > +
> > +	/* Sanity checks */
> > +	if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> > +		pr_err("Not enough slots for CMA reserved regions!\n");
> > +		return -ENOSPC;
> > +	}
> > +
> > +	if (!size)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Sanitise input arguments.
> > +	 * CMA area should be at least MAX_ORDER - 1 aligned. Otherwise,
> > +	 * CMA area could be merged into other MIGRATE_TYPE by buddy mechanism
> > +	 * and CMA property will be broken.
> > +	 */
> > +	alignment >>= PAGE_SHIFT;
> > +	alignment = PAGE_SIZE << max3(MAX_ORDER - 1, pageblock_order,
> > +						(int)alignment);
> > +	base = ALIGN(base, alignment);
> > +	size = ALIGN(size, alignment);
> > +	limit &= ~(alignment - 1);
> > +	/* size should be aligned with bitmap_shift */
> > +	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << cma->bitmap_shift));
> 
> cma->bitmap_shift is not yet initialised, thus the above line should be:
> 
> 	BUG_ON(!IS_ALIGNED(size >> PAGE_SHIFT, 1 << bitmap_shift));

Yes, I will fix it.

> > +
> > +	/* Reserve memory */
> > +	if (base && fixed) {
> > +		if (memblock_is_region_reserved(base, size) ||
> > +		    memblock_reserve(base, size) < 0) {
> > +			ret = -EBUSY;
> > +			goto err;
> > +		}
> > +	} else {
> > +		phys_addr_t addr = memblock_alloc_range(size, alignment, base,
> > +							limit);
> > +		if (!addr) {
> > +			ret = -ENOMEM;
> > +			goto err;
> > +		} else {
> > +			base = addr;
> > +		}
> > +	}
> > +
> > +	/*
> > +	 * Each reserved area must be initialised later, when more kernel
> > +	 * subsystems (like slab allocator) are available.
> > +	 */
> > +	cma->base_pfn = PFN_DOWN(base);
> > +	cma->count = size >> PAGE_SHIFT;
> > +	cma->bitmap_shift = bitmap_shift;
> > +	*res_cma = cma;
> > +	cma_area_count++;
> > +
> > +	pr_info("CMA: reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> > +		(unsigned long)base);
> 
> Doesn't this message end up being: “cma: CMA: reserved …”? pr_fmt adds
> “cma:” at the beginning, doesn't it?  So we should probably drop “CMA:”
> here.

Okay. Will do.

Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
  2014-06-03  7:00     ` Michal Nazarewicz
                         ` (2 preceding siblings ...)
  (?)
@ 2014-06-10  2:49       ` Joonsoo Kim
  -1 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:49 UTC (permalink / raw)
  To: Michal Nazarewicz
  Cc: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Tue, Jun 03, 2014 at 09:00:48AM +0200, Michal Nazarewicz wrote:
> On Tue, Jun 03 2014, Joonsoo Kim wrote:
> > Now, we have general CMA reserved area management framework,
> > so use it for future maintainability. There is no functional change.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> Acked-by: Michal Nazarewicz <mina86@mina86.com>
> 
> > diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> > index dfb1dc9..ecb85ac 100644
> > --- a/include/linux/dma-contiguous.h
> > +++ b/include/linux/dma-contiguous.h
> > @@ -53,9 +53,10 @@
> >  
> >  #ifdef __KERNEL__
> >  
> > +#include <linux/device.h>
> > +
> 
> Why is this suddenly required?
> 
> >  struct cma;
> >  struct page;
> > -struct device;
> >  
> >  #ifdef CONFIG_DMA_CMA
> 

Without including device.h, a build failure occurs.
In dma-contiguous.h we try to access dev->cma_area, so we need
device.h. In the past we got it by luck via swap.h in
drivers/base/dma-contiguous.c: swap.h includes node.h, and node.h
includes device.h, so we were happy. But in this patch I remove
'#include <linux/swap.h>', so we need to include device.h explicitly.
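
To illustrate (a sketch of the kind of inline helper that forces the
dependency, based on the dev->cma_area access mentioned above; the exact
body in the header may differ):

	#include <linux/device.h>	/* the helper below dereferences struct device */

	static inline struct cma *dev_get_cma_area(struct device *dev)
	{
		if (dev && dev->cma_area)
			return dev->cma_area;
		return dma_contiguous_default_area;
	}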

Thanks.

^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 1/3] CMA: generalize CMA reserved area management functionality
  2014-06-05 17:39     ` Aneesh Kumar K.V
                         ` (2 preceding siblings ...)
  (?)
@ 2014-06-10  3:01       ` Joonsoo Kim
  -1 siblings, 0 replies; 74+ messages in thread
From: Joonsoo Kim @ 2014-06-10  2:57 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: Andrew Morton, Marek Szyprowski, Michal Nazarewicz, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Thu, Jun 05, 2014 at 11:09:05PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim <iamjoonsoo.kim@lge.com> writes:
> 
> > Currently, there are two users on CMA functionality, one is the DMA
> > subsystem and the other is the kvm on powerpc. They have their own code
> > to manage CMA reserved area even if they looks really similar.
> > From my guess, it is caused by some needs on bitmap management. Kvm side
> > wants to maintain bitmap not for 1 page, but for more size. Eventually it
> > use bitmap where one bit represents 64 pages.
> >
> > When I implement CMA related patches, I should change those two places
> > to apply my change and it seem to be painful to me. I want to change
> > this situation and reduce future code management overhead through
> > this patch.
> >
> > This change could also help developer who want to use CMA in their
> > new feature development, since they can use CMA easily without
> > copying & pasting this reserved area management code.
> >
> > Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> 
> The way patches are split makes it difficult to review. Will it be
> possible to make changes against one implementation and then move that
> to generic code? That helps in finding out what exactly changed.
> 

Hello,

You are right! I will respin this patchset in the form you
recommended.

> >
> > diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> > index 00e13ce..b3fe1cc 100644
> > --- a/drivers/base/Kconfig
> > +++ b/drivers/base/Kconfig
> > @@ -283,7 +283,7 @@ config CMA_ALIGNMENT
> >
> >  	  If unsure, leave the default value "8".
> >
> > -config CMA_AREAS
> > +config DMA_CMA_AREAS
> >  	int "Maximum count of the CMA device-private areas"
> >  	default 7
> >  	help
> 
> For example, the above can be a separate patch along with further
> DMA-related cleanups.

Okay.

> 
> > diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> > index 83969f8..48cdac8 100644
> > --- a/drivers/base/dma-contiguous.c
> > +++ b/drivers/base/dma-contiguous.c
> > @@ -186,7 +186,7 @@ static int __init cma_activate_area(struct cma *cma)
> >  	return 0;
> >  }
> >
> > -static struct cma cma_areas[MAX_CMA_AREAS];
> > +static struct cma cma_areas[MAX_DMA_CMA_AREAS];
> >  static unsigned cma_area_count;
> >
> >  static int __init cma_init_reserved_areas(void)
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > new file mode 100644
> > index 0000000..60ba06f
> > --- /dev/null
> > +++ b/include/linux/cma.h
> > @@ -0,0 +1,28 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + *
> > + */
> > +
> > +#ifndef __CMA_H__
> > +#define __CMA_H__
> > +
> > +struct cma;
> > +
> > +extern struct page *cma_alloc(struct cma *cma, unsigned long count,
> > +				unsigned long align);
> > +extern bool cma_release(struct cma *cma, struct page *pages,
> > +				unsigned long count);
> > +extern int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
> > +				phys_addr_t limit, phys_addr_t alignment,
> > +				unsigned long bitmap_shift, bool fixed,
> > +				struct cma **res_cma);
> > +#endif
> > diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
> > index 772eab5..dfb1dc9 100644
> > --- a/include/linux/dma-contiguous.h
> > +++ b/include/linux/dma-contiguous.h
> > @@ -63,7 +63,7 @@ struct device;
> >   * There is always at least global CMA area and a few optional device
> >   * private areas configured in kernel .config.
> >   */
> > -#define MAX_CMA_AREAS	(1 + CONFIG_CMA_AREAS)
> > +#define MAX_DMA_CMA_AREAS  (1 + CONFIG_DMA_CMA_AREAS)
> >
> >  extern struct cma *dma_contiguous_default_area;
> >
> > @@ -123,7 +123,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
> >
> >  #else
> >
> > -#define MAX_CMA_AREAS	(0)
> > +#define MAX_DMA_CMA_AREAS	(0)
> >
> >  static inline struct cma *dev_get_cma_area(struct device *dev)
> >  {
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index 7511b4a..0877ddc 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -515,6 +515,17 @@ config CMA_DEBUG
> >  	  processing calls such as dma_alloc_from_contiguous().
> >  	  This option does not affect warning and error messages.
> >
> > +config CMA_AREAS
> > +	int "Maximum count of the CMA areas"
> > +	depends on CMA
> > +	default 7
> > +	help
> > +	  CMA allows to create CMA areas for particular purpose, mainly,
> > +	  used as device private area. This parameter sets the maximum
> > +	  number of CMA area in the system.
> > +
> > +	  If unsure, leave the default value "7".
> > +
> 
> If we already have 7 declare-contiguous requests, will a kvm cma allocation request fail?

Yes. If you need more, you can increase it. :)
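
For reference, a minimal sketch of how the limit bites, assuming the
generic mm/cma.c keeps a cma_areas[]/cma_area_count pair like the DMA
code quoted above; the array bound and the error path shown here are
assumptions:

	static struct cma cma_areas[CONFIG_CMA_AREAS];
	static unsigned cma_area_count;

	int __init cma_declare_contiguous(phys_addr_t size, phys_addr_t base,
				phys_addr_t limit, phys_addr_t alignment,
				unsigned long bitmap_shift, bool fixed,
				struct cma **res_cma)
	{
		struct cma *cma;

		/*
		 * All CONFIG_CMA_AREAS slots taken: the next caller
		 * (for example the kvm reservation) gets an error back.
		 */
		if (cma_area_count == ARRAY_SIZE(cma_areas)) {
			pr_err("Not enough slots for CMA reserved regions!\n");
			return -ENOSPC;
		}
		cma = &cma_areas[cma_area_count++];
		/* reservation and field initialisation as in the quoted hunks */
		*res_cma = cma;
		return 0;
	}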

> >  config ZBUD
> >  	tristate
> >  	default n
> > diff --git a/mm/Makefile b/mm/Makefile
> > index 1eaa70b..bc0422b 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -62,3 +62,4 @@ obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
> >  obj-$(CONFIG_ZBUD)	+= zbud.o
> >  obj-$(CONFIG_ZSMALLOC)	+= zsmalloc.o
> >  obj-$(CONFIG_GENERIC_EARLY_IOREMAP) += early_ioremap.o
> > +obj-$(CONFIG_CMA)	+= cma.o
> > diff --git a/mm/cma.c b/mm/cma.c
> > new file mode 100644
> > index 0000000..0dae88d
> > --- /dev/null
> > +++ b/mm/cma.c
> > @@ -0,0 +1,329 @@
> > +/*
> > + * Contiguous Memory Allocator
> > + *
> > + * Copyright (c) 2010-2011 by Samsung Electronics.
> > + * Copyright IBM Corporation, 2013
> > + * Copyright LG Electronics Inc., 2014
> > + * Written by:
> > + *	Marek Szyprowski <m.szyprowski@samsung.com>
> > + *	Michal Nazarewicz <mina86@mina86.com>
> > + *	Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> > + *	Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License as
> > + * published by the Free Software Foundation; either version 2 of the
> > + * License or (at your optional) any later version of the license.
> > + */
> > +
> > +#define pr_fmt(fmt) "cma: " fmt
> > +
> > +#ifdef CONFIG_CMA_DEBUG
> > +#ifndef DEBUG
> > +#  define DEBUG
> > +#endif
> > +#endif
> > +
> > +#include <linux/memblock.h>
> > +#include <linux/err.h>
> > +#include <linux/mm.h>
> > +#include <linux/mutex.h>
> > +#include <linux/sizes.h>
> > +#include <linux/slab.h>
> > +
> > +struct cma {
> > +	unsigned long	base_pfn;
> > +	unsigned long	count;
> > +	unsigned long	*bitmap;
> > +	unsigned long	bitmap_shift;
> 
> I guess this is added to accommodate the kvm-specific alloc chunks.
> Maybe you should do this as a patch against the kvm implementation and
> then move the code to generic?

Yes, this is for the kvm-specific alloc chunks. I will consider which one
is better as the base implementation and make patches against it.
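
For reference, a minimal sketch of what the per-area bitmap_shift buys,
using the fields from the quoted struct cma; the helper names below are
hypothetical:

	/*
	 * One bitmap bit covers (1 << bitmap_shift) pages: the DMA user
	 * keeps bitmap_shift == 0 (one bit per page), while the kvm user
	 * would pass 6 so that one bit tracks 64 pages.
	 */
	static unsigned long cma_bitmap_maxno(struct cma *cma)
	{
		return cma->count >> cma->bitmap_shift;
	}

	static unsigned long cma_bitmap_pages_to_bits(struct cma *cma,
						      unsigned long pages)
	{
		return ALIGN(pages, 1UL << cma->bitmap_shift) >> cma->bitmap_shift;
	}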

Thanks.


^ permalink raw reply	[flat|nested] 74+ messages in thread

* Re: [RFC PATCH 2/3] DMA, CMA: use general CMA reserved area management framework
  2014-06-10  2:49       ` Joonsoo Kim
                           ` (2 preceding siblings ...)
  (?)
@ 2014-06-11  8:24         ` Michal Nazarewicz
  -1 siblings, 0 replies; 74+ messages in thread
From: Michal Nazarewicz @ 2014-06-11  8:24 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: Andrew Morton, Aneesh Kumar K.V, Marek Szyprowski, Minchan Kim,
	Russell King - ARM Linux, Greg Kroah-Hartman, Paolo Bonzini,
	Gleb Natapov, Alexander Graf, Benjamin Herrenschmidt,
	Paul Mackerras, linux-mm, linux-kernel, linux-arm-kernel, kvm,
	kvm-ppc, linuxppc-dev

On Tue, Jun 10 2014, Joonsoo Kim <iamjoonsoo.kim@lge.com> wrote:
> Without including device.h, a build failure occurs. dma-contiguous.h
> dereferences dev->cma_area, so it needs device.h. In the past we picked
> it up indirectly through swap.h in drivers/base/dma-contiguous.c:
> swap.h includes node.h, and node.h includes device.h, so the build
> happened to work. This patch removes 'include <linux/swap.h>', so
> device.h now has to be included explicitly.

Ack.

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

^ permalink raw reply	[flat|nested] 74+ messages in thread
