From: Hui Zhu <zhuhui@xiaomi.com>
To: <m.szyprowski@samsung.com>, <mina86@mina86.com>,
	<akpm@linux-foundation.org>, <iamjoonsoo.kim@lge.com>,
	<aneesh.kumar@linux.vnet.ibm.com>, <pintu.k@samsung.com>,
	<weijie.yang@samsung.com>, <mgorman@suse.de>,
	<hannes@cmpxchg.org>, <riel@redhat.com>, <vbabka@suse.cz>,
	<laurent.pinchart+renesas@ideasonboard.com>,
	<rientjes@google.com>, <sasha.levin@oracle.com>,
	<liuweixing@xiaomi.com>, <linux-kernel@vger.kernel.org>,
	<linux-mm@kvack.org>
Cc: <teawater@gmail.com>, Hui Zhu <zhuhui@xiaomi.com>
Subject: [PATCH 3/3] CMA: Add cma_alloc_counter to make cma_alloc work better if it meets a busy range
Date: Thu, 25 Dec 2014 17:43:28 +0800
Message-ID: <1419500608-11656-4-git-send-email-zhuhui@xiaomi.com>
In-Reply-To: <1419500608-11656-1-git-send-email-zhuhui@xiaomi.com>

In [1], Joonsoo said that cma_alloc_counter is useless because the pageblock
is isolated.
But if alloc_contig_range meets a busy range, it calls undo_isolate_page_range
before moving on to the next range. At that point, __rmqueue_cma can start
allocating CMA memory from that range again.

So I add cma_alloc_counter so that __rmqueue does not call __rmqueue_cma
while cma_alloc is running.

[1] https://lkml.org/lkml/2014/10/24/26
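
For reference, here is a minimal sketch of the window this counter closes.
It is not the exact kernel code (see the diff below): cma_try_each_range()
is a hypothetical helper standing in for the per-range loop in cma_alloc()
that isolates a range, calls alloc_contig_range(), and undoes the isolation
on -EBUSY; __rmqueue_cma() and zone->managed_cma_pages come from the earlier
patches in this series, and __rmqueue_smallest() is the existing buddy
fallback in mm/page_alloc.c.

atomic_t cma_alloc_counter = ATOMIC_INIT(0);

struct page *cma_alloc_sketch(struct cma *cma, int count, unsigned int align)
{
	struct page *page;

	/* While this is non-zero, __rmqueue() keeps away from CMA pageblocks. */
	atomic_inc(&cma_alloc_counter);

	/*
	 * For each candidate range: isolate it, try alloc_contig_range(), and
	 * on -EBUSY un-isolate it before moving to the next range.  That
	 * un-isolated window is exactly where __rmqueue_cma() could otherwise
	 * start handing out the pages we are about to retry on.
	 */
	page = cma_try_each_range(cma, count, align);

	atomic_dec(&cma_alloc_counter);
	return page;
}

/* Fallback side, simplified from the mm/page_alloc.c hunk below. */
static struct page *__rmqueue_sketch(struct zone *zone, unsigned int order,
				     int migratetype)
{
	if (IS_ENABLED(CONFIG_CMA) && zone->managed_cma_pages &&
	    atomic_read(&cma_alloc_counter) == 0)
		return __rmqueue_cma(zone, order);

	return __rmqueue_smallest(zone, order, migratetype);
}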

Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
---
 include/linux/cma.h | 2 ++
 mm/cma.c            | 6 ++++++
 mm/page_alloc.c     | 8 +++++++-
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 9384ba6..155158f 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -26,6 +26,8 @@ extern int __init cma_declare_contiguous(phys_addr_t base,
 extern int cma_init_reserved_mem(phys_addr_t base,
 					phys_addr_t size, int order_per_bit,
 					struct cma **res_cma);
+
+extern atomic_t cma_alloc_counter;
 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align);
 extern bool cma_release(struct cma *cma, struct page *pages, int count);
 #endif
diff --git a/mm/cma.c b/mm/cma.c
index 6707b5d..b63f6be 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -348,6 +348,8 @@ err:
 	return ret;
 }
 
+atomic_t cma_alloc_counter = ATOMIC_INIT(0);
+
 /**
  * cma_alloc() - allocate pages from contiguous area
  * @cma:   Contiguous memory region for which the allocation is performed.
@@ -378,6 +380,8 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 	bitmap_maxno = cma_bitmap_maxno(cma);
 	bitmap_count = cma_bitmap_pages_to_bits(cma, count);
 
+	atomic_inc(&cma_alloc_counter);
+
 	for (;;) {
 		mutex_lock(&cma->lock);
 		bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
@@ -415,6 +419,8 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
 		start = bitmap_no + mask + 1;
 	}
 
+	atomic_dec(&cma_alloc_counter);
+
 	pr_debug("%s(): returned %p\n", __func__, page);
 	return page;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a5bbc38..0622c4c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -66,6 +66,10 @@
 #include <asm/div64.h>
 #include "internal.h"
 
+#ifdef CONFIG_CMA
+#include <linux/cma.h>
+#endif
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1330,7 +1334,9 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page = NULL;
 
-	if (IS_ENABLED(CONFIG_CMA) && zone->managed_cma_pages) {
+	if (IS_ENABLED(CONFIG_CMA)
+	    && zone->managed_cma_pages
+	    && atomic_read(&cma_alloc_counter) == 0) {
 		if (migratetype == MIGRATE_MOVABLE
 		    && zone->nr_try_movable <= 0)
 			page = __rmqueue_cma(zone, order);
-- 
1.9.1


Thread overview: 14+ messages
2014-12-25  9:43 [PATCH 0/3] CMA: Handle the issues of aggressively allocate the Hui Zhu
2014-12-25  9:43 ` [PATCH 1/3] CMA: Fix the bug that CMA's page number is substructed twice Hui Zhu
2014-12-30  4:48   ` Joonsoo Kim
2014-12-30 10:02     ` Hui Zhu
2014-12-25  9:43 ` [PATCH 2/3] CMA: Fix the issue that nr_try_movable just count MIGRATE_MOVABLE memory Hui Zhu
2014-12-25  9:43 ` [PATCH 3/3] CMA: Add cma_alloc_counter to make cma_alloc work better if it meets a busy range Hui Zhu [this message]
2014-12-30  5:00   ` Joonsoo Kim
