From: Hui Zhu <zhuhui@xiaomi.com>
To: <m.szyprowski@samsung.com>, <mina86@mina86.com>, <akpm@linux-foundation.org>, <iamjoonsoo.kim@lge.com>, <aneesh.kumar@linux.vnet.ibm.com>, <pintu.k@samsung.com>, <weijie.yang@samsung.com>, <mgorman@suse.de>, <hannes@cmpxchg.org>, <riel@redhat.com>, <vbabka@suse.cz>, <laurent.pinchart+renesas@ideasonboard.com>, <rientjes@google.com>, <sasha.levin@oracle.com>, <liuweixing@xiaomi.com>, <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>
Cc: <teawater@gmail.com>, Hui Zhu <zhuhui@xiaomi.com>
Subject: [PATCH 2/3] CMA: Fix the issue that nr_try_movable only counts MIGRATE_MOVABLE memory
Date: Thu, 25 Dec 2014 17:43:27 +0800
Message-ID: <1419500608-11656-3-git-send-email-zhuhui@xiaomi.com>
In-Reply-To: <1419500608-11656-1-git-send-email-zhuhui@xiaomi.com>

One of my platforms, which uses Joonsoo's aggressive-CMA patch series [1],
has a device that allocates a lot of MIGRATE_UNMOVABLE memory from a zone
while it is working.  When this device is active, the memory status of
that zone becomes unbalanced: most of the CMA area stays unallocated while
most of the normal memory is allocated.

The cause is this check in __rmqueue:

	if (IS_ENABLED(CONFIG_CMA) &&
	    migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
		page = __rmqueue_cma(zone, order);

Only MIGRATE_MOVABLE allocations are counted against nr_try_movable in
__rmqueue_cma; allocations of the other migrate types are not.  A device
that allocates a lot of MIGRATE_UNMOVABLE memory therefore skews the
movable/CMA balance of the whole zone.

This patch changes __rmqueue so that nr_try_movable accounts for all
allocations from normal (non-CMA) memory.
[1] https://lkml.org/lkml/2014/5/28/64

Signed-off-by: Hui Zhu <zhuhui@xiaomi.com>
Signed-off-by: Weixing Liu <liuweixing@xiaomi.com>
---
 mm/page_alloc.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8d9f03..a5bbc38 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1301,28 +1301,23 @@ static struct page *__rmqueue_cma(struct zone *zone, unsigned int order)
 {
 	struct page *page;
 
-	if (zone->nr_try_movable > 0)
-		goto alloc_movable;
+	if (zone->nr_try_cma <= 0) {
+		/* Reset counter */
+		zone->nr_try_movable = zone->max_try_movable;
+		zone->nr_try_cma = zone->max_try_cma;
 
-	if (zone->nr_try_cma > 0) {
-		/* Okay. Now, we can try to allocate the page from cma region */
-		zone->nr_try_cma -= 1 << order;
-		page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
-
-		/* CMA pages can vanish through CMA allocation */
-		if (unlikely(!page && order == 0))
-			zone->nr_try_cma = 0;
-
-		return page;
+		return NULL;
 	}
 
-	/* Reset counter */
-	zone->nr_try_movable = zone->max_try_movable;
-	zone->nr_try_cma = zone->max_try_cma;
+	/* Okay. Now, we can try to allocate the page from cma region */
+	zone->nr_try_cma -= 1 << order;
+	page = __rmqueue_smallest(zone, order, MIGRATE_CMA);
 
-alloc_movable:
-	zone->nr_try_movable -= 1 << order;
-	return NULL;
+	/* CMA pages can vanish through CMA allocation */
+	if (unlikely(!page && order == 0))
+		zone->nr_try_cma = 0;
+
+	return page;
 }
 #endif
 
@@ -1335,9 +1330,13 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page = NULL;
 
-	if (IS_ENABLED(CONFIG_CMA) &&
-	    migratetype == MIGRATE_MOVABLE && zone->managed_cma_pages)
-		page = __rmqueue_cma(zone, order);
+	if (IS_ENABLED(CONFIG_CMA) && zone->managed_cma_pages) {
+		if (migratetype == MIGRATE_MOVABLE
+		    && zone->nr_try_movable <= 0)
+			page = __rmqueue_cma(zone, order);
+		else
+			zone->nr_try_movable -= 1 << order;
+	}
 
 retry_reserve:
 	if (!page)
-- 
1.9.1