[PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
From: Rik van Riel @ 2020-03-06 20:01 UTC
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, kernel-team, Roman Gushchin, Qian Cai,
	Vlastimil Babka, Mel Gorman, Anshuman Khandual

Posting this one for Roman so I can deal with any upstream feedback and
create a v2 if needed, while scratching my head over the next piece of
this puzzle :)

---8<---

From: Roman Gushchin <guro@fb.com>

Currently a CMA area is barely used by the page allocator, because it
is used only as a fallback for movable allocations, and kswapd tries
hard to make sure that the fallback path isn't used.

This results in the system evicting memory and pushing data into swap
while lots of CMA memory is still available. This happens despite the
fact that alloc_contig_range() is perfectly capable of moving any movable
allocations out of the way of a contiguous allocation.

To use the CMA area effectively, let's alter the rules: if the zone
has more free CMA pages than half of the total free pages in the zone,
use CMA pageblocks first and fall back to movable blocks on failure.

Signed-off-by: Roman Gushchin <guro@fb.com>
Co-developed-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
 mm/page_alloc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..0fb3c1719625 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2711,6 +2711,18 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 {
 	struct page *page;
 
+	/*
+	 * Balance movable allocations between regular and CMA areas by
+	 * allocating from CMA when over half of the zone's free memory
+	 * is in the CMA area.
+	 */
+	if (migratetype == MIGRATE_MOVABLE &&
+	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		page = __rmqueue_cma_fallback(zone, order);
+		if (page)
+			return page;
+	}
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
-- 
2.24.1
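
For readers skimming the archive, a minimal standalone sketch of the
decision rule the patch adds may help. This is not kernel code: the
helper name prefer_cma and the sample page counts are illustrative
assumptions, modeling the same "free CMA pages > half of total free
pages" threshold described above.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative userspace model of the heuristic: prefer CMA
 * pageblocks once more than half of a zone's free memory sits
 * in the CMA area; otherwise keep CMA as a fallback only.
 */
static bool prefer_cma(unsigned long free_cma_pages,
		       unsigned long free_pages)
{
	return free_cma_pages > free_pages / 2;
}

int main(void)
{
	/* 600 of 1000 free pages are CMA: allocate from CMA first. */
	printf("%d\n", prefer_cma(600, 1000));	/* prints 1 */

	/* 300 of 1000 free pages are CMA: CMA stays a fallback. */
	printf("%d\n", prefer_cma(300, 1000));	/* prints 0 */

	return 0;
}

In the patch itself this check gates __rmqueue_cma_fallback(), so a
failed CMA allocation still falls through to the regular movable path
via __rmqueue_smallest().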


Thread overview: 24+ messages
2020-03-06 20:01 [PATCH] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Rik van Riel
2020-03-07 22:38 ` Andrew Morton
2020-03-08 13:23   ` Rik van Riel
2020-03-11 17:58     ` Vlastimil Babka
2020-03-11 22:58       ` Roman Gushchin
2020-03-11 23:03         ` Vlastimil Babka
2020-03-11 23:21           ` Roman Gushchin
2020-03-11  8:51 ` Vlastimil Babka
2020-03-11 10:13   ` Joonsoo Kim
2020-03-11 17:41     ` Vlastimil Babka
2020-03-11 17:35   ` Roman Gushchin
2020-03-12  1:41     ` Joonsoo Kim
2020-03-12  2:39       ` Roman Gushchin
2020-03-12  8:56         ` Joonsoo Kim
2020-03-12 17:07           ` Roman Gushchin
2020-03-13  7:44             ` Joonsoo Kim
2020-04-02  2:13       ` Andrew Morton
2020-04-02  2:53         ` Roman Gushchin
2020-04-02  5:43           ` Joonsoo Kim
2020-04-02 19:42             ` Roman Gushchin
2020-04-03  4:34               ` Joonsoo Kim
2020-04-03 17:50                 ` Roman Gushchin
2020-04-02  3:05         ` Roman Gushchin
2020-03-21  0:49 ` Minchan Kim
