On 12 Jan 2022, at 5:54, David Hildenbrand wrote:

> On 05.01.22 22:47, Zi Yan wrote:
>> From: Zi Yan
>>
>> This is done in addition to MIGRATE_ISOLATE pageblock merge avoidance.
>> It prepares for the upcoming removal of the MAX_ORDER-1 alignment
>> requirement for CMA and alloc_contig_range().
>>
>> MIGRATE_HIGHATOMIC should not merge with other migratetypes like
>> MIGRATE_ISOLATE and MIGRATE_CMA[1], so this commit prevents that too.
>> Also add MIGRATE_HIGHATOMIC to the fallbacks array for completeness.
>>
>> [1] https://lore.kernel.org/linux-mm/20211130100853.GP3366@techsingularity.net/
>>
>> Signed-off-by: Zi Yan
>> ---
>>  include/linux/mmzone.h |  6 ++++++
>>  mm/page_alloc.c        | 28 ++++++++++++++++++----------
>>  2 files changed, 24 insertions(+), 10 deletions(-)
>>
>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>> index aed44e9b5d89..0aa549653e4e 100644
>> --- a/include/linux/mmzone.h
>> +++ b/include/linux/mmzone.h
>> @@ -83,6 +83,12 @@ static inline bool is_migrate_movable(int mt)
>>  	return is_migrate_cma(mt) || mt == MIGRATE_MOVABLE;
>>  }
>>
>> +/* See fallbacks[MIGRATE_TYPES][3] in page_alloc.c */
>> +static inline bool migratetype_has_fallback(int mt)
>> +{
>> +	return mt < MIGRATE_PCPTYPES;
>> +}
>> +
>>  #define for_each_migratetype_order(order, type) \
>>  	for (order = 0; order < MAX_ORDER; order++) \
>>  		for (type = 0; type < MIGRATE_TYPES; type++)
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 8dd6399bafb5..5193c953dbf8 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1042,6 +1042,12 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>>  	return page_is_buddy(higher_page, higher_buddy, order + 1);
>>  }
>>
>> +static inline bool has_non_fallback_pageblock(struct zone *zone)
>> +{
>> +	return has_isolate_pageblock(zone) || zone_cma_pages(zone) != 0 ||
>> +		zone->nr_reserved_highatomic != 0;
>> +}
>
> Due to zone_cma_pages(), the unlikely() below will be very wrong on many
> setups. Previously, isolation really was a corner case. CMA and
> highatomic are less of a corner case ...

Got it.

>
> I'm not even sure if this check is worth having around anymore at all,
> or if it would be easier and cheaper to just always check both
> migration types unconditionally. Would certainly simplify the code.

I will remove the if check below, since, as you said, it is no longer a
corner case once the highatomic and CMA checks are added.

>
> Side note: we actually care about has_free_non_fallback_pageblock(), we
> can only merge with free pageblocks. But that might not necessarily be
> cheaper to test/track/check.
>

I agree that what we actually want is free pageblocks of these
migratetypes, but tracking them is nontrivial.

>> +
>>  /*
>>   * Freeing function for a buddy system allocator.
>>   *
>> @@ -1117,14 +1123,15 @@ static inline void __free_one_page(struct page *page,
>>  	}
>>  	if (order < MAX_ORDER - 1) {
>>  		/* If we are here, it means order is >= pageblock_order.
>> -		 * We want to prevent merge between freepages on isolate
>> -		 * pageblock and normal pageblock. Without this, pageblock
>> -		 * isolation could cause incorrect freepage or CMA accounting.
>> +		 * We want to prevent merge between freepages on pageblock
>> +		 * without fallbacks and normal pageblock. Without this,
>> +		 * pageblock isolation could cause incorrect freepage or CMA
>> +		 * accounting or HIGHATOMIC accounting.
>>  		 *
>>  		 * We don't want to hit this code for the more frequent
>>  		 * low-order merging.
>>  		 */
>> -		if (unlikely(has_isolate_pageblock(zone))) {
>> +		if (unlikely(has_non_fallback_pageblock(zone))) {
>>  			int buddy_mt;
>>
>>  			buddy_pfn = __find_buddy_pfn(pfn, order);
>> @@ -1132,8 +1139,8 @@ static inline void __free_one_page(struct page *page,
>>  			buddy_mt = get_pageblock_migratetype(buddy);
>>
>>  			if (migratetype != buddy_mt
>> -					&& (is_migrate_isolate(migratetype) ||
>> -						is_migrate_isolate(buddy_mt)))
>> +					&& (!migratetype_has_fallback(migratetype) ||
>> +						!migratetype_has_fallback(buddy_mt)))
>>  				goto done_merging;
>>  		}
>>  		max_order = order + 1;
>> @@ -2484,6 +2491,7 @@ static int fallbacks[MIGRATE_TYPES][3] = {
>>  	[MIGRATE_UNMOVABLE]   = { MIGRATE_RECLAIMABLE, MIGRATE_MOVABLE,   MIGRATE_TYPES },
>>  	[MIGRATE_MOVABLE]     = { MIGRATE_RECLAIMABLE, MIGRATE_UNMOVABLE, MIGRATE_TYPES },
>>  	[MIGRATE_RECLAIMABLE] = { MIGRATE_UNMOVABLE,   MIGRATE_MOVABLE,   MIGRATE_TYPES },
>> +	[MIGRATE_HIGHATOMIC]  = { MIGRATE_TYPES }, /* Never used */
>>  #ifdef CONFIG_CMA
>>  	[MIGRATE_CMA]         = { MIGRATE_TYPES }, /* Never used */
>>  #endif
>> @@ -2795,8 +2803,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>>
>>  	/* Yoink! */
>>  	mt = get_pageblock_migratetype(page);
>> -	if (!is_migrate_highatomic(mt) && !is_migrate_isolate(mt)
>> -	    && !is_migrate_cma(mt)) {
>> +	/* Only reserve normal pageblock */
>> +	if (migratetype_has_fallback(mt)) {
>>  		zone->nr_reserved_highatomic += pageblock_nr_pages;
>>  		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
>>  		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
>> @@ -3545,8 +3553,8 @@ int __isolate_free_page(struct page *page, unsigned int order)
>>  		struct page *endpage = page + (1 << order) - 1;
>>  		for (; page < endpage; page += pageblock_nr_pages) {
>>  			int mt = get_pageblock_migratetype(page);
>> -			if (!is_migrate_isolate(mt) && !is_migrate_cma(mt)
>> -			    && !is_migrate_highatomic(mt))
>> +			/* Only change normal pageblock */
>> +			if (migratetype_has_fallback(mt))
>>  				set_pageblock_migratetype(page,
>>  							  MIGRATE_MOVABLE);
>>  		}
>
> That part is a nice cleanup IMHO. Although the "has fallback" part is a
> bit imprecise. "migratetype_is_mergable()" might be a bit clearer.
> Ideally "migratetype_is_mergable_with_other_types()". Can we come up
> with a nice name for that?

Sure. Will change the name.
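To make that concrete, here is a rough, untested sketch of the direction
I have in mind for the next version, combining the rename with dropping
the unlikely() gate so both migratetypes are always checked at or above
pageblock_order. The name migratetype_is_mergeable() just follows your
suggestion (exact spelling and placement still open):

/* Sketch only: replaces migratetype_has_fallback(), name not final.
 * See fallbacks[MIGRATE_TYPES][3] in page_alloc.c.
 */
static inline bool migratetype_is_mergeable(int mt)
{
        return mt < MIGRATE_PCPTYPES;
}

and in __free_one_page(), without the has_non_fallback_pageblock() check:

        if (order < MAX_ORDER - 1) {
                int buddy_mt;

                buddy_pfn = __find_buddy_pfn(pfn, order);
                buddy = page + (buddy_pfn - pfn);
                buddy_mt = get_pageblock_migratetype(buddy);

                /* Do not merge across ISOLATE/CMA/HIGHATOMIC pageblocks. */
                if (migratetype != buddy_mt &&
                    (!migratetype_is_mergeable(migratetype) ||
                     !migratetype_is_mergeable(buddy_mt)))
                        goto done_merging;
                max_order = order + 1;
        }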
Thank you for the comments.

--
Best Regards,
Yan, Zi