From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 20 Apr 2021 19:54:40 -0700
From: akpm@linux-foundation.org
To: david@redhat.com, mhocko@suse.com, mike.kravetz@oracle.com,
 mm-commits@vger.kernel.org, osalvador@suse.de, songmuchun@bytedance.com,
 vbabka@suse.cz
Subject: + mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes.patch added to -mm tree
Message-ID: <20210421025440.HBs-E6RqA%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: mm,compaction: let isolate_migratepages_{range,block} return error codes
has been added to the -mm tree.  Its filename is
     mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Oscar Salvador <osalvador@suse.de>
Subject: mm,compaction: let isolate_migratepages_{range,block} return error codes

Currently, isolate_migratepages_{range,block} and their callers use a
pfn == 0 vs pfn != 0 scheme to let the caller know whether there was any
error during isolation.

This does not work as soon as we need to start reporting different error
codes and make sure we pass them down the chain, so they are properly
interpreted by functions like e.g: alloc_contig_range.
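To make the limitation concrete, the short, self-contained C sketch below
models the old convention in userspace (the names and failure causes are
illustrative stand-ins, not the kernel code): because every failure collapses
to a return value of 0, the caller cannot tell a pending signal apart from
congestion, which is exactly the information a function like
alloc_contig_range needs.

#include <stdio.h>

/* Hypothetical failure causes, for illustration only. */
enum old_failure { OLD_OK = 0, OLD_FAIL_SIGNAL, OLD_FAIL_CONGESTION };

/*
 * Old convention: return the next pfn to scan on success, 0 on any
 * failure.  The cause of the failure never reaches the caller.
 */
static unsigned long old_isolate(unsigned long pfn, unsigned long end,
                                 enum old_failure failure)
{
        if (failure != OLD_OK)
                return 0;       /* signal and congestion both collapse to 0 */
        return end;             /* next pfn that was not scanned */
}

int main(void)
{
        unsigned long next = old_isolate(1024, 2048, OLD_FAIL_SIGNAL);

        if (!next)              /* a signal? congestion? no way to tell */
                printf("isolation failed, cause unknown\n");
        return 0;
}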
Let us rework isolate_migratepages_{range,block} so we can report error
codes.

Since isolate_migratepages_block will stop returning the next pfn to be
scanned, we reuse the cc->migrate_pfn field to keep track of that.

Link: https://lkml.kernel.org/r/20210419075413.1064-3-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/compaction.c |   52 ++++++++++++++++++++++------------------
 mm/internal.h   |   10 +++++++-
 mm/page_alloc.c |    7 ++----
 3 files changed, 36 insertions(+), 33 deletions(-)

--- a/mm/compaction.c~mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes
+++ a/mm/compaction.c
@@ -787,15 +787,14 @@ static bool too_many_isolated(pg_data_t
  *
  * Isolate all pages that can be migrated from the range specified by
  * [low_pfn, end_pfn). The range is expected to be within same pageblock.
- * Returns zero if there is a fatal signal pending, otherwise PFN of the
- * first page that was not scanned (which may be both less, equal to or more
- * than end_pfn).
+ * Returns errno, like -EAGAIN or -EINTR in case e.g signal pending or congestion,
+ * or 0.
+ * cc->migrate_pfn will contain the next pfn to scan.
  *
  * The pages are isolated on cc->migratepages list (not required to be empty),
- * and cc->nr_migratepages is updated accordingly. The cc->migrate_pfn field
- * is neither read nor updated.
+ * and cc->nr_migratepages is updated accordingly.
  */
-static unsigned long
+static int
 isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
                        unsigned long end_pfn, isolate_mode_t isolate_mode)
 {
@@ -809,6 +808,9 @@ isolate_migratepages_block(struct compac
        bool skip_on_failure = false;
        unsigned long next_skip_pfn = 0;
        bool skip_updated = false;
+       int ret = 0;
+
+       cc->migrate_pfn = low_pfn;
 
        /*
         * Ensure that there are not too many pages isolated from the LRU
@@ -818,16 +820,16 @@ isolate_migratepages_block(struct compac
        while (unlikely(too_many_isolated(pgdat))) {
                /* stop isolation if there are still pages not migrated */
                if (cc->nr_migratepages)
-                       return 0;
+                       return -EAGAIN;
 
                /* async migration should just abort */
                if (cc->mode == MIGRATE_ASYNC)
-                       return 0;
+                       return -EAGAIN;
 
                congestion_wait(BLK_RW_ASYNC, HZ/10);
 
                if (fatal_signal_pending(current))
-                       return 0;
+                       return -EINTR;
        }
 
        cond_resched();
@@ -875,8 +877,8 @@ isolate_migratepages_block(struct compac
 
                        if (fatal_signal_pending(current)) {
                                cc->contended = true;
+                               ret = -EINTR;
 
-                               low_pfn = 0;
                                goto fatal_pending;
                        }
 
@@ -1130,7 +1132,9 @@ fatal_pending:
        if (nr_isolated)
                count_compact_events(COMPACTISOLATED, nr_isolated);
 
-       return low_pfn;
+       cc->migrate_pfn = low_pfn;
+
+       return ret;
 }
 
 /**
@@ -1139,15 +1143,14 @@ fatal_pending:
  * @start_pfn: The first PFN to start isolating.
  * @end_pfn:   The one-past-last PFN.
  *
- * Returns zero if isolation fails fatally due to e.g. pending signal.
- * Otherwise, function returns one-past-the-last PFN of isolated page
- * (which may be greater than end_pfn if end fell in a middle of a THP page).
+ * Returns -EAGAIN when contented, -EINTR in case of a signal pending or 0.
  */
-unsigned long
+int
 isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
                                                        unsigned long end_pfn)
 {
        unsigned long pfn, block_start_pfn, block_end_pfn;
+       int ret = 0;
 
        /* Scan block by block. First and last block may be incomplete */
        pfn = start_pfn;
@@ -1166,17 +1169,17 @@ isolate_migratepages_range(struct compac
                                                block_end_pfn, cc->zone))
                        continue;
 
-               pfn = isolate_migratepages_block(cc, pfn, block_end_pfn,
-                                                       ISOLATE_UNEVICTABLE);
+               ret = isolate_migratepages_block(cc, pfn, block_end_pfn,
+                                                ISOLATE_UNEVICTABLE);
 
-               if (!pfn)
+               if (ret)
                        break;
 
                if (cc->nr_migratepages >= COMPACT_CLUSTER_MAX)
                        break;
        }
 
-       return pfn;
+       return ret;
 }
 
 #endif /* CONFIG_COMPACTION || CONFIG_CMA */
@@ -1847,7 +1850,7 @@ static isolate_migrate_t isolate_migrate
         */
        for (; block_end_pfn <= cc->free_pfn;
                        fast_find_block = false,
-                       low_pfn = block_end_pfn,
+                       cc->migrate_pfn = low_pfn = block_end_pfn,
                        block_start_pfn = block_end_pfn,
                        block_end_pfn += pageblock_nr_pages) {
 
@@ -1889,10 +1892,8 @@ static isolate_migrate_t isolate_migrate
                }
 
                /* Perform the isolation */
-               low_pfn = isolate_migratepages_block(cc, low_pfn,
-                                               block_end_pfn, isolate_mode);
-
-               if (!low_pfn)
+               if (isolate_migratepages_block(cc, low_pfn, block_end_pfn,
+                                               isolate_mode))
                        return ISOLATE_ABORT;
 
                /*
@@ -1903,9 +1904,6 @@ static isolate_migrate_t isolate_migrate
                break;
        }
 
-       /* Record where migration scanner will be restarted. */
-       cc->migrate_pfn = low_pfn;
-
        return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE;
 }
 
--- a/mm/internal.h~mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes
+++ a/mm/internal.h
@@ -245,7 +245,13 @@ struct compact_control {
        unsigned int nr_freepages;      /* Number of isolated free pages */
        unsigned int nr_migratepages;   /* Number of pages to migrate */
        unsigned long free_pfn;         /* isolate_freepages search base */
-       unsigned long migrate_pfn;      /* isolate_migratepages search base */
+       /*
+        * Acts as an in/out parameter to page isolation for migration.
+        * isolate_migratepages uses it as a search base.
+        * isolate_migratepages_block will update the value to the next pfn
+        * after the last isolated one.
+        */
+       unsigned long migrate_pfn;
        unsigned long fast_start_pfn;   /* a pfn to start linear scan from */
        struct zone *zone;
        unsigned long total_migrate_scanned;
@@ -281,7 +287,7 @@ struct capture_control {
 unsigned long
 isolate_freepages_range(struct compact_control *cc,
                        unsigned long start_pfn, unsigned long end_pfn);
-unsigned long
+int
 isolate_migratepages_range(struct compact_control *cc,
                           unsigned long low_pfn, unsigned long end_pfn);
 int find_suitable_fallback(struct free_area *area, unsigned int order,
--- a/mm/page_alloc.c~mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes
+++ a/mm/page_alloc.c
@@ -8689,11 +8689,10 @@ static int __alloc_contig_migrate_range(
 
                if (list_empty(&cc->migratepages)) {
                        cc->nr_migratepages = 0;
-                       pfn = isolate_migratepages_range(cc, pfn, end);
-                       if (!pfn) {
-                               ret = -EINTR;
+                       ret = isolate_migratepages_range(cc, pfn, end);
+                       if (ret && ret != -EAGAIN)
                                break;
-                       }
+                       pfn = cc->migrate_pfn;
                        tries = 0;
                } else if (++tries == 5) {
                        ret = -EBUSY;
_

Patches currently in -mm which might be from osalvador@suse.de are

x86-vmemmap-drop-handling-of-4k-unaligned-vmemmap-range.patch
x86-vmemmap-drop-handling-of-1gb-vmemmap-ranges.patch
x86-vmemmap-handle-unpopulated-sub-pmd-ranges.patch
x86-vmemmap-handle-unpopulated-sub-pmd-ranges-fix.patch
x86-vmemmap-optimize-for-consecutive-sections-in-partial-populated-pmds.patch
mmpage_alloc-bail-out-earlier-on-enomem-in-alloc_contig_migrate_range.patch
mmcompaction-let-isolate_migratepages_rangeblock-return-error-codes.patch
mmhugetlb-drop-clearing-of-flag-from-prep_new_huge_page.patch
mmhugetlb-split-prep_new_huge_page-functionality.patch
mm-make-alloc_contig_range-handle-free-hugetlb-pages.patch
mm-make-alloc_contig_range-handle-in-use-hugetlb-pages.patch
mmpage_alloc-drop-unnecessary-checks-from-pfn_range_valid_contig.patch
drivers-base-memory-introduce-memory_block_onlineoffline.patch
mmmemory_hotplug-relax-fully-spanned-sections-check.patch
mmmemory_hotplug-allocate-memmap-from-the-added-memory-range.patch
acpimemhotplug-enable-mhp_memmap_on_memory-when-supported.patch
mmmemory_hotplug-add-kernel-boot-option-to-enable-memmap_on_memory.patch
x86-kconfig-introduce-arch_mhp_memmap_on_memory_enable.patch
arm64-kconfig-introduce-arch_mhp_memmap_on_memory_enable.patch
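For completeness, the self-contained C sketch below models the new calling
convention used by the patch above in plain userspace code (the struct,
function and parameter names are illustrative stand-ins, not the kernel's):
errors travel in the int return value, while the position to resume scanning
travels through a migrate_pfn-style in/out field, mirroring how
__alloc_contig_migrate_range consumes the result in the hunk above.

#include <errno.h>
#include <stdio.h>

struct ctrl {                           /* stand-in for struct compact_control */
        unsigned long migrate_pfn;      /* in: search base, out: next pfn to scan */
};

static int new_isolate(struct ctrl *cc, unsigned long pfn, unsigned long end,
                       int congested, int signal_pending)
{
        cc->migrate_pfn = pfn;          /* always report where we stopped */
        if (signal_pending)
                return -EINTR;          /* fatal: caller should bail out */
        if (congested)
                return -EAGAIN;         /* transient: caller may retry */
        cc->migrate_pfn = end;          /* success: next pfn to scan */
        return 0;
}

int main(void)
{
        struct ctrl cc = { 0 };
        unsigned long pfn = 1024, end = 2048;
        int ret;

        /* Caller pattern similar to the new __alloc_contig_migrate_range(). */
        ret = new_isolate(&cc, pfn, end, /*congested=*/1, /*signal_pending=*/0);
        if (ret && ret != -EAGAIN) {
                printf("fatal error: %d\n", ret);
                return 1;
        }
        pfn = cc.migrate_pfn;           /* resume from where the isolator stopped */
        printf("ret=%d, resume at pfn %lu\n", ret, pfn);
        return 0;
}

Keeping the scan position in the control structure is what lets the return
value carry -EINTR vs -EAGAIN without losing the restart point.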