From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 01 Apr 2020 21:10:31 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: aarcange@redhat.com, akpm@linux-foundation.org, js1304@gmail.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mhocko@kernel.org,
 mike.kravetz@oracle.com, mm-commits@vger.kernel.org, riel@surriel.com,
 rientjes@google.com, torvalds@linux-foundation.org, vbabka@suse.cz,
 ziy@nvidia.com
Subject: [patch 129/155] mm,thp,compaction,cma: allow THP migration for CMA allocations
Message-ID: <20200402041031.r4WtvC5wp%akpm@linux-foundation.org>
In-Reply-To: <20200401210155.09e3b9742e1c6e732f5a7250@linux-foundation.org>
Sender: owner-linux-mm@kvack.org

From: Rik van Riel <riel@surriel.com>
Subject: mm,thp,compaction,cma: allow THP migration for CMA allocations

The code to implement THP migrations already exists, and the code for CMA
to clear out a region of memory already exists.  Only a few small tweaks
are needed to allow CMA to move THP memory when attempting an allocation
from alloc_contig_range.

With these changes, migrating THPs from a CMA area works when allocating
a 1GB hugepage from CMA memory.
[riel@surriel.com: fix hugetlbfs pages per Mike, cleanup per Vlastimil]
  Link: http://lkml.kernel.org/r/20200228104700.0af2f18d@imladris.surriel.com
Link: http://lkml.kernel.org/r/20200227213238.1298752-2-riel@surriel.com
Signed-off-by: Rik van Riel <riel@surriel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/compaction.c |   22 +++++++++++++---------
 mm/page_alloc.c |    9 +++++++--
 2 files changed, 20 insertions(+), 11 deletions(-)

--- a/mm/compaction.c~mmthpcompactioncma-allow-thp-migration-for-cma-allocations
+++ a/mm/compaction.c
@@ -894,12 +894,13 @@ isolate_migratepages_block(struct compac
 
 		/*
 		 * Regardless of being on LRU, compound pages such as THP and
-		 * hugetlbfs are not to be compacted. We can potentially save
-		 * a lot of iterations if we skip them at once. The check is
-		 * racy, but we can consider only valid values and the only
-		 * danger is skipping too much.
+		 * hugetlbfs are not to be compacted unless we are attempting
+		 * an allocation much larger than the huge page size (eg CMA).
+		 * We can potentially save a lot of iterations if we skip them
+		 * at once. The check is racy, but we can consider only valid
+		 * values and the only danger is skipping too much.
 		 */
-		if (PageCompound(page)) {
+		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
 			if (likely(order < MAX_ORDER))
@@ -969,7 +970,7 @@ isolate_migratepages_block(struct compac
 			 * and it's on LRU. It can only be a THP so the order
 			 * is safe to read and it's 0 for tail pages.
 			 */
-			if (unlikely(PageCompound(page))) {
+			if (unlikely(PageCompound(page) && !cc->alloc_contig)) {
 				low_pfn += compound_nr(page) - 1;
 				goto isolate_fail;
 			}
@@ -981,12 +982,15 @@ isolate_migratepages_block(struct compac
 		if (__isolate_lru_page(page, isolate_mode) != 0)
 			goto isolate_fail;
 
-		VM_BUG_ON_PAGE(PageCompound(page), page);
+		/* The whole page is taken off the LRU; skip the tail pages. */
+		if (PageCompound(page))
+			low_pfn += compound_nr(page) - 1;
 
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec, page_lru(page));
-		inc_node_page_state(page,
-				NR_ISOLATED_ANON + page_is_file_cache(page));
+		mod_node_page_state(page_pgdat(page),
+				NR_ISOLATED_ANON + page_is_file_cache(page),
+				hpage_nr_pages(page));
 
 isolate_success:
 		list_add(&page->lru, &cc->migratepages);
--- a/mm/page_alloc.c~mmthpcompactioncma-allow-thp-migration-for-cma-allocations
+++ a/mm/page_alloc.c
@@ -8251,15 +8251,20 @@ struct page *has_unmovable_pages(struct
 
 		/*
 		 * Hugepages are not in LRU lists, but they're movable.
+		 * THPs are on the LRU, but need to be counted as #small pages.
 		 * We need not scan over tail pages because we don't
 		 * handle each tail page individually in migration.
 		 */
-		if (PageHuge(page)) {
+		if (PageHuge(page) || PageTransCompound(page)) {
 			struct page *head = compound_head(page);
 			unsigned int skip_pages;
 
-			if (!hugepage_migration_supported(page_hstate(head)))
+			if (PageHuge(page)) {
+				if (!hugepage_migration_supported(page_hstate(head)))
+					return page;
+			} else if (!PageLRU(head) && !__PageMovable(head)) {
 				return page;
+			}
 
 			skip_pages = compound_nr(head) - (page - head);
 			iter += skip_pages - 1;
_