From: Oscar Salvador
To: Andrew Morton
Cc: Mike Kravetz, Vlastimil Babka, David Hildenbrand, Michal Hocko,
 Muchun Song, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Oscar Salvador
Subject: [PATCH v6 0/5] Make alloc_contig_range handle Hugetlb pages
Date: Fri, 19 Mar 2021 14:19:59 +0100
Message-Id: <20210319132004.4341-1-osalvador@suse.de>

v5 -> v6:
 - Collect Acked-by from Michal
 - Addressed feedback for patch#2 (expand the comment about migrate_pfn and
   change return values)
 - Complete patch#3's changelog (per Michal)
 - Place retry lock inside of alloc_and_dissolve_huge_page()

v4 -> v5:
 - Collect Acked-by and Reviewed-by from David and Vlastimil
 - Drop racy checks in pfn_range_valid_contig (David)
 - Rebased on top of 5.12-rc3

v3 -> v4:
 - Addressed some feedback from David and Michal
 - Make more clear what hugetlb_lock protects in isolate_or_dissolve_huge_page
 - Start reporting proper error codes from isolate_migratepages_{range,block}
 - Bail out earlier in __alloc_contig_migrate_range on -ENOMEM
 - Addressed internal feedback from Vlastimil wrt. compaction code changes

v2 -> v3:
 - Drop usage of high-level generic helpers in favour of low-level approach
   (per Michal)
 - Check for the page to be marked as PageHugeFreed
 - Add a one-time retry in case someone grabbed the free huge page from
   under us

v1 -> v2:
 - Addressed feedback by Michal
 - Restrict the allocation to a node with __GFP_THISNODE
 - Drop PageHuge check in alloc_and_dissolve_huge_page
 - Re-order comments in isolate_or_dissolve_huge_page
 - Extend comment in isolate_migratepages_block
 - Place put_page right after we got the page, otherwise
   dissolve_free_huge_page will fail

RFC -> v1:
 - Drop RFC
 - Addressed feedback from David and Mike
 - Fence off gigantic pages as there is a cyclic dependency between them
   and alloc_contig_range
 - Re-organize the code to make the race window smaller and to put all
   details in hugetlb code
 - Drop nodemask initialization. First a node will be tried and then we
   will fall back to other nodes containing memory (N_MEMORY). Details in
   patch#1's changelog
 - Count new page as surplus in case we failed to dissolve the old page
   and the new one. Details in patch#1.

Cover letter:

alloc_contig_range lacks the ability to handle HugeTLB pages. This can be
problematic for some users, e.g. CMA and virtio-mem: those users will fail
the call if alloc_contig_range ever sees a HugeTLB page, even when those
pages lie in ZONE_MOVABLE and are free.
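For reference, below is a minimal, purely illustrative sketch of the kind of
caller this refers to. The function name, migratetype and gfp flags are
assumptions made up for the example; alloc_contig_range()/free_contig_range()
are the real entry points that CMA and virtio-mem build on.

	#include <linux/gfp.h>
	#include <linux/mmzone.h>

	/*
	 * Hypothetical user of alloc_contig_range(): try to take ownership
	 * of a physically contiguous PFN range. Before this series, a
	 * HugeTLB page anywhere in [start_pfn, start_pfn + nr_pages) made
	 * the call fail (e.g. -EBUSY), even if that page was free and
	 * sitting in ZONE_MOVABLE.
	 */
	static int example_grab_contig_range(unsigned long start_pfn,
					     unsigned long nr_pages)
	{
		int ret;

		ret = alloc_contig_range(start_pfn, start_pfn + nr_pages,
					 MIGRATE_MOVABLE, GFP_KERNEL);
		if (ret)
			return ret;

		/* ... the range now belongs to us ... */

		free_contig_range(start_pfn, nr_pages);
		return 0;
	}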
That problem can be easily solved by replacing the page in the free hugepage
pool. In-use HugeTLB pages are no exception though, as those can be isolated
and migrated like any other LRU or movable page.

This patchset aims to improve alloc_contig_range->isolate_migratepages_block,
so HugeTLB pages can be recognized and handled.

Since we also need to start reporting errors down the chain (e.g. -ENOMEM due
to not being able to allocate a new hugetlb page), the
isolate_migratepages_{range,block} interfaces need to change to start
reporting error codes instead of the pfn == 0 vs pfn != 0 scheme they use
right now.

From now on, isolate_migratepages_block will no longer return the next pfn to
be scanned, but -EINTR, -ENOMEM or 0; the next pfn to be scanned is recorded
in the cc->migrate_pfn field (as is already done in
isolate_migratepages_range()).
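Purely as illustration (not the actual kernel code), a hedged sketch of how a
caller inside mm/ consumes that new contract. It assumes the declarations
from mm/internal.h (struct compact_control, isolate_migratepages_range()) are
in scope; the function name is invented for the example:

	/*
	 * New scheme: isolate_migratepages_range() returns 0, -EINTR (fatal
	 * signal pending) or -ENOMEM (e.g. a replacement hugetlb page could
	 * not be allocated), instead of encoding failure as "next pfn == 0".
	 */
	static int example_isolate_range(struct compact_control *cc,
					 unsigned long start_pfn,
					 unsigned long end_pfn)
	{
		int ret;

		ret = isolate_migratepages_range(cc, start_pfn, end_pfn);
		if (ret)
			return ret;

		/*
		 * On success, the pfn where scanning stopped is recorded in
		 * cc->migrate_pfn, and the isolated pages sit on
		 * cc->migratepages, ready to be handed to migrate_pages().
		 */
		return 0;
	}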
Below is an insight from David (thanks), where the problem can clearly be
seen:

"Start a VM with 4G. Hotplug 1G via virtio-mem and online it to
 ZONE_MOVABLE. Allocate 512 huge pages.

 [root@localhost ~]# cat /proc/meminfo
 MemTotal:        5061512 kB
 MemFree:         3319396 kB
 MemAvailable:    3457144 kB
 ...
 HugePages_Total:     512
 HugePages_Free:      512
 HugePages_Rsvd:        0
 HugePages_Surp:        0
 Hugepagesize:       2048 kB

 The huge pages get partially allocated from ZONE_MOVABLE. Try unplugging
 1G via virtio-mem (remember, all ZONE_MOVABLE). Inside the guest:

 [  180.058992] alloc_contig_range: [1b8000, 1c0000) PFNs busy
 [  180.060531] alloc_contig_range: [1b8000, 1c0000) PFNs busy
 [  180.061972] alloc_contig_range: [1b8000, 1c0000) PFNs busy
 [  180.063413] alloc_contig_range: [1b8000, 1c0000) PFNs busy
 [  180.064838] alloc_contig_range: [1b8000, 1c0000) PFNs busy
 [  180.065848] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
 [  180.066794] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
 [  180.067738] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
 [  180.068669] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
 [  180.069598] alloc_contig_range: [1bfc00, 1c0000) PFNs busy"

And then with this patchset running:

"Same experiment with ZONE_MOVABLE:

 a) Free huge pages: all memory can get unplugged again.

 b) Allocated/populated but idle huge pages: all memory can get unplugged
    again.

 c) Allocated/populated but all 512 huge pages are read/written in a loop:
    all memory can get unplugged again, but I get a single

    [  121.192345] alloc_contig_range: [180000, 188000) PFNs busy

    Most probably because it happened to try migrating a huge page while it
    was busy. As virtio-mem retries on ZONE_MOVABLE a couple of times, it
    can deal with this temporary failure.

 Last but not least, I did something extreme:

 # cat /proc/meminfo
 MemTotal:        5061568 kB
 MemFree:          186560 kB
 MemAvailable:     354524 kB
 ...
 HugePages_Total:    2048
 HugePages_Free:     2048
 HugePages_Rsvd:        0
 HugePages_Surp:        0

 Triggering unplug would require to dissolve+alloc - which now fails when
 trying to allocate an additional ~512 huge pages (1G).

 As expected, I can properly see memory unplug not fully succeeding. + I get
 a fairly continuous stream of

 [  226.611584] alloc_contig_range: [19f400, 19f800) PFNs busy
 ...

 But more importantly, the hugepage count remains stable, as configured by
 the admin (me):

 HugePages_Total:    2048
 HugePages_Free:     2048
 HugePages_Rsvd:        0
 HugePages_Surp:        0"

Oscar Salvador (5):
  mm,page_alloc: Bail out earlier on -ENOMEM in alloc_contig_migrate_range
  mm,compaction: Let isolate_migratepages_{range,block} return error codes
  mm: Make alloc_contig_range handle free hugetlb pages
  mm: Make alloc_contig_range handle in-use hugetlb pages
  mm,page_alloc: Drop unnecessary checks from pfn_range_valid_contig

 include/linux/hugetlb.h |   7 +++
 mm/compaction.c         |  94 ++++++++++++++++++++++++++++-----------
 mm/hugetlb.c            | 119 ++++++++++++++++++++++++++++++++++++++++++++++-
 mm/internal.h           |  10 +++-
 mm/page_alloc.c         |  21 ++++-----
 mm/vmscan.c             |   5 +-
 6 files changed, 212 insertions(+), 44 deletions(-)

-- 
2.16.3