From: David Hildenbrand
Organization: Red Hat GmbH
To: Oscar Salvador, Andrew Morton
Cc: Mike Kravetz, Muchun Song, Michal Hocko, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 0/2] Make alloc_contig_range handle Hugetlb pages
Date: Mon, 1 Mar 2021 13:43:00 +0100
In-Reply-To: <20210222135137.25717-1-osalvador@suse.de>
References: <20210222135137.25717-1-osalvador@suse.de>
On 22.02.21 14:51, Oscar Salvador wrote:
> v2 -> v3:
>  - Drop usage of high-level generic helpers in favour of
>    low-level approach (per Michal)
>  - Check for the page to be marked as PageHugeFreed
>  - Add a one-time retry in case someone grabbed the free huge page
>    from under us
>
> v1 -> v2:
>  - Addressed feedback by Michal
>  - Restrict the allocation to a node with __GFP_THISNODE
>  - Drop PageHuge check in alloc_and_dissolve_huge_page
>  - Re-order comments in isolate_or_dissolve_huge_page
>  - Extend comment in isolate_migratepages_block
>  - Place put_page right after we got the page, otherwise
>    dissolve_free_huge_page will fail
>
> RFC -> v1:
>  - Drop RFC
>  - Addressed feedback from David and Mike
>  - Fence off gigantic pages as there is a cyclic dependency between
>    them and alloc_contig_range
>  - Re-organize the code to make the race window smaller and to put
>    all details in hugetlb code
>  - Drop nodemask initialization. First a node will be tried and then we
>    will fall back to other nodes containing memory (N_MEMORY). Details in
>    patch#1's changelog
>  - Count the new page as surplus in case we failed to dissolve the old page
>    and the new one. Details in patch#1.
>
> Cover letter:
>
> alloc_contig_range lacks the ability to handle HugeTLB pages.
> This can be problematic for some users, e.g. CMA and virtio-mem, where those
> users will fail the call if alloc_contig_range ever sees a HugeTLB page, even
> when those pages lie in ZONE_MOVABLE and are free.
> That problem can be easily solved by replacing the page in the free hugepage
> pool.
>
> In-use HugeTLB pages are no exception though, as those can be isolated and
> migrated like any other LRU or movable page.
>
> This patchset aims at improving alloc_contig_range->isolate_migratepages_block,
> so HugeTLB pages can be recognized and handled.
>
> Below is an insight from David (thanks), where the problem can clearly be seen:
>
> "Start a VM with 4G. Hotplug 1G via virtio-mem and online it to
> ZONE_MOVABLE. Allocate 512 huge pages.
>
> [root@localhost ~]# cat /proc/meminfo
> MemTotal:        5061512 kB
> MemFree:         3319396 kB
> MemAvailable:    3457144 kB
> ...
> HugePages_Total:     512
> HugePages_Free:      512
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
>
> The huge pages get partially allocated from ZONE_MOVABLE. Try unplugging
> 1G via virtio-mem (remember, all ZONE_MOVABLE). Inside the guest:
>
> [  180.058992] alloc_contig_range: [1b8000, 1c0000) PFNs busy
> [  180.060531] alloc_contig_range: [1b8000, 1c0000) PFNs busy
> [  180.061972] alloc_contig_range: [1b8000, 1c0000) PFNs busy
> [  180.063413] alloc_contig_range: [1b8000, 1c0000) PFNs busy
> [  180.064838] alloc_contig_range: [1b8000, 1c0000) PFNs busy
> [  180.065848] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
> [  180.066794] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
> [  180.067738] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
> [  180.068669] alloc_contig_range: [1bfc00, 1c0000) PFNs busy
> [  180.069598] alloc_contig_range: [1bfc00, 1c0000) PFNs busy"

Same experiment with ZONE_MOVABLE:

a) Free huge pages: all memory can get unplugged again.

b) Allocated/populated but idle huge pages: all memory can get unplugged
again.
c) Allocated/populated but all 512 huge pages are read/written in a
loop: all memory can get unplugged again, but I get a single

[  121.192345] alloc_contig_range: [180000, 188000) PFNs busy

most probably because it happened to try migrating a huge page while it
was busy. As virtio-mem retries on ZONE_MOVABLE a couple of times, it
can deal with this temporary failure.

Last but not least, I did something extreme:

]# cat /proc/meminfo
MemTotal:        5061568 kB
MemFree:          186560 kB
MemAvailable:     354524 kB
...
HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0

Triggering unplug would require dissolve+alloc - which now fails when
trying to allocate an additional ~512 huge pages (1G).

As expected, I can properly see memory unplug not fully succeeding, plus
I get a fairly continuous stream of

[  226.611584] alloc_contig_range: [19f400, 19f800) PFNs busy
...

But more importantly, the hugepage count remains stable, as configured
by the admin (me):

HugePages_Total:    2048
HugePages_Free:     2048
HugePages_Rsvd:        0
HugePages_Surp:        0

-- 
Thanks,

David / dhildenb
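P.S. For anyone following along: the dissolve-and-replace approach the
changelog describes (fence off gigantic pages, allocate a fresh huge
page restricted to the node via __GFP_THISNODE, one-time retry if
someone grabbed the free page from under us) can be sketched roughly
like this. This is simplified C-style pseudocode using the function
names mentioned in the series; the helper names marked "hypothetical"
and all the bodies are illustrative assumptions, not the actual kernel
implementation:

```c
/* Rough pseudocode sketch only -- not the real kernel code. */
static int isolate_or_dissolve_huge_page(struct page *page)
{
	struct hstate *h = page_hstate(page);
	bool retried = false;
	int ret;

	/*
	 * Gigantic pages are fenced off: allocating a replacement
	 * would go through alloc_contig_range itself, which is a
	 * cyclic dependency.
	 */
	if (hstate_is_gigantic(h))
		return -ENOMEM;

retry:
	if (page_is_free_huge_page(page)) {	/* hypothetical helper */
		/*
		 * Free huge page: allocate a fresh page for the pool
		 * on the same node, then dissolve the old one so the
		 * range can be handed to the caller.
		 */
		ret = alloc_and_dissolve_huge_page(h, page);
		if (ret == -EBUSY && !retried) {
			/* Someone grabbed the free page from under us. */
			retried = true;
			goto retry;
		}
		return ret;
	}

	/*
	 * In-use huge page: isolate it so isolate_migratepages_block
	 * can migrate it like any other LRU/movable page.
	 */
	return isolate_hugetlb_for_migration(page) ? 0 : -EBUSY;	/* hypothetical */
}
```

In the free-page case the pool size stays stable, which matches the
behaviour observed above: the admin-configured hugepage count is
preserved even while alloc_contig_range succeeds or fails around it.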