From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-10.1 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,HEADER_FROM_DIFFERENT_DOMAINS,MAILING_LIST_MULTI, MENTIONS_GIT_HOSTING,NICE_REPLY_A,SPF_HELO_NONE,SPF_PASS,USER_AGENT_SANE_1 autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6094C433DB for ; Thu, 4 Mar 2021 09:26:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 33AE964ED7 for ; Thu, 4 Mar 2021 09:26:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 33AE964ED7 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=redhat.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id A5B276B0005; Thu, 4 Mar 2021 04:26:44 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id A31576B0006; Thu, 4 Mar 2021 04:26:44 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8D25E6B0007; Thu, 4 Mar 2021 04:26:44 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0062.hostedemail.com [216.40.44.62]) by kanga.kvack.org (Postfix) with ESMTP id 728FB6B0005 for ; Thu, 4 Mar 2021 04:26:44 -0500 (EST) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 31A9F5DE5 for ; Thu, 4 Mar 2021 09:26:44 +0000 (UTC) X-FDA: 77881661928.22.CC15717 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [63.128.21.124]) by imf06.hostedemail.com (Postfix) with ESMTP id 97AFAC0007D3 for ; Thu, 4 Mar 2021 09:26:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1614850002; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=7tm2+bzJpnJ1Kyjj+0Uj65X3yeAu1oVqPDHjIqaaIZg=; b=ZTD6EUVBDiGMPQTstaxayzNNFWOnS3ZMxzgPcvKqPdKuq5TmR275bW+ZlgkOfK0Z+NLZ8s jBQO+JU1EGY7SMsVILFD0lz/czb8eAxHryqix8pAlmTkmeg1OPFxHySYH1cpqYLvcjkdhH QTziCLlCqBZTojdhX/hTb0pzb8rU6Uo= Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id us-mta-210-rmCtErenOMGsHlnBdeMe6A-1; Thu, 04 Mar 2021 04:26:38 -0500 X-MC-Unique: rmCtErenOMGsHlnBdeMe6A-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 54981801814; Thu, 4 Mar 2021 09:26:36 +0000 (UTC) Received: from [10.36.113.171] (ovpn-113-171.ams2.redhat.com [10.36.113.171]) by smtp.corp.redhat.com (Postfix) with ESMTP id D73946F92F; Thu, 4 Mar 2021 09:26:32 +0000 (UTC) To: Zi Yan Cc: linux-mm@kvack.org, Matthew Wilcox , "Kirill A . Shutemov" , Roman Gushchin , Andrew Morton , Yang Shi , Michal Hocko , John Hubbard , Ralph Campbell , David Nellans , Jason Gunthorpe , David Rientjes , Vlastimil Babka , Mike Kravetz , Song Liu References: <20210224223536.803765-1-zi.yan@sent.com> <67B2C538-45DB-4678-A64D-295A9703EDE1@nvidia.com> <483b9681-497f-d86f-1f0b-14edb9d1c388@redhat.com> From: David Hildenbrand Organization: Red Hat GmbH Subject: Re: [RFC PATCH v3 00/49] 1GB PUD THP support on x86_64 Message-ID: <9c45b18e-11bc-1671-ac49-8ed007ed3794@redhat.com> Date: Thu, 4 Mar 2021 10:26:31 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0 MIME-Version: 1.0 In-Reply-To: X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US X-Stat-Signature: jb5e1f6z5awehkftkg16zti3uantd4x8 X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 97AFAC0007D3 Received-SPF: none (redhat.com>: No applicable sender policy available) receiver=imf06; identity=mailfrom; envelope-from=""; helo=us-smtp-delivery-124.mimecast.com; client-ip=63.128.21.124 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1614850002-153654 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 04.03.21 00:42, Zi Yan wrote: > On 2 Mar 2021, at 3:55, David Hildenbrand wrote: >=20 >>>> >>>> However, I don't follow how this is actually really feasible in big = scale. You could only ever collapse into a 1GB THP if you happen to have = 1GB consecutive 2MB THP / 4k already. Sounds to me like this happens when= the stars align. >>> >>> Both the process_madvise() approach and my proposal require page migr= ation to bring back THPs, since like you said having consecutive pages re= ady is extremely rare. IIUC, the process_madvise() approach reuses khugep= aged code to collapse huge pages, >>> namely first allocating a 2MB THP, then copying data over, finally fr= ee old base pages. My proposal would migrate pages within >>> a virtual address range (>1GB and 1GB-aligned) to get all physical pa= ges contiguous, then promote the resulting 1GB consecutive >>> pages to 1GB THP. No new page allocation is needed. >> >> I am missing how we can ever reliably form 1GB pages (esp. after the s= ystem ran for a while) without any kind of fragmentation avoidance / defr= agmentation mechanism that is aware of gigantic pages. For THP, pageblock= s+compaction serve that purpose. >=20 > We may not have that as reliable as pageblocks+compaction for THP, but = we are able to improve over existing code after 1GB THP > is supported and used. Otherwise, why bother adding a new mechanism whe= n there is no user? >=20 > I did an experiment on my 32GB desktop like Roman suggested in another = email, using as much memory as possible and running > =E2=80=9Cgit gc=E2=80=9D on Linux repo at the same time to fragment mem= ory. I repeated the process three times with three different Linux repos. > I checked all pageblock types with my custom kernel module (https://git= hub.com/x-y-z/kernel-modules) and discovered that > the system still have 11 1GB Movable pageblocks (consecutive pageblocks= with the same migratetype are grouped as large as > possible). This means after heavy memory fragmentation the system is st= ill able to form 11 1GB THPs, which is >30% of total > possible 1GB THPs. I think it is a reasonably good number since we are = not going to form 1GB THPs for everything running > in the system. >=20 I'm sorry, but I don't think this is a relevant reproducer for=20 fragmentation with unmovable allocations. I feel like repeating myself: Anything that relies on large allocations=20 succeeding purely because "ZONE_NORMAL memory is usually not fragmented=20 after boot" is broken by design. If your approach does not have any such approach, it's broken by design=20 and only works in some very limited setups / under very limited=20 conditions. We don't want anything like that when it severely affects=20 the code ("49 patches"). >>> >>> Both approaches would need user-space invocation, assuming either the= application itself wants to get THPs for a specific region or a user-spa= ce daemon would do this for a group of application, instead of waiting fo= r khugepaged to slowly (4096 pages every 10s) scan and do huge page colla= pse. User will pay the cost of getting THP. This also means THPs are not = completely transparent to user, but I think it should be fine when users = explicitly invoke these two methods to get THPs for better performance. >> >> Here is the problem: these *advises* are not persistent. Assume your s= ystem has to swap and has to split the THP + write it to the swap backend= . The gigantic page is lost for that part of the application. When loadin= g the individual 4k pages out of swap there is no guarantee that we can f= orm a 1 GB page again - and how should we know that the application wante= d a 1 GB page at that position? >=20 > VM_HUGEPAGE will be set for that VMA and I am planning to add a new fie= ld to VMA to indicate what huge page size we want in > that VMA. About split 1GB THP due to swapping, that happens to THP too.= Either khugepaged or a user daemon calling > process_madvise() could recover 1GB THP. >=20 Sorry, but for any kind of advise like "please collapse this into a 1GB=20 page", splitting VMAs does not make any sense. Then, you can just let=20 the application mmap(MAP_HUGE ...) that part instead - you also get a=20 separate VMA and need the mmap lock in write. Ordinary THP can be recovered quite well because *we have actual=20 mechanisms in place that try to form contiguous 2MB (->pageblock) chunks*= . >> >> How would the application know that the advise was no dropped and that >> a) There is no 1GB page anymore >> b) It would have to re-issue the advise >=20 > I expected a daemon, either khugepaged or a user one calling process_ma= vise, would rescan the application and reform 1GB pages. >=20 From user space? How should it know about whether that application has=20 hugepages enabled/disabled for some regions? How should it know if we=20 have to special case uffd? I repeat: I am not convinced that the future of khugepaged is in user=20 space. It might be valuable for some minor hints from the application=20 itself - "please collapse this into a THP if possible", but not more -=20 IMHO, but not across applications. >> >> Similarly, I am not convinced that the future of khugepaged is in user= space. >=20 > The issue of khugepaged is that it runs at very slow rate, 4096 pages e= very 10s, because kernel does not want to consume > too much CPU resources without knowing the benefit of forming THPs. A u= ser daemon can run at a fast pace to form THPs or > 1GB THPs from application memory regions that users really want huge pa= ges. >=20 Not sure we really want a daemon. You could just kick khugepaged instead=20 - for example to run on a specific process. I think if - at all - it=20 should be the application that gives additional advises. But that is a=20 different discussion than 1 GB THP. >> >>> >>> The difference of my proposal is that it does not need a 1GB THP allo= cation, so there is no special requirements like using CMA >>> or increasing MAX_ORDER in buddy allocator to allow 1GB page allocati= on. It makes creating THPs with orders > MAX_ORDER possible >>> without other intrusive changes. >> >> Anything that relies on large allocations succeeding purely because "Z= ONE_NORMAL memory is usually not fragmented after boot" is broken by desi= gn. That's why we have CMA, it can give guarantees (well, once we fix all= remaining issues :) ). >=20 > It seems that you are suggesting I should use CMA for 1GB THP allocatio= n, since CMA can give guarantee for large allocations. > Using CMA for 1GB THP would be a great first step to get 1GB THP workin= g, then we can replace it with other large allocation > mechanisms later. No, as already expressed multiple times, I don't think this is the right=20 thing to do. --=20 Thanks, David / dhildenb