From: David Hildenbrand <david@redhat.com>
To: Zi Yan
Cc: linux-mm@kvack.org, Matthew Wilcox, "Kirill A. Shutemov", Roman Gushchin, Andrew Morton, Yang Shi, Michal Hocko, John Hubbard, Ralph Campbell, David Nellans, Jason Gunthorpe, David Rientjes, Vlastimil Babka, Mike Kravetz, Song Liu
Subject: Re: [RFC PATCH v3 00/49] 1GB PUD THP support on x86_64
Date: Tue, 2 Mar 2021 09:55:47 +0100
Message-ID: <483b9681-497f-d86f-1f0b-14edb9d1c388@redhat.com>
In-Reply-To: <67B2C538-45DB-4678-A64D-295A9703EDE1@nvidia.com>
References: <20210224223536.803765-1-zi.yan@sent.com> <67B2C538-45DB-4678-A64D-295A9703EDE1@nvidia.com>
Shutemov" , Roman Gushchin , Andrew Morton , Yang Shi , Michal Hocko , John Hubbard , Ralph Campbell , David Nellans , Jason Gunthorpe , David Rientjes , Vlastimil Babka , Mike Kravetz , Song Liu References: <20210224223536.803765-1-zi.yan@sent.com> <67B2C538-45DB-4678-A64D-295A9703EDE1@nvidia.com> From: David Hildenbrand Organization: Red Hat GmbH Subject: Re: [RFC PATCH v3 00/49] 1GB PUD THP support on x86_64 Message-ID: <483b9681-497f-d86f-1f0b-14edb9d1c388@redhat.com> Date: Tue, 2 Mar 2021 09:55:47 +0100 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0 MIME-Version: 1.0 In-Reply-To: <67B2C538-45DB-4678-A64D-295A9703EDE1@nvidia.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 Authentication-Results: relay.mimecast.com; auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david@redhat.com X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain; charset=utf-8; format=flowed Content-Language: en-US X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: CC7CD12E X-Stat-Signature: 6etex9cubshrko7aq3ysooyzouf9gc73 Received-SPF: none (redhat.com>: No applicable sender policy available) receiver=imf12; identity=mailfrom; envelope-from=""; helo=us-smtp-delivery-124.mimecast.com; client-ip=216.205.24.124 X-HE-DKIM-Result: pass/pass X-HE-Tag: 1614675359-194635 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: >> >> However, I don't follow how this is actually really feasible in big sc= ale. You could only ever collapse into a 1GB THP if you happen to have 1G= B consecutive 2MB THP / 4k already. Sounds to me like this happens when t= he stars align. >=20 > Both the process_madvise() approach and my proposal require page migrat= ion to bring back THPs, since like you said having consecutive pages read= y is extremely rare. IIUC, the process_madvise() approach reuses khugepag= ed code to collapse huge pages, > namely first allocating a 2MB THP, then copying data over, finally free= old base pages. My proposal would migrate pages within > a virtual address range (>1GB and 1GB-aligned) to get all physical page= s contiguous, then promote the resulting 1GB consecutive > pages to 1GB THP. No new page allocation is needed. I am missing how we can ever reliably form 1GB pages (esp. after the=20 system ran for a while) without any kind of fragmentation avoidance /=20 defragmentation mechanism that is aware of gigantic pages. For THP,=20 pageblocks+compaction serve that purpose. >=20 > Both approaches would need user-space invocation, assuming either the a= pplication itself wants to get THPs for a specific region or a user-space= daemon would do this for a group of application, instead of waiting for = khugepaged to slowly (4096 pages every 10s) scan and do huge page collaps= e. User will pay the cost of getting THP. This also means THPs are not co= mpletely transparent to user, but I think it should be fine when users ex= plicitly invoke these two methods to get THPs for better performance. Here is the problem: these *advises* are not persistent. Assume your=20 system has to swap and has to split the THP + write it to the swap=20 backend. The gigantic page is lost for that part of the application.=20 When loading the individual 4k pages out of swap there is no guarantee=20 that we can form a 1 GB page again - and how should we know that the=20 application wanted a 1 GB page at that position? 
Here is the problem: these *advice* calls are not persistent. Assume
your system has to swap and has to split the THP + write it to the
swap backend. The gigantic page is lost for that part of the
application. When loading the individual 4k pages out of swap there
is no guarantee that we can form a 1GB page again - and how should we
know that the application wanted a 1GB page at that position?

How would the application know that the advice was now dropped and
that
a) there is no 1GB page anymore
b) it would have to re-issue the advice?

Similarly, I am not convinced that the future of khugepaged is in
user space.

> The difference of my proposal is that it does not need a 1GB THP
> allocation, so there are no special requirements like using CMA or
> increasing MAX_ORDER in the buddy allocator to allow 1GB page
> allocations. It makes creating THPs with orders > MAX_ORDER possible
> without other intrusive changes.

Anything that relies on large allocations succeeding purely because
"ZONE_NORMAL memory is usually not fragmented after boot" is broken
by design. That's why we have CMA: it can give guarantees (well, once
we fix all remaining issues :) ).

-- 
Thanks,

David / dhildenb
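P.S.: For comparison, the reserve-up-front model already exists for
hugetlb today via the hugetlb_cma= boot parameter (the 4G size in the
sketch is just an example). Once the CMA area is reserved and the 1GB
pool is populated, a mapping like the following succeeds reliably
precisely because the backing memory was set aside before
fragmentation could build up:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* MAP_HUGE_1GB == 30 << MAP_HUGE_SHIFT; define if headers lack it */
#ifndef MAP_HUGE_1GB
#define MAP_HUGE_1GB (30 << 26)
#endif

#define GB (1024UL * 1024 * 1024)

int main(void)
{
	/*
	 * Assumes a populated 1GB hugetlb pool, e.g. hugetlb_cma=4G on
	 * the kernel command line and then:
	 *  echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
	 */
	void *p = mmap(NULL, GB, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
		       MAP_HUGE_1GB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch the mapping so the gigantic page is actually faulted in. */
	*(volatile char *)p = 1;

	munmap(p, GB);
	return 0;
}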