Date: Mon, 16 Aug 2021 18:46:11 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, David Hildenbrand,
 Michal Hocko, Oscar Salvador, Zi Yan, Muchun Song, Naoya Horiguchi,
 David Rientjes
Subject: Re: [PATCH RESEND 0/8] hugetlb: add demote/split page functionality
Message-Id: <20210816184611.07b97f4c26b83090f5d48fab@linux-foundation.org>
In-Reply-To:
References: <20210816224953.157796-1-mike.kravetz@oracle.com>
 <20210816162749.22b921a61156a091f3e1d14d@linux-foundation.org>

On Mon, 16 Aug 2021 17:46:58 -0700 Mike Kravetz wrote:

> > It really is a ton of new code. I think we're owed much more detail
> > about the problem than the above, to be confident that all this
> > material is truly justified.
>
> The desired functionality for this specific use case is simply to
> convert a 1G hugetlb page into 512 2MB hugetlb pages. As mentioned:
>
> "Converting larger to smaller hugetlb pages can be accomplished today by
> first freeing the larger page to the buddy allocator and then allocating
> the smaller pages. However, there are two issues with this approach:
> 1) This process can take quite some time, especially if allocation of
>    the smaller pages is not immediate and requires migration/compaction.
> 2) There is no guarantee that the total size of smaller pages allocated
>    will match the size of the larger page which was freed. This is
>    because the area freed by the larger page could quickly be
>    fragmented."
>
> These two issues have been experienced in practice.

Well, the first issue is quantifiable. What is "some time"? If it's
people trying to get a 5% speedup on a rare operation because, hey,
bugging the kernel developers doesn't cost me anything, then perhaps
we have better things to be doing.

And the second problem would benefit from some words to help us
understand how much real-world hurt this causes, and how frequently.
And let's understand what the userspace workarounds look like, etc.

> A big chunk of the code changes (approx 50%) is for the vmemmap
> optimizations. This is also the most complex part of the changes.
> I added the code because interaction with vmemmap reduction was
> discussed during the RFC. It is only a performance enhancement and
> honestly may not be worth the cost/risk. I will get some numbers to
> measure the actual benefit.

Cool.
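
For illustration only (not part of the patch series): a minimal
userspace sketch of the workaround quoted above, i.e. free one 1G
hugetlb page back to the buddy allocator and then ask for 512 more
2MB pages through the standard nr_hugepages sysfs files. The paths
assume x86_64 hugetlb page sizes and the program must run as root;
the latency and fragmentation problems described above are exactly
what this sequence cannot avoid.

#include <stdio.h>

static long read_count(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

static int write_count(const char *path, long val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%ld\n", val);
	return fclose(f);	/* sysfs write takes effect on flush/close */
}

#define H1G "/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages"
#define H2M "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"

int main(void)
{
	long n1g = read_count(H1G);
	long n2m = read_count(H2M);

	if (n1g < 1 || n2m < 0)
		return 1;

	/* 1) free one 1G page back to the buddy allocator */
	if (write_count(H1G, n1g - 1))
		return 1;

	/*
	 * 2) request 512 additional 2MB pages; this may stall on
	 * migration/compaction, and may come back short if the freed
	 * 1G range fragments before the allocations complete.
	 */
	if (write_count(H2M, n2m + 512))
		return 1;

	printf("2MB pages now: %ld (wanted %ld)\n",
	       read_count(H2M), n2m + 512);
	return 0;
}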