Date: Wed, 3 Feb 2021 10:12:22 +0100
From: Michal Hocko
To: Mike Rapoport
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
 Dave Hansen, linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 "H. Peter Anvin", Christopher Lameter, Shuah Khan, Thomas Gleixner,
 Elena Reshetova, linux-arch@vger.kernel.org, Tycho Andersen,
 linux-nvdimm@lists.01.org, Will Deacon, x86@kernel.org, Matthew Wilcox,
 Mike Rapoport, Ingo Molnar, Michael Kerrisk, Palmer Dabbelt,
 Arnd Bergmann, James Bottomley, Hagen Paul Pfeifer, Borislav Petkov,
 Alexander Viro, Andy Lutomirski, Paul Walmsley, "Kirill A. Shutemov",
 Dan Williams, linux-arm-kernel@lists.infradead.org,
 linux-api@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 Shakeel Butt, Andrew Morton, Rick Edgecombe, Roman Gushchin
Subject: Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize
 direct map fragmentation
References: <20210126120823.GM827@dhcp22.suse.cz>
 <20210128092259.GB242749@kernel.org>
 <73738cda43236b5ac2714e228af362b67a712f5d.camel@linux.ibm.com>
 <6de6b9f9c2d28eecc494e7db6ffbedc262317e11.camel@linux.ibm.com>
 <20210202124857.GN242749@kernel.org>
 <20210202191040.GP242749@kernel.org>
In-Reply-To: <20210202191040.GP242749@kernel.org>

On Tue 02-02-21 21:10:40, Mike Rapoport wrote:
> On Tue, Feb 02, 2021 at 02:27:14PM +0100, Michal Hocko wrote:
> > On Tue 02-02-21 14:48:57, Mike Rapoport wrote:
> > > On Tue, Feb 02, 2021 at 10:35:05AM +0100, Michal Hocko wrote:
> > > > On Mon 01-02-21 08:56:19, James Bottomley wrote:
> > > >
> > > > I have also proposed potential ways out of this. Either the pool is not
> > > > fixed sized and you make it a regular unevictable memory (if direct map
> > > > fragmentation is not considered a major problem)
> > >
> > > I think that the direct map fragmentation is not a major problem, and the
> > > data we have confirms it, so I'd be more than happy to entirely drop the
> > > pool, allocate memory page by page and remove each page from the direct
> > > map.
> > >
> > > Still, we cannot prove a negative, and it could happen that there is a
> > > workload that would suffer a lot from the direct map fragmentation, so
> > > having a pool of large pages upfront is better than trying to fix it
> > > afterwards.
> > > As we get more confidence that the direct map fragmentation is
> > > not the issue it is commonly believed to be, we may remove the pool
> > > altogether.
> >
> > I would drop the pool altogether and instantiate pages on the
> > unevictable LRU list, internally treating it as ramdisk/mlock so you
> > get the accounting right. The feature should still be opt-in (e.g. a
> > kernel command line parameter) for now. Per the recent report by Intel
> > (http://lkml.kernel.org/r/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com),
> > there is no clear win to having huge mappings in _general_, but there
> > are still workloads which benefit.
> >
> > > I think that using PMD_ORDER allocations for the pool with a fallback to
> > > order 0 will do the job, but unfortunately I doubt we'll reach a consensus
> > > about this because dogmatic beliefs are hard to shake...
> >
> > If this is opt-in then those beliefs can be relaxed somehow. Long term
> > it makes a lot of sense to optimize for better direct map management,
> > but I do not think this is a hard requirement for an initial
> > implementation if it is not imposed on everybody by default.
> >
> > > A more restrictive possibility is to still use plain PMD_ORDER allocations
> > > to fill the pool, without relying on CMA. In this case there will be no
> > > global secretmem-specific pool to exhaust, but then it's possible to drain
> > > high-order free blocks in a system, so CMA has an advantage of limiting
> > > secretmem pools to a certain amount of memory, with a somewhat higher
> > > probability for a high-order allocation to succeed.
> > >
> > > > or you need a careful access control
> > >
> > > Do you mind elaborating what you mean by "careful access control"?
> >
> > As already mentioned, a mechanism to control who can use this feature -
> > e.g. make it a special device which you can access-control by
> > permissions or higher-level security policies. But that is really needed
> > only if the pool is fixed sized.
>
> Let me reiterate to make sure I don't misread your suggestion.
>
> If we make secretmem an opt-in feature with, e.g., a kernel parameter, the
> pooling of large pages is unnecessary. In this case there is no limited
> resource we need to protect, because secretmem will allocate page by page.

Yes.

> Since there is no limited resource, we don't need special permissions
> to access secretmem, so we can move forward with a system call that
> creates a mmapable file descriptor and save the hassle of a chardev.

Yes. I assume you implicitly assume the mlock rlimit here. Also memcg
accounting should be in place. Wrt the specific syscall, please also
document why the existing interfaces are not a good fit. It would also
be great to describe the interaction with mlock itself (I assume the two
to be incompatible - mlock will fail on it and mlockall will ignore it).

> I cannot say I don't like this as it cuts roughly half of mm/secretmem.c :)
>
> But I must say I am still a bit concerned that we have no provisions
> here for dealing with the direct map fragmentation, even with the set
> goal to improve the direct map management in the long run...

Yes, that is something that will be needed long term. I do not think it
is strictly necessary for the initial submission, though. The
implementation should be as simple as possible now, with complexity
added on top.

-- 
Michal Hocko
SUSE Labs

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel