Date: Mon, 15 Feb 2021 20:02:59 +0100
From: Michal Hocko
To: Muchun Song
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes,
 Matthew Wilcox, Oscar Salvador, "Song Bao Hua (Barry Song)",
 David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Joao Martins,
 Xiongchun duan, linux-doc@vger.kernel.org, LKML,
 Linux Memory Management List, linux-fsdevel
Subject: Re: [External] Re: [PATCH v15 4/8] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page

On Tue 16-02-21 01:48:29, Muchun Song wrote:
> On Tue, Feb 16, 2021 at 12:28 AM Michal Hocko wrote:
> >
> > On Mon 15-02-21 23:36:49, Muchun Song wrote:
> > [...]
> > > > There shouldn't be any real reason why the memory allocation for
> > > > vmemmaps, or handling vmemmap in general, has to be done from within
> > > > the hugetlb lock and therefore requiring a non-sleeping semantic. All
> > > > that can be deferred to a more relaxed context. If you want to make a
> > >
> > > Yeah, you are right. We can put the freeing hugetlb routine to a
> > > workqueue. Just like I do in the previous version (before v13) patch.
> > > I will pick up these patches.
> >
> > I haven't seen your v13 and I will unlikely have time to revisit that
> > version. I just wanted to point out that the actual allocation doesn't
> > have to happen from under the spinlock. There are multiple ways to go
> > around that. Dropping the lock would be one of them. Preallocation
> > before the spin lock is taken is another. WQ is certainly an option but
> > I would take it as the last resort when other paths are not feasible.
>
> "Dropping the lock" and "Preallocation before the spin lock" can limit
> the context of put_page to a non-atomic one. I am not sure whether there
> is a page put somewhere in an atomic context, e.g. compaction.
> I am not an expert on this.

Then do due research or ask for help from the MM community. Do not just
try to go around the harder problems and somehow duct-tape a solution.
I am sorry for sounding harsh here, but this is a repetitive pattern.

Now to the merit. put_page can indeed be called from all sorts of
contexts. And it might indeed be impossible to guarantee that hugetlb
pages are never freed up from an atomic context. Requiring that would
even be hard to maintain long term. There are ways around that, I
believe, though.

The simplest one that I can think of right now would be using an
in_atomic() rather than an in_task() check in free_huge_page. IIRC
recent changes would allow in_atomic to be reliable also on !PREEMPT
kernels (via the RCU tree; not sure where this stands right now). That
would make __free_huge_page always run in a non-atomic context, which
sounds like an easy enough solution.

Another way would be to keep a pool of ready pages to use in case a
GFP_NOWAIT allocation fails, and have a means to keep that pool
replenished when needed. Would it be feasible to reuse parts of the
freed page in the worst case?

-- 
Michal Hocko
SUSE Labs
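[Editorial illustration] The first idea Michal describes — check the context in free_huge_page and defer the work that may sleep when the caller is atomic — can be modeled in a few lines of standalone C. This is a hypothetical userspace sketch, not the actual kernel code: `struct hpage`, the `atomic_context` flag standing in for `in_atomic()`, and `drain_deferred()` standing in for a workqueue handler are all invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: hugetlb page with a flag marking completed free. */
struct hpage {
    struct hpage *next;
    bool freed;
};

static struct hpage *deferred_list;   /* pages queued from atomic context */
static bool atomic_context;           /* stand-in for the real in_atomic() */

static bool in_atomic(void) { return atomic_context; }

/* The part that may sleep (vmemmap allocation in the real discussion). */
static void __free_huge_page(struct hpage *p)
{
    p->freed = true;
}

static void free_huge_page(struct hpage *p)
{
    if (in_atomic()) {
        /* Cheap, non-sleeping enqueue; real work happens later. */
        p->next = deferred_list;
        deferred_list = p;
        return;
    }
    __free_huge_page(p);              /* safe to sleep here */
}

/* Drained from process context, like a workqueue callback would be. */
static void drain_deferred(void)
{
    while (deferred_list) {
        struct hpage *p = deferred_list;
        deferred_list = p->next;
        __free_huge_page(p);
    }
}
```

The point of the sketch is only the shape of the control flow: the atomic caller never reaches the sleeping path, so __free_huge_page always runs in a non-atomic context.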
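[Editorial illustration] The second idea — a reserve pool consulted when a non-sleeping allocation fails, replenished from a context that may sleep — can likewise be sketched in userspace. Again a hypothetical model: `alloc_nowait()`, the `nowait_fails` switch simulating GFP_NOWAIT failure, and `replenish_pool()` are invented names, with plain malloc standing in for the page allocator.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define POOL_MIN 2                    /* target size of the emergency pool */

static void *pool[8];
static int pool_count;
static bool nowait_fails;             /* simulate GFP_NOWAIT failing */

/* Stand-in for a non-sleeping (GFP_NOWAIT-like) allocation attempt. */
static void *alloc_nowait(size_t sz)
{
    return nowait_fails ? NULL : malloc(sz);
}

static void *alloc_vmemmap(size_t sz)
{
    void *p = alloc_nowait(sz);
    if (p)
        return p;
    if (pool_count > 0)
        return pool[--pool_count];    /* fall back on the reserve */
    return NULL;                      /* worst case: nothing available */
}

/* Called from a context that may sleep, to top the reserve back up. */
static void replenish_pool(size_t sz)
{
    while (pool_count < POOL_MIN) {
        void *p = malloc(sz);         /* may sleep in the real kernel */
        if (!p)
            break;
        pool[pool_count++] = p;
    }
}
```

The design trade-off Michal hints at is visible here: the atomic path never sleeps, but correctness depends on some other context calling replenish_pool often enough that the reserve does not run dry.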