From: Muchun Song
Date: Mon, 25 Jan 2021 11:58:27 +0800
Subject: Re: [External] Re: [PATCH v13 04/12] mm: hugetlb: defer freeing of HugeTLB pages
To: David Rientjes
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, Matthew Wilcox, Oscar Salvador, Michal Hocko, "Song Bao Hua (Barry Song)", David Hildenbrand, HORIGUCHI NAOYA(堀口 直也), Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel
In-Reply-To: <59d18082-248a-7014-b917-625d759c572@google.com>
References: <20210117151053.24600-1-songmuchun@bytedance.com> <20210117151053.24600-5-songmuchun@bytedance.com> <59d18082-248a-7014-b917-625d759c572@google.com>

On Mon, Jan 25, 2021 at 7:55 AM David Rientjes wrote:
>
> On Sun, 17 Jan 2021, Muchun Song wrote:
>
> > In the subsequent patch, we should allocate the vmemmap pages when
> > freeing HugeTLB pages. But update_and_free_page() is always called
> > with holding hugetlb_lock, so we cannot use GFP_KERNEL to allocate
> > vmemmap pages. However, we can defer the actual freeing in a kworker
> > to prevent from using GFP_ATOMIC to allocate the vmemmap pages.
> >
> > The update_hpage_vmemmap_workfn() is where the call to allocate
> > vmemmmap pages will be inserted.
> >
>
> I think it's reasonable to assume that userspace can release free hugetlb
> pages from the pool on oom conditions when reclaim has become too
> expensive.  This approach now requires that we can allocate vmemmap pages
> in a potential oom condition as a prerequisite for freeing memory, which
> seems less than ideal.
>
> And, by doing this through a kworker, we can presumably get queued behind
> another work item that requires memory to make forward progress in this
> oom condition.
>
> Two thoughts:
>
> - We're going to be freeing the hugetlb page after we can allocate the
>   vmemmap pages, so why do we need to allocate with GFP_KERNEL?  Can't we
>   simply dip into memory reserves using GFP_ATOMIC (and thus can be
>   holding hugetlb_lock) because we know we'll be freeing more memory than
>   we'll be allocating?

Right.

>   I think requiring a GFP_KERNEL allocation to block
>   to free memory for vmemmap when we'll be freeing memory ourselves is
>   dubious.  This simplifies all of this.

Thanks for your thoughts. I just thought that we can go into reclaim
when there is no memory in the system, but we cannot block when using
GFP_KERNEL. Actually, we cannot deal with the allocation failing at all.
In the next patch, when the allocation fails, I sleep for 100ms and then
retry it.
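Roughly like the following (a rough sketch only, not the actual code from
the next patch; alloc_huge_page_vmemmap() is a placeholder name here, and
I assume it returns 0 on success and nonzero on failure):

/*
 * Sketch only: retry the vmemmap allocation with a short sleep between
 * attempts.  This runs from the workqueue, outside hugetlb_lock, so
 * sleeping is allowed.  Needs <linux/delay.h> for msleep().
 */
static void hpage_alloc_vmemmap_retry(struct hstate *h, struct page *page)
{
	while (alloc_huge_page_vmemmap(h, page)) {
		/* Back off for 100ms before trying the allocation again. */
		msleep(100);
	}
}

Of course this cannot guarantee forward progress when the system is
completely out of memory; it only keeps the free path from failing
outright.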
>
> - If the answer is that we actually have to use GFP_KERNEL for other
>   reasons, what are your thoughts on pre-allocating the vmemmap as opposed
>   to deferring to a kworker?  In other words, preallocate the necessary
>   memory with GFP_KERNEL and put it on a linked list in struct hstate
>   before acquiring hugetlb_lock.

put_page() can be used in an atomic context, so actually we cannot sleep
in __free_huge_page(). It seems a little tricky. Right?

> > Signed-off-by: Muchun Song
> > Reviewed-by: Mike Kravetz
> > ---
> >  mm/hugetlb.c         | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++--
> >  mm/hugetlb_vmemmap.c | 12 ---------
> >  mm/hugetlb_vmemmap.h | 17 ++++++++++++
> >  3 files changed, 89 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 140135fc8113..c165186ec2cf 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1292,15 +1292,85 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >  						unsigned int order) { }
> >  #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static void __free_hugepage(struct hstate *h, struct page *page);
> > +
> > +/*
> > + * As update_and_free_page() is always called with holding hugetlb_lock, so we
> > + * cannot use GFP_KERNEL to allocate vmemmap pages. However, we can defer the
> > + * actual freeing in a workqueue to prevent from using GFP_ATOMIC to allocate
> > + * the vmemmap pages.
> > + *
> > + * The update_hpage_vmemmap_workfn() is where the call to allocate vmemmmap
> > + * pages will be inserted.
> > + *
> > + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of pages
> > + * to be freed and frees them one-by-one. As the page->mapping pointer is going
> > + * to be cleared in update_hpage_vmemmap_workfn() anyway, it is reused as the
> > + * llist_node structure of a lockless linked list of huge pages to be freed.
> > + */
> > +static LLIST_HEAD(hpage_update_freelist);
> > +
> > +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> >  {
> > -	int i;
> > +	struct llist_node *node;
> > +
> > +	node = llist_del_all(&hpage_update_freelist);
> > +
> > +	while (node) {
> > +		struct page *page;
> > +		struct hstate *h;
> > +
> > +		page = container_of((struct address_space **)node,
> > +				    struct page, mapping);
> > +		node = node->next;
> > +		page->mapping = NULL;
> > +		h = page_hstate(page);
> > +
> > +		spin_lock(&hugetlb_lock);
> > +		__free_hugepage(h, page);
> > +		spin_unlock(&hugetlb_lock);
> >
> > +		cond_resched();
>
> Wouldn't it be better to hold hugetlb_lock for the iteration rather than
> constantly dropping it and reacquiring it?  Use
> cond_resched_lock(&hugetlb_lock) instead?

Great. We can use it. Thanks.
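For my own understanding, the reworked loop would look roughly like this
(an untested sketch of the code quoted above with your suggestion
applied, not the final patch):

static void update_hpage_vmemmap_workfn(struct work_struct *work)
{
	struct llist_node *node;

	node = llist_del_all(&hpage_update_freelist);

	spin_lock(&hugetlb_lock);
	while (node) {
		struct page *page;
		struct hstate *h;

		page = container_of((struct address_space **)node,
				    struct page, mapping);
		node = node->next;
		page->mapping = NULL;
		h = page_hstate(page);

		__free_hugepage(h, page);

		/*
		 * Drop hugetlb_lock and reschedule only when the lock is
		 * contended or a reschedule is actually needed, instead of
		 * unconditionally unlocking/relocking on every iteration.
		 */
		cond_resched_lock(&hugetlb_lock);
	}
	spin_unlock(&hugetlb_lock);
}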