From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
	willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
	song.bao.hua@hisilicon.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v7 09/15] mm/hugetlb: Defer freeing of HugeTLB pages
Date: Mon, 30 Nov 2020 23:18:32 +0800
Message-Id: <20201130151838.11208-10-songmuchun@bytedance.com>
In-Reply-To: <20201130151838.11208-1-songmuchun@bytedance.com>
References: <20201130151838.11208-1-songmuchun@bytedance.com>

In a subsequent patch, we will allocate the vmemmap pages when freeing
HugeTLB pages. But update_and_free_page() can be called from a non-task
context (and with hugetlb_lock held), so we defer the actual freeing to
a workqueue to avoid having to allocate the vmemmap pages with
GFP_ATOMIC.
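The deferral follows the usual lockless-list-plus-workqueue shape: the
caller in atomic context only pushes the page onto an llist and kicks a
work item; the heavy lifting then happens in task context, where
sleeping allocations are allowed. A minimal, self-contained sketch of
that pattern (the names my_object, deferred_free_list, deferred_free_work
and defer_free are hypothetical; llist_add(), llist_del_all(),
DECLARE_WORK() and schedule_work() are the real kernel APIs this patch
uses):

/*
 * Editor's sketch, not part of the patch: the generic shape of the
 * atomic-context-to-workqueue deferral.
 */
#include <linux/llist.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_object {
	struct llist_node node;
	/* ... payload ... */
};

static LLIST_HEAD(deferred_free_list);

static void deferred_free_workfn(struct work_struct *work)
{
	struct llist_node *node = llist_del_all(&deferred_free_list);

	while (node) {
		struct my_object *obj = llist_entry(node, struct my_object,
						     node);

		/* Read ->next before freeing: the node lives inside obj. */
		node = node->next;
		kfree(obj);
		/* Task context: GFP_KERNEL allocations would be legal here. */
	}
}
static DECLARE_WORK(deferred_free_work, deferred_free_workfn);

/* Safe from atomic context: a lockless push plus schedule_work(). */
static void defer_free(struct my_object *obj)
{
	/* llist_add() returns true iff the list was previously empty. */
	if (llist_add(&obj->node, &deferred_free_list))
		schedule_work(&deferred_free_work);
}

The llist_add() return value is what lets the patch below call
schedule_work() only once per batch of freed pages.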
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 96 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.c |  5 ---
 mm/hugetlb_vmemmap.h | 10 ++++++
 3 files changed, 95 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 93dee37ceb6d..5131ae3d2245 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1220,7 +1220,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
-static void free_gigantic_page(struct page *page, unsigned int order)
+static void __free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
@@ -1287,20 +1287,100 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void __free_gigantic_page(struct page *page,
+					unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						  unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() can be called from a non-task context (and
+ * with hugetlb_lock held), we defer the actual freeing to a workqueue to
+ * avoid using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				    struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
 
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() has already been called but the
+	 * workfn hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	__free_gigantic_page(page, huge_page_order(h));
+}
+#else
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in __free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	__free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
+#endif
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1312,14 +1392,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 2c997b5de3b6..af42fad1f131 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -124,11 +124,6 @@
 	 (__boundary - 1 < (end) - 1) ? __boundary : (end);	\
 })
 
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return h->nr_free_vmemmap_pages;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 67113b67495f..293897b9f1d8 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,11 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -21,5 +26,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0
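A note on the page->mapping reuse in the hunk above: struct page has no
spare llist_node, but page->mapping is about to be cleared anyway, and a
pointer-sized field is exactly as large as a struct llist_node (a single
next pointer), so its address can stand in as the list node and
container_of() recovers the enclosing page afterwards. A userspace
analogue of that round trip (editor's illustration only, not kernel
code; fake_page is a hypothetical stand-in for struct page):

#include <stddef.h>
#include <stdio.h>

/* Same pointer arithmetic as the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in for struct page; "mapping" mimics the reused pointer field. */
struct fake_page {
	unsigned long flags;
	void *mapping;
};

int main(void)
{
	struct fake_page pg = { 0 };

	/* "Push": the address of the mapping field acts as the list node. */
	void **node = &pg.mapping;

	/* "Pop": recover the enclosing page from the node address. */
	struct fake_page *back = container_of(node, struct fake_page,
					      mapping);

	printf("recovered the original page: %s\n",
	       back == &pg ? "yes" : "no");
	return 0;
}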