From mboxrd@z Thu Jan 1 00:00:00 1970
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, almasrymina@google.com, rientjes@google.com, willy@infradead.org, osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v7 10/15] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
Date: Mon, 30 Nov 2020 23:18:33 +0800
Message-Id: <20201130151838.11208-11-songmuchun@bytedance.com>
In-Reply-To: <20201130151838.11208-1-songmuchun@bytedance.com>
References: <20201130151838.11208-1-songmuchun@bytedance.com>
MIME-Version: 1.0

When we free a HugeTLB page to the buddy allocator, we must reallocate
the vmemmap pages associated with it. We can do that in
__free_hugepage().
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++---
 mm/hugetlb_vmemmap.h |  5 +++
 3 files changed, 92 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5131ae3d2245..ebe35532d432 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1381,6 +1381,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index af42fad1f131..a3714db7f400 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -95,6 +95,7 @@
 #define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
 
 #include 
+#include <linux/delay.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -108,6 +109,8 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define VMEMMAP_TAIL_PAGE_REUSE		-1
+#define GFP_VMEMMAP_PAGE		\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH)
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
@@ -124,6 +127,11 @@
	 (__boundary - 1 < (end) - 1) ? __boundary : (end);	\
 })
 
+typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
+					 unsigned long start, unsigned long end,
+					 void *priv);
+
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -163,9 +171,40 @@ static pmd_t *vmemmap_to_pmd(unsigned long addr)
 	return pmd;
 }
 
+static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
+				      unsigned long start, unsigned long end,
+				      void *priv)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+	struct list_head *pages = priv;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+
+		VM_BUG_ON(pte_none(*pte) || pte_page(*pte) != reuse);
+
+		page = list_first_entry(pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data written to @to is visible in
+		 * the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+		set_pte_at(&init_mm, addr, pte++, mk_pte(page, pgprot));
+	}
+}
+
 static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 				    unsigned long start, unsigned long end,
-				    struct list_head *vmemmap_pages)
+				    void *priv)
 {
 	/*
 	 * Make the tail pages are mapped with read-only to catch
@@ -174,6 +213,7 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 	pgprot_t pgprot = PAGE_KERNEL_RO;
 	pte_t entry = mk_pte(reuse, pgprot);
 	unsigned long addr;
+	struct list_head *pages = priv;
 
 	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
 		struct page *page;
@@ -181,14 +221,14 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 		VM_BUG_ON(pte_none(*pte));
 
 		page = pte_page(*pte);
-		list_add(&page->lru, vmemmap_pages);
+		list_add(&page->lru, pages);
 
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 }
 
 static void vmemmap_remap_range(unsigned long start, unsigned long end,
-				struct list_head *vmemmap_pages)
+				vmemmap_remap_pte_func_t func, void *priv)
 {
 	pmd_t *pmd;
 	unsigned long next, addr = start;
@@ -208,12 +248,52 @@ static void vmemmap_remap_range(unsigned long start, unsigned long end,
 		reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
 
 		next = vmemmap_hpage_addr_end(addr, end);
-		vmemmap_reuse_pte_range(reuse, pte, addr, next, vmemmap_pages);
+		func(reuse, pte, addr, next, priv);
 	} while (pmd++, addr = next, addr != end);
 
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	unsigned int nr = free_vmemmap_pages_per_hpage(h);
+
+	while (nr--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry infinitely, because we cannot
+			 * handle allocation failures. Once we allocate
+			 * vmemmap pages successfully, then we can free
+			 * a HugeTLB page.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(vmemmap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &vmemmap_pages);
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	vmemmap_remap_range(start, end, vmemmap_restore_pte_range,
+			    &vmemmap_pages);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
@@ -235,7 +315,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
 	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
-	vmemmap_remap_range(start, end, &vmemmap_pages);
+	vmemmap_remap_range(start, end, vmemmap_reuse_pte_range, &vmemmap_pages);
 
 	free_vmemmap_page_list(&vmemmap_pages);
 }
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 293897b9f1d8..7887095488f4 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,6 +12,7 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
@@ -23,6 +24,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
 
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0