From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
	willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
	song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v12 06/13] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
Date: Wed, 6 Jan 2021 22:19:24 +0800
Message-Id: <20210106141931.73931-7-songmuchun@bytedance.com>
In-Reply-To: <20210106141931.73931-1-songmuchun@bytedance.com>
References: <20210106141931.73931-1-songmuchun@bytedance.com>

When we free a HugeTLB page to the buddy allocator, we should allocate the
vmemmap pages associated with it. We can do that in __free_hugepage()
before the page is handed back to the buddy allocator.
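For reference, a minimal userspace sketch of the address arithmetic that the
new alloc_huge_page_vmemmap() performs. The numbers assume x86-64 with 4 KiB
base pages, a 64-byte struct page, and a 2 MiB HugeTLB page; the constants
below are illustrative stand-ins, not the kernel definitions, and
RESERVE_VMEMMAP_SIZE stands in for the two vmemmap pages kept back earlier
in this series:

  #include <stdio.h>

  #define PAGE_SIZE		4096UL
  #define STRUCT_PAGE_SIZE	64UL	/* assumed sizeof(struct page) */
  #define PAGES_PER_HPAGE	512UL	/* 2 MiB / 4 KiB */
  #define RESERVE_VMEMMAP_SIZE	(2UL * PAGE_SIZE)

  int main(void)
  {
  	/* 512 struct pages * 64 bytes = 32 KiB = 8 vmemmap pages. */
  	unsigned long vmemmap_size = PAGES_PER_HPAGE * STRUCT_PAGE_SIZE;
  	unsigned long head = 0xffffea0000000000UL; /* made-up vmemmap address */

  	/* Mirror alloc_huge_page_vmemmap(): skip the reserved pages,
  	 * restore the rest, and copy from the page just below the
  	 * restored range (the one that was kept and reused). */
  	unsigned long vmemmap_addr  = head + RESERVE_VMEMMAP_SIZE;
  	unsigned long vmemmap_end   = vmemmap_addr +
  				      (vmemmap_size - RESERVE_VMEMMAP_SIZE);
  	unsigned long vmemmap_reuse = vmemmap_addr - PAGE_SIZE;

  	printf("restore [%#lx, %#lx): %lu pages, copy from %#lx\n",
  	       vmemmap_addr, vmemmap_end,
  	       (vmemmap_end - vmemmap_addr) / PAGE_SIZE, vmemmap_reuse);
  	return 0;
  }

Under these assumptions this prints 6 pages to restore, matching the vmemmap
pages that free_huge_page_vmemmap() released when the page became a HugeTLB
page.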
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h   |  2 ++
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 15 +++++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 76 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 99 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f928994ed273..16b55d13b0ab 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3007,6 +3007,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 
 void vmemmap_remap_free(unsigned long start, unsigned long end,
 			unsigned long reuse);
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			 unsigned long reuse);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c165186ec2cf..d11c32fcdb38 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,8 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 	page->mapping = NULL;
 	h = page_hstate(page);
 
+	alloc_huge_page_vmemmap(h, page);
+
 	spin_lock(&hugetlb_lock);
 	__free_hugepage(h, page);
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 19f1898aaede..6108ae80314f 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -183,6 +183,21 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+	vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 0e9c49a028b4..ed4702d5d664 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include <linux/sched.h>
 #include <linux/pgtable.h>
 #include <linux/bootmem_info.h>
+#include <linux/delay.h>
 
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
@@ -40,7 +41,8 @@
  * @remap_pte:		called for each non-empty PTE (lowest-level) entry.
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			or is mapped from.
  */
 struct vmemmap_remap_walk {
 	void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -50,6 +52,10 @@ struct vmemmap_remap_walk {
 	struct list_head *vmemmap_pages;
 };
 
+/* The gfp mask of allocating vmemmap page */
+#define GFP_VMEMMAP_PAGE	\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN | __GFP_THISNODE)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end,
 			      struct vmemmap_remap_walk *walk)
@@ -212,6 +218,74 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+				    unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		struct page *page;
+		int nid = page_to_nid((const void *)addr);
+
+retry:
+		page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry infinitely, because we cannot
+			 * handle allocation failures. Once we allocate
+			 * vmemmap pages successfully, then we can free
+			 * a HugeTLB page.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
+ *			 to the page which is from the @vmemmap_pages
+ *			 respectively.
+ * @start:	start address of the vmemmap virtual address range.
+ * @end:	end address of the vmemmap virtual address range.
+ * @reuse:	reuse address.
+ */
+void vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			 unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	might_sleep();
+
+	BUG_ON(start != reuse + PAGE_SIZE);
+
+	alloc_vmemmap_page_list(&vmemmap_pages, start, end);
+	vmemmap_remap_range(reuse, end, &walk);
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
-- 
2.11.0
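As a side note for readers new to the remap trick, below is a rough
userspace model of the restore path (alloc_vmemmap_page_list() plus
vmemmap_restore_pte()). The names are analogies only; this is a sketch of
the idea, not kernel code:

  #include <stdlib.h>
  #include <string.h>

  #define PAGE_SIZE	4096
  #define NR_RESTORE	6	/* tail vmemmap pages to re-back, assumed */

  int main(void)
  {
  	static char reuse[PAGE_SIZE];	/* plays walk->reuse_addr, the shared page */
  	char *pte[NR_RESTORE];		/* plays the tail vmemmap PTEs */
  	int i;

  	/* Freed state: every tail slot aliases the single reuse page. */
  	for (i = 0; i < NR_RESTORE; i++)
  		pte[i] = reuse;

  	/* Restore: give each slot its own backing again, preserving the
  	 * contents it currently reads through the alias. */
  	for (i = 0; i < NR_RESTORE; i++) {
  		char *page = malloc(PAGE_SIZE);	/* alloc_vmemmap_page_list() */
  		if (!page)
  			return 1;	/* the kernel retries instead of failing */
  		memcpy(page, reuse, PAGE_SIZE);	/* copy_page() */
  		pte[i] = page;			/* set_pte_at() */
  	}
  	return 0;
  }

Unlike the malloc() branch above, the kernel side cannot simply fail here,
which is why alloc_vmemmap_page_list() loops on msleep(100) until the
allocation succeeds.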