From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
	willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
	song.bao.hua@hisilicon.com, david@redhat.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page
Date: Sun, 13 Dec 2020 23:45:23 +0800
Message-Id: <20201213154534.54826-1-songmuchun@bytedance.com>

Hi all,

This patch series frees some vmemmap pages (the pages backing struct page
structures) associated with each HugeTLB page when it is preallocated, in
order to save memory.

To reduce the difficulty of reviewing the first version of the code, from
this version on we disable the PMD/huge page mapping of vmemmap whenever
this feature is enabled. This actually eliminates a bunch of complex page
table manipulation code. Once this patch series is solid, we can add the
vmemmap page table manipulation code back in a follow-up.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base page size pages and are supported
by many architectures. See hugetlbpage.rst in the Documentation directory
for more details. On the x86 architecture, HugeTLB pages of size 2MB and
1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
262144 base pages. For each base page, there is a corresponding page
struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for
all tail pages.

By removing redundant page structs for HugeTLB pages, memory can be
returned to the buddy allocator for other uses.
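To see why the tail page structs are redundant, recall how the kernel
resolves a tail page to its head. A minimal sketch of the compound_head()
logic in current kernels (simplified here, and renamed to avoid implying
it is the exact code in this tree):

    #include <linux/mm.h>

    /*
     * Simplified sketch of compound_head(): a tail page stores a pointer
     * to its head page in page->compound_head with bit 0 set as a tail
     * marker.  Every tail page of a given HugeTLB page stores the same
     * value, which is why the page structs backing them carry no unique
     * information.
     */
    static inline struct page *sketch_compound_head(struct page *page)
    {
    	unsigned long head = READ_ONCE(page->compound_head);

    	if (unlikely(head & 1))		/* tail page: bit 0 marks it */
    		return (struct page *)(head - 1);
    	return page;			/* head (or non-compound) page */
    }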
When the system boots up, every 2MB HugeTLB page has 512 struct page
structs, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).

    HugeTLB                  struct pages(8 pages)        page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the
4 page structs necessary to describe the HugeTLB. The only use of the
remaining pages of page structs (page 1 to page 7) is to point to
page->compound_head. Therefore, we can remap pages 2 to 7 to page 1. Only 2
pages of page structs will be used for each HugeTLB page. This will allow
us to free the remaining 6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)        page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed to the buddy system, we should allocate 6
pages for vmemmap pages and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
similar to the 2MB HugeTLB page, and we can use the same approach to free
its vmemmap pages. In this case, for the 1GB HugeTLB page, we can save
4088 pages (there are 4096 pages of struct page structs; we reserve 2
pages for vmemmap and 8 pages for page tables, so we can save 4088 pages).
This is a very substantial gain. On our servers, some SPDK/QEMU
applications use 1024GB of HugeTLB pages. With this feature enabled, we
can save ~16GB (1GB hugepages) or ~11GB (2MB hugepages; the worst case is
10GB while the best is 12GB) of memory.
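Before the overhead numbers below, here is a much-simplified sketch of
what the remapping involves. This is illustrative only, not the code in
this series: the helper name is hypothetical, and TLB flushing, PMD
splitting, and the allocation path that restores the mapping are all
omitted.

    #include <linux/mm.h>
    #include <linux/pgtable.h>

    /*
     * Illustrative sketch: remap the kernel vmemmap range [start, end)
     * so that every PTE refers to the single 'reuse' page frame
     * (read-only), then return the now-unused old frames to buddy.
     */
    static void sketch_remap_vmemmap(unsigned long start, unsigned long end,
    				     struct page *reuse)
    {
    	unsigned long addr;

    	for (addr = start; addr < end; addr += PAGE_SIZE) {
    		pte_t *ptep = pte_offset_kernel(pmd_off_k(addr), addr);
    		struct page *old = pte_page(*ptep);

    		/* Point this piece of vmemmap at the shared frame, RO. */
    		set_pte_at(&init_mm, addr, ptep,
    			   mk_pte(reuse, PAGE_KERNEL_RO));
    		__free_page(old);	/* old frame back to the buddy */
    	}
    }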
Because of the vmemmap page table reconstruction on the freeing/allocating
paths, this adds some overhead. Here is some overhead analysis.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:

   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          1868 |@@@@@@@@@@@                                         |
   [32K, 64K)            10 |                                                    |
   [64K, 128K)            2 |                                                    |

   b) Without this patch series:

   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.066s
   user     0m0.000s
   sys      0m0.066s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             62 |                                                    |
   [16K, 32K)             2 |                                                    |

   Summary: allocation with this feature is about ~2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:

   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.004s
   user     0m0.000s
   sys      0m0.002s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

   b) Without this patch series:

   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.077s
   user     0m0.001s
   sys      0m0.075s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)            287 |@                                                   |
   [16K, 32K)             3 |                                                    |

   Summary: __free_hugepage() is about ~2-4x slower than before, though
   judging from the allocation test above it is probably also roughly ~2x
   slower. So why is the 'real' time with the patches smaller than before?
   Because in this patch series the freeing of a hugetlb page is
   asynchronous (done through a kworker), so the caller returns before the
   vmemmap pages have been reconstructed.

Although the overhead has increased, it is not significant. As Mike said,
"However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional
overhead is at pool creation time. There is no change to 'normal run time'
operations of getting a page from or returning a page to the pool (think
page fault/unmap)".
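For readers curious how the asynchronous freeing mentioned above can work,
here is a minimal sketch of the deferral pattern (hypothetical names;
patch 4, "mm/hugetlb: Defer freeing of HugeTLB pages", contains the real
implementation):

    #include <linux/llist.h>
    #include <linux/workqueue.h>
    #include <linux/mm.h>

    extern void __free_hugepage(struct page *page);	/* from this series */

    static LLIST_HEAD(hpage_freelist);

    /* Worker context: safe to allocate vmemmap pages back, then free. */
    static void free_hpage_workfn(struct work_struct *work)
    {
    	struct llist_node *node = llist_del_all(&hpage_freelist);

    	while (node) {
    		struct page *page = container_of((struct address_space **)node,
    						 struct page, mapping);

    		node = node->next;
    		page->mapping = NULL;
    		__free_hugepage(page);	/* restores vmemmap, frees to buddy */
    	}
    }
    static DECLARE_WORK(free_hpage_work, free_hpage_workfn);

    /* Called from contexts that must not block on vmemmap allocation. */
    static void defer_free_huge_page(struct page *page)
    {
    	/* page->mapping is unused at free time; borrow it as a list link. */
    	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
    		schedule_work(&free_hpage_work);
    }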
Todo:
  - Free all of the tail vmemmap pages
    Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages, but
    we really can free 7 vmemmap pages. In that case, we can see that 8 of
    the 512 struct page structures have the PG_head flag set. We could do
    this if we adjust compound_head() slightly so that it returns the real
    head struct page when the parameter is a tail struct page that
    nevertheless has the PG_head flag set.

    In order to make the code evolution route clearer, this can be a
    separate patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even when this feature is
    enabled.

Changelog in v8 -> v9:
  - Rework some code. Many thanks to Oscar.
  - Put all the non-hugetlb vmemmap functions under sparse-vmemmap.c.

Changelog in v7 -> v8:
  - Adjust the order of patches. Many thanks to David and Oscar. Your
    suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130.
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patches:
      [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
      [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
    Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature is enabled.
  - Simplify the first version of the code.

Changelog in v4 -> v5:
  - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].
    Thanks to Mike's and Oscar's suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
    disable this feature, we should disable it via a boot/kernel command
    line.
  - Remove the vmemmap_pgtable_{init, deposit, withdraw}() helper
    functions.
  - Initialize the page table lock for vmemmap through the core_initcall
    mechanism.
    Thanks for Mike's and Oscar's suggestions.

Changelog in v2 -> v3:
  - Rename some helper functions. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap pages with PAGE_KERNEL_RO instead of
    PAGE_KERNEL. Thanks Matthew.
  - Add some overhead analysis to the cover letter.
  - Use the vmemmap PMD table lock instead of a hugetlb-specific global
    lock.

Changelog in v1 -> v2:
  - Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  - Fix some typos and code style problems.
  - Remove the unused handle_vmemmap_fault().
  - Merge some commits into one commit, as suggested by Mike.

Muchun Song (11):
  mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  mm/hugetlb: Defer freeing of HugeTLB pages
  mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB
    page
  mm/hugetlb: Set the PageHWPoison to the raw error page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Optimize the code with the help of the compiler

 Documentation/admin-guide/kernel-parameters.txt |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |  15 ++
 include/linux/bootmem_info.h                    |  65 ++++++
 include/linux/hugetlb.h                         |  36 ++++
 include/linux/hugetlb_cgroup.h                  |  15 +-
 include/linux/memory_hotplug.h                  |  27 ---
 include/linux/mm.h                              |   3 +
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 +++++++++++
 mm/hugetlb.c                                    | 161 ++++++++++++--
 mm/hugetlb_vmemmap.c                            | 268 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  45 ++++
 mm/memory_hotplug.c                             | 116 ----------
 mm/sparse-vmemmap.c                             | 237 +++++++++++++++++++++
 mm/sparse.c                                     |   1 +
 17 files changed, 966 insertions(+), 174 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0