From: Muchun Song <songmuchun@bytedance.com>
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Jonathan Corbet <corbet@lwn.net>,
	Thomas Gleixner <tglx@linutronix.de>,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	viro@zeniv.linux.org.uk,
	Andrew Morton <akpm@linux-foundation.org>,
	paulmck@kernel.org, mchehab+huawei@kernel.org,
	pawan.kumar.gupta@linux.intel.com,
	Randy Dunlap <rdunlap@infradead.org>,
	oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de,
	Mina Almasry <almasrymina@google.com>,
	David Rientjes <rientjes@google.com>,
	Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Michal Hocko <mhocko@suse.com>,
	"Song Bao Hua (Barry Song)" <song.bao.hua@hisilicon.com>,
	David Hildenbrand <david@redhat.com>,
	Xiongchun duan <duanxiongchun@bytedance.com>,
	linux-doc@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	Linux Memory Management List <linux-mm@kvack.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>
Subject: Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
Date: Thu, 17 Dec 2020 12:06:09 +0800	[thread overview]
Message-ID: <CAMZfGtXmQrdLeMs4dz60aPuLzNogEdN35EAeLRT-26gZtW64vA@mail.gmail.com> (raw)
In-Reply-To: <5936a766-505a-eab0-42a6-59aab2585880@oracle.com>

On Thu, Dec 17, 2020 at 6:08 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/20 7:45 AM, Muchun Song wrote:
> > Every HugeTLB page has more than one struct page structure. We __know__
> > that we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page
> > structures to store metadata associated with each HugeTLB page.
> >
> > There are many struct page structures associated with each HugeTLB
> > page. For tail pages, the value of compound_head is the same, so we
> > can reuse the first page of the tail page structures. We remap the
> > virtual addresses of the remaining pages of tail page structures to
> > that first tail page struct, and then free those page frames.
> > Therefore, we need to reserve two pages as vmemmap areas.
> >
> > When we allocate a HugeTLB page from the buddy allocator, we can free
> > some of the vmemmap pages associated with it. The most appropriate
> > place to do this is prep_new_huge_page().
> >
> > free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> > pages associated with a HugeTLB page can be freed, returns zero for
> > now, which means the feature is disabled. We will enable it once all
> > the infrastructure is in place.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  include/linux/bootmem_info.h |  27 +++++-
> >  include/linux/mm.h           |   2 +
> >  mm/Makefile                  |   1 +
> >  mm/hugetlb.c                 |   3 +
> >  mm/hugetlb_vmemmap.c         | 209 +++++++++++++++++++++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h         |  20 +++++
> >  mm/sparse-vmemmap.c          | 170 +++++++++++++++++++++++++++++++++++
> >  7 files changed, 431 insertions(+), 1 deletion(-)
> >  create mode 100644 mm/hugetlb_vmemmap.c
> >  create mode 100644 mm/hugetlb_vmemmap.h
>
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index 16183d85a7d5..78c527617e8d 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -27,8 +27,178 @@
> >  #include <linux/spinlock.h>
> >  #include <linux/vmalloc.h>
> >  #include <linux/sched.h>
> > +#include <linux/pgtable.h>
> > +#include <linux/bootmem_info.h>
> > +
> >  #include <asm/dma.h>
> >  #include <asm/pgalloc.h>
> > +#include <asm/tlbflush.h>
> > +
> > +/*
> > + * vmemmap_rmap_walk - walk vmemmap page table
>
> I am not sure if 'rmap' should be part of these names.  rmap today is mostly
> about reverse mapping lookup.  Did you use rmap for 'remap', or because this
> code is patterned after the page table walking rmap code?  Just think the
> naming could cause some confusion.

Yeah. I should use "remap" to avoid confusion.
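
Roughly like this (just an untested sketch of the renaming, based on
the struct in the patch above):

	struct vmemmap_remap_walk {
		void (*remap_pte)(pte_t *pte, unsigned long addr,
				  struct vmemmap_remap_walk *walk);
		struct page *reuse;
		struct list_head *vmemmap_pages;
	};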

>
> > + *
> > + * @rmap_pte:                called for each non-empty PTE (lowest-level) entry.
> > + * @reuse:           the page which is reused for the tail vmemmap pages.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + */
> > +struct vmemmap_rmap_walk {
> > +     void (*rmap_pte)(pte_t *pte, unsigned long addr,
> > +                      struct vmemmap_rmap_walk *walk);
> > +     struct page *reuse;
> > +     struct list_head *vmemmap_pages;
> > +};
> > +
> > +/*
> > + * The index of the pte page table which is mapped to the tail of the
> > + * vmemmap page.
> > + */
> > +#define VMEMMAP_TAIL_PAGE_REUSE              -1
>
> That is the index/offset from the range to be remapped.  See comments below.

You are right. I need to update the comment.
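
Perhaps something along these lines (exact wording still to be
polished):

	/*
	 * The pte index relative to the start of the range to be remapped:
	 * the page that will be reused is mapped by the pte just before
	 * the range, i.e. at offset -1.
	 */
	#define VMEMMAP_TAIL_PAGE_REUSE	-1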

>
> > +
> > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pte_t *pte;
> > +
> > +     pte = pte_offset_kernel(pmd, addr);
> > +     do {
> > +             BUG_ON(pte_none(*pte));
> > +
> > +             if (!walk->reuse)
> > +                     walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
>
> It may be just me, but I don't like the pte[-1] here.  It certainly does work
> as designed because we want to remap all pages in the range to the page before
> the range (at offset -1).  But, we do not really validate this 'reuse' page.
> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
> possible use for these routines.

Yeah, we should add a BUG_ON for pte[-1].
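
Something like this (untested sketch, reusing the names from the
patch above):

	if (!walk->reuse) {
		BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
		walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
	}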

>
> Don't change anything based on my opinion only.  I would like to see what
> others think as well.
>
> > +
> > +             if (walk->rmap_pte)
> > +                     walk->rmap_pte(pte, addr, walk);
> > +     } while (pte++, addr += PAGE_SIZE, addr != end);
> > +}
> > +
> > +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pmd_t *pmd;
> > +     unsigned long next;
> > +
> > +     pmd = pmd_offset(pud, addr);
> > +     do {
> > +             BUG_ON(pmd_none(*pmd));
> > +
> > +             next = pmd_addr_end(addr, end);
> > +             vmemmap_pte_range(pmd, addr, next, walk);
> > +     } while (pmd++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pud_t *pud;
> > +     unsigned long next;
> > +
> > +     pud = pud_offset(p4d, addr);
> > +     do {
> > +             BUG_ON(pud_none(*pud));
> > +
> > +             next = pud_addr_end(addr, end);
> > +             vmemmap_pmd_range(pud, addr, next, walk);
> > +     } while (pud++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     p4d_t *p4d;
> > +     unsigned long next;
> > +
> > +     p4d = p4d_offset(pgd, addr);
> > +     do {
> > +             BUG_ON(p4d_none(*p4d));
> > +
> > +             next = p4d_addr_end(addr, end);
> > +             vmemmap_pud_range(p4d, addr, next, walk);
> > +     } while (p4d++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> > +                             struct vmemmap_rmap_walk *walk)
> > +{
> > +     unsigned long addr = start;
> > +     unsigned long next;
> > +     pgd_t *pgd;
> > +
> > +     VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> > +     VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> > +
> > +     pgd = pgd_offset_k(addr);
> > +     do {
> > +             BUG_ON(pgd_none(*pgd));
> > +
> > +             next = pgd_addr_end(addr, end);
> > +             vmemmap_p4d_range(pgd, addr, next, walk);
> > +     } while (pgd++, addr = next, addr != end);
> > +
> > +     flush_tlb_kernel_range(start, end);
> > +}
> > +
> > +/*
> > + * Free a vmemmap page. A vmemmap page can be allocated either from
> > + * the memblock allocator or from the buddy allocator. If the
> > + * PG_reserved flag is set, it was allocated from the memblock
> > + * allocator, so free it via free_bootmem_page(). Otherwise, use
> > + * __free_page().
> > + */
> > +static inline void free_vmemmap_page(struct page *page)
> > +{
> > +     if (PageReserved(page))
> > +             free_bootmem_page(page);
> > +     else
> > +             __free_page(page);
> > +}
> > +
> > +/* Free a list of the vmemmap pages */
> > +static void free_vmemmap_page_list(struct list_head *list)
> > +{
> > +     struct page *page, *next;
> > +
> > +     list_for_each_entry_safe(page, next, list, lru) {
> > +             list_del(&page->lru);
> > +             free_vmemmap_page(page);
> > +     }
> > +}
> > +
> > +static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr,
> > +                                 struct vmemmap_rmap_walk *walk)
>
> See vmemmap_remap_reuse rename suggestion below.  I would suggest reuse
> be dropped from the name here and just be called 'vmemmap_remap_pte'.

OK. Will do that.
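
i.e. (sketch):

	static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
				      struct vmemmap_remap_walk *walk);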

>
> > +{
> > +     /*
> > +      * Remap the tail pages as read-only to catch illegal write
> > +      * operations to the tail pages.
> > +      */
> > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > +     pte_t entry = mk_pte(walk->reuse, pgprot);
> > +     struct page *page;
> > +
> > +     page = pte_page(*pte);
> > +     list_add(&page->lru, walk->vmemmap_pages);
> > +
> > +     set_pte_at(&init_mm, addr, pte, entry);
> > +}
> > +
> > +/**
> > + * vmemmap_remap_reuse - remap the vmemmap virtual address range
>
> My original comment here was:
>
> Not sure if the word '_reuse' is best in this function name.  To me, the name
> implies this routine will reuse vmemmap pages.  Perhaps it makes more sense
> to rename as 'vmemmap_remap_free'?  It will first remap, then free vmemmap.

vmemmap_remap_free also sounds good to me. In the next
patch, we can use vmemmap_remap_alloc for allocating
vmemmap pages. The two look nicely symmetrical. :-)
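
Something like this pair (just a sketch of the intended interface;
the final vmemmap_remap_alloc may need extra arguments):

	void vmemmap_remap_free(unsigned long start, unsigned long size);
	void vmemmap_remap_alloc(unsigned long start, unsigned long size);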

Thanks Mike.

>
> But, then I looked at the code above and perhaps you are using the word
> '_reuse' because the page before the range will be reused?  The vmemmap

Yeah. You are right.

> page at offset VMEMMAP_TAIL_PAGE_REUSE (-1).
>
> > + *                       [start, start + size) to the page to which
> > + *                       [start - PAGE_SIZE, start) is mapped.
> > + * @start:   start address of the vmemmap virtual address range
> > + * @end:     size of the vmemmap virtual address range
>
>       ^^^^ should be @size:

Oh, yeah. I forgot to update it. Thanks.
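
The kernel-doc should then read:

 * @start:	start address of the vmemmap virtual address range
 * @size:	size of the vmemmap virtual address range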

>
> --
> Mike Kravetz
>
> > + */
> > +void vmemmap_remap_reuse(unsigned long start, unsigned long size)
> > +{
> > +     unsigned long end = start + size;
> > +     LIST_HEAD(vmemmap_pages);
> > +
> > +     struct vmemmap_rmap_walk walk = {
> > +             .rmap_pte       = vmemmap_remap_reuse_pte,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     vmemmap_remap_range(start, end, &walk);
> > +     free_vmemmap_page_list(&vmemmap_pages);
> > +}
> >
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> >



-- 
Yours,
Muchun
