From: Joao Martins <joao.m.martins@oracle.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Linux MM <linux-mm@kvack.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Matthew Wilcox <willy@infradead.org>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Muchun Song <songmuchun@bytedance.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH RFC 4/9] mm/page_alloc: Reuse tail struct pages for compound pagemaps
Date: Mon, 22 Feb 2021 12:01:29 +0000
Message-ID: <50b76256-c1b5-0f88-f9cc-1cdce45c6ce1@oracle.com>
In-Reply-To: <CAPcyv4hw574wYUa0qzz+pQrB4K11R618Moh30mvLz8GLNDw=5w@mail.gmail.com>



On 2/20/21 6:17 AM, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 9:31 AM Joao Martins <joao.m.martins@oracle.com> wrote:
>>
>> When PGMAP_COMPOUND is set, all pages are onlined at a given huge page
>> alignment and using compound pages to describe them as opposed to a
>> struct per 4K.
>>
> 
> Same s/online/mapped/ comment as other changelogs.
> 
Ack.

>> To minimize struct page overhead, and given the usage of compound pages,
>> we utilize the fact that most tail pages look the same: we online the
>> subsection while pointing to the same pages. Thus we request VMEMMAP_REUSE
>> in add_pages.
>>
>> With VMEMMAP_REUSE, provided we reuse most tail pages, the amount of
>> struct pages we need to initialize is a lot smaller than the total
>> amount of structs we would normally online. Thus allow an @init_order
>> to be passed to specify how many pages we want to prep upon creating a
>> compound page.
>>
>> Finally, when onlining all struct pages in memmap_init_zone_device, make
>> sure that we only initialize the unique struct pages, i.e. the first two
>> 4K pages from @align, which means 128 struct pages out of 32768 for a 2M
>> @align or 262144 for a 1G @align.
>>
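
(As a worked example of where the 128 comes from: with a 64-byte struct
page, the first two 4K vmemmap pages hold 2 * 4096 / 64 = 128 struct
pages -- that is the MEMMAP_COMPOUND_SIZE in the diff below.)
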
>> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
>> ---
>>  mm/memremap.c   |  4 +++-
>>  mm/page_alloc.c | 23 ++++++++++++++++++++---
>>  2 files changed, 23 insertions(+), 4 deletions(-)
>>
>> diff --git a/mm/memremap.c b/mm/memremap.c
>> index ecfa74848ac6..3eca07916b9d 100644
>> --- a/mm/memremap.c
>> +++ b/mm/memremap.c
>> @@ -253,8 +253,10 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>>                         goto err_kasan;
>>                 }
>>
>> -               if (pgmap->flags & PGMAP_COMPOUND)
>> +               if (pgmap->flags & PGMAP_COMPOUND) {
>>                         params->align = pgmap->align;
>> +                       params->flags = MEMHP_REUSE_VMEMMAP;
> 
> The "reuse" naming is not my favorite. 

I also dislike it, but couldn't come up with a better one :(

> Yes, page reuse is happening,
> but what is more relevant is that the vmemmap is in a given minimum
> page size mode. So it's less of a flag and more of an enum that selects
> between PAGE_SIZE, HPAGE_SIZE, and PUD_PAGE_SIZE (GPAGE_SIZE?).
> 
That does sound cleaner but, at the same time, won't we get confused
between the pgmap @align and the vmemmap/memhp @align?

Hmm, I also think there's value in having two different attributes, as
they have two different intents. A pgmap @align means 'represent its
metadata as a huge page of a given size', while the vmemmap/memhp @align
tells sparsemem that we are mapping metadata at a given @align.

The compound pages (pgmap->align) might be useful for other ZONE_DEVICE
users. But I am not sure everybody will want to immediately switch to the
'struct page reuse' trick.
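
FWIW, to make the enum idea concrete, this is roughly what I picture --
all names here are made up, nothing like this exists today:

enum memmap_page_size {
	MEMMAP_PAGE_SIZE,	/* 4K vmemmap pages, today's default */
	MEMMAP_HPAGE_SIZE,	/* PMD-backed vmemmap, tails reused at 2M */
	MEMMAP_PUD_PAGE_SIZE,	/* PUD-backed vmemmap, tails reused at 1G */
};

struct mhp_params {
	/* ... existing fields ... */
	enum memmap_page_size memmap_size; /* instead of MEMHP_REUSE_VMEMMAP */
};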

>> +               }
>>
>>                 error = arch_add_memory(nid, range->start, range_len(range),
>>                                         params);
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 9716ecd58e29..180a7d4e9285 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -691,10 +691,11 @@ void free_compound_page(struct page *page)
>>         __free_pages_ok(page, compound_order(page), FPI_NONE);
>>  }
>>
>> -void prep_compound_page(struct page *page, unsigned int order)
>> +static void __prep_compound_page(struct page *page, unsigned int order,
>> +                                unsigned int init_order)
>>  {
>>         int i;
>> -       int nr_pages = 1 << order;
>> +       int nr_pages = 1 << init_order;
>>
>>         __SetPageHead(page);
>>         for (i = 1; i < nr_pages; i++) {
>> @@ -711,6 +712,11 @@ void prep_compound_page(struct page *page, unsigned int order)
>>                 atomic_set(compound_pincount_ptr(page), 0);
>>  }
>>
>> +void prep_compound_page(struct page *page, unsigned int order)
>> +{
>> +       __prep_compound_page(page, order, order);
>> +}
>> +
>>  #ifdef CONFIG_DEBUG_PAGEALLOC
>>  unsigned int _debug_guardpage_minorder;
>>
>> @@ -6108,6 +6114,9 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>>  }
>>
>>  #ifdef CONFIG_ZONE_DEVICE
>> +
>> +#define MEMMAP_COMPOUND_SIZE (2 * (PAGE_SIZE/sizeof(struct page)))
>> +
>>  void __ref memmap_init_zone_device(struct zone *zone,
>>                                    unsigned long start_pfn,
>>                                    unsigned long nr_pages,
>> @@ -6138,6 +6147,12 @@ void __ref memmap_init_zone_device(struct zone *zone,
>>         for (pfn = start_pfn; pfn < end_pfn; pfn++) {
>>                 struct page *page = pfn_to_page(pfn);
>>
>> +               /* Skip already initialized pages. */
>> +               if (compound && (pfn % align >= MEMMAP_COMPOUND_SIZE)) {
>> +                       pfn = ALIGN(pfn, align) - 1;
>> +                       continue;
>> +               }
>> +
>>                 __init_single_page(page, pfn, zone_idx, nid);
>>
>>                 /*
>> @@ -6175,7 +6190,9 @@ void __ref memmap_init_zone_device(struct zone *zone,
>>
>>         if (compound) {
>>                 for (pfn = start_pfn; pfn < end_pfn; pfn += align)
>> -                       prep_compound_page(pfn_to_page(pfn), order_base_2(align));
>> +                       __prep_compound_page(pfn_to_page(pfn),
>> +                                          order_base_2(align),
>> +                                          order_base_2(MEMMAP_COMPOUND_SIZE));
>>         }
> 
> Alex did quite a bit of work to optimize this path, and this
> organization appears to undo it. I'd prefer to keep it all in one loop
> so a 'struct page' is only initialized once. Otherwise by the time the
> above loop finishes and this one starts the 'struct page's are
> probably cache cold again.
> 
> So I'd break prep_compound_page into separate head and tail init and
> call them at the right time in one loop.
> 
Ah, makes sense! I'll split it into head/tail counterparts -- it might
get even faster than it already is.
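
Something along these lines is what I have in mind -- an untested sketch
against this RFC, helper names provisional:

/*
 * Split prep_compound_page() so tail setup can happen inside the
 * memmap_init_zone_device() loop right after __init_single_page(),
 * while each struct page is still cache hot.
 */
static void prep_compound_head(struct page *page, unsigned int order)
{
	__SetPageHead(page);
	set_compound_page_dtor(page, COMPOUND_PAGE_DTOR);
	set_compound_order(page, order);
	atomic_set(compound_mapcount_ptr(page), -1);
	if (hpage_pincount_available(page))
		atomic_set(compound_pincount_ptr(page), 0);
}

static void prep_compound_tail(struct page *head, int tail_idx)
{
	struct page *p = head + tail_idx;

	p->mapping = TAIL_MAPPING;
	set_compound_head(p, head);
	set_page_count(p, 0);
}

and then in the memmap_init_zone_device() loop, after
__init_single_page():

	if (compound) {
		if (!(pfn % align))
			prep_compound_head(page, order_base_2(align));
		else
			prep_compound_tail(pfn_to_page(ALIGN_DOWN(pfn, align)),
					   pfn % align);
	}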

Which makes me wonder if we shouldn't replace that line:

"memmap_init_zone_device initialized NNNNNN pages in 0ms\n"

to use 'us' or 'ns' where applicable. That ought to be more useful
information for the user.
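
E.g. something like this -- rough idea, untested, using ktime rather
than jiffies so that sub-millisecond runs don't print as 0ms:

	u64 start = ktime_get_ns();

	/* ... struct page init loop ... */

	pr_info("%s initialised %lu pages in %lluus\n", __func__,
		nr_pages, div_u64(ktime_get_ns() - start, NSEC_PER_USEC));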