From: Joao Martins <joao.m.martins@oracle.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Linux MM <linux-mm@kvack.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Matthew Wilcox <willy@infradead.org>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Muchun Song <songmuchun@bytedance.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH RFC 3/9] sparse-vmemmap: Reuse vmemmap areas for a given page size
Date: Tue, 23 Feb 2021 15:46:10 +0000	[thread overview]
Message-ID: <00b3b94c-1f12-d05e-c885-674becbe64c5@oracle.com> (raw)
In-Reply-To: <CAPcyv4i1YxFRFVz9itTkH7aLHR9GXdidTLDQHaqCG-n4EEzusQ@mail.gmail.com>

On 2/22/21 10:40 PM, Dan Williams wrote:
> On Mon, Feb 22, 2021 at 3:42 AM Joao Martins <joao.m.martins@oracle.com> wrote:
>> On 2/20/21 3:34 AM, Dan Williams wrote:
>>> On Tue, Dec 8, 2020 at 9:32 AM Joao Martins <joao.m.martins@oracle.com> wrote:
>>>> Sections are 128M (or bigger/smaller),
>>>
>>> Huh?
>>>
>>
>> Section size is arch-dependent if we are being holistic.
>> On x86 it's 64M, 128M, or 512M, right?
>>
>>  #ifdef CONFIG_X86_32
>>  # ifdef CONFIG_X86_PAE
>>  #  define SECTION_SIZE_BITS     29
>>  #  define MAX_PHYSMEM_BITS      36
>>  # else
>>  #  define SECTION_SIZE_BITS     26
>>  #  define MAX_PHYSMEM_BITS      32
>>  # endif
>>  #else /* CONFIG_X86_32 */
>>  # define SECTION_SIZE_BITS      27 /* matt - 128 is convenient right now */
>>  # define MAX_PHYSMEM_BITS       (pgtable_l5_enabled() ? 52 : 46)
>>  #endif
>>
>> Also, the reason I brought up section sizes is that a 1GB+ page vmemmap population will
>> cross sections in how sparsemem populates the vmemmap. In that case we have to reuse
>> the PTE/PMD pages across multiple invocations of vmemmap_populate_basepages(). Either
>> that, or look at the previous page's PTE, but that might be inefficient.
> 
> Ok, makes sense. I think this description of needing to handle
> section crossing is clearer than mentioning one of the section sizes.
> 
I'll amend the commit message to include this.
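
To make the section-crossing point concrete, a back-of-the-envelope calculation (my
numbers, assuming the x86-64 defaults quoted above: 128M sections, 4K base pages and a
64-byte struct page):

  one 128M section     = 128M / 4K = 32768 struct pages  -> 32768 * 64  = 2M of vmemmap
  one 1G compound page = 1G / 4K   = 262144 struct pages -> 262144 * 64 = 16M of vmemmap

So a single 1G page spans 1G / 128M = 8 sections, and its 16M of vmemmap gets populated
by 8 separate per-section calls, which is why the PTE/PMD pages being reused have to be
carried across invocations of vmemmap_populate_basepages().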

>>
>>>> @@ -229,38 +235,95 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>>>>         for (; addr < end; addr += PAGE_SIZE) {
>>>>                 pgd = vmemmap_pgd_populate(addr, node);
>>>>                 if (!pgd)
>>>> -                       return -ENOMEM;
>>>> +                       return NULL;
>>>>                 p4d = vmemmap_p4d_populate(pgd, addr, node);
>>>>                 if (!p4d)
>>>> -                       return -ENOMEM;
>>>> +                       return NULL;
>>>>                 pud = vmemmap_pud_populate(p4d, addr, node);
>>>>                 if (!pud)
>>>> -                       return -ENOMEM;
>>>> +                       return NULL;
>>>>                 pmd = vmemmap_pmd_populate(pud, addr, node);
>>>>                 if (!pmd)
>>>> -                       return -ENOMEM;
>>>> -               pte = vmemmap_pte_populate(pmd, addr, node, altmap);
>>>> +                       return NULL;
>>>> +               pte = vmemmap_pte_populate(pmd, addr, node, altmap, block);
>>>>                 if (!pte)
>>>> -                       return -ENOMEM;
>>>> +                       return NULL;
>>>>                 vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
>>>>         }
>>>>
>>>> +       return __va(__pfn_to_phys(pte_pfn(*pte)));
>>>> +}
>>>> +
>>>> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>>>> +                                        int node, struct vmem_altmap *altmap)
>>>> +{
>>>> +       if (!__vmemmap_populate_basepages(start, end, node, altmap, NULL))
>>>> +               return -ENOMEM;
>>>>         return 0;
>>>>  }
>>>>
>>>> +static struct page * __meminit vmemmap_populate_reuse(unsigned long start,
>>>> +                                       unsigned long end, int node,
>>>> +                                       struct vmem_context *ctx)
>>>> +{
>>>> +       unsigned long size, addr = start;
>>>> +       unsigned long psize = PHYS_PFN(ctx->align) * sizeof(struct page);
>>>> +
>>>> +       size = min(psize, end - start);
>>>> +
>>>> +       for (; addr < end; addr += size) {
>>>> +               unsigned long head = addr + PAGE_SIZE;
>>>> +               unsigned long tail = addr;
>>>> +               unsigned long last = addr + size;
>>>> +               void *area;
>>>> +
>>>> +               if (ctx->block_page &&
>>>> +                   IS_ALIGNED((addr - ctx->block_page), psize))
>>>> +                       ctx->block = NULL;
>>>> +
>>>> +               area  = ctx->block;
>>>> +               if (!area) {
>>>> +                       if (!__vmemmap_populate_basepages(addr, head, node,
>>>> +                                                         ctx->altmap, NULL))
>>>> +                               return NULL;
>>>> +
>>>> +                       tail = head + PAGE_SIZE;
>>>> +                       area = __vmemmap_populate_basepages(head, tail, node,
>>>> +                                                           ctx->altmap, NULL);
>>>> +                       if (!area)
>>>> +                               return NULL;
>>>> +
>>>> +                       ctx->block = area;
>>>> +                       ctx->block_page = addr;
>>>> +               }
>>>> +
>>>> +               if (!__vmemmap_populate_basepages(tail, last, node,
>>>> +                                                 ctx->altmap, area))
>>>> +                       return NULL;
>>>> +       }
>>>
>>> I think that compound page accounting and combined altmap accounting
>>> make this difficult to read, and I think the compound page case
>>> deserves its own first-class loop rather than reusing
>>> vmemmap_populate_basepages(). With the suggestion to drop altmap
>>> support I'd expect a vmemmap_populate_compound() that takes a compound
>>> page size and does the right thing with respect to mapping all the
>>> tail pages to the same pfn.
>>>
>> I can move this to a separate loop as suggested.
>>
>> But being able to map all tail pages in one call of vmemmap_populate_compound()
>> might require changes in generic sparsemem code that I am not sure
>> warrant the added complexity. Otherwise I'll probably have to keep
>> this @ctx logic to be able to pass along the page to be reused (i.e. @block and
>> @block_page). That's actually the main reason I introduced
>> a struct vmem_context.
> 
> Do you need to pass in a vmem_context, isn't that context local to
> vmemmap_populate_compound_pages()?
> 

Hmm, so we allocate a vmem_context (initialized to zeroes) in __add_pages(), and then we
use the same vmem_context across all sections we are onlining from the pfn range passed
to __add_pages(). So all sections use the same vmem_context. Then, in
vmemmap_populate_compound_pages(), we check whether a @block was already allocated that
needs to be reused.
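
Roughly, the caller side looks like this (just a sketch to illustrate the point, not the
patch hunk itself; @align being the mhp_params::align this series adds, and assuming the
section-populate path is taught to take the vmem_context):

	/* Sketch only: one vmem_context lives for the whole pfn range, so
	 * every per-section populate call sees the same &ctx and can keep
	 * reusing ctx->block across section boundaries.
	 */
	struct vmem_context ctx = {
		.altmap = params->altmap,
		.align  = params->align,
	};

	for (; pfn < end_pfn; pfn += cur_nr_pages) {
		/* Select all remaining pages up to the next section boundary */
		cur_nr_pages = min(end_pfn - pfn,
				   SECTION_ALIGN_UP(pfn + 1) - pfn);
		err = sparse_add_section(nid, pfn, cur_nr_pages, &ctx);
		if (err)
			break;
	}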

So while the content itself is private/local to vmemmap_populate_compound_pages(), we still
rely on the fact that vmemmap_populate_compound_pages() always gets the same
vmem_context location passed in for all sections being onlined in the whole pfn range.
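
And for completeness, the reuse itself then happens at the PTE level, roughly as below (a
sketch of the @block handling in vmemmap_pte_populate(), not the exact hunk): when a
@block page is passed in, nothing new is allocated and the PTE is simply pointed at
@block's pfn, which is how all the tail struct pages end up backed by the same page.

	pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr,
					       int node, struct vmem_altmap *altmap,
					       void *block)
	{
		pte_t *pte = pte_offset_kernel(pmd, addr);

		if (pte_none(*pte)) {
			pte_t entry;
			/* Reuse the caller-provided page instead of allocating one */
			void *p = block;

			if (!p) {
				p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap);
				if (!p)
					return NULL;
			}
			entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL);
			set_pte_at(&init_mm, addr, pte, entry);
		}
		return pte;
	}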