From: David Hildenbrand <david@redhat.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Ard Biesheuvel <ardb@kernel.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Borislav Petkov <bp@alien8.de>, David Airlie <airlied@linux.ie>,
	Will Deacon <will@kernel.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Joao Martins <joao.m.martins@oracle.com>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	X86 ML <x86@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ben Skeggs <bskeggs@redhat.com>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Jason Gunthorpe <jgg@mellanox.com>, Jia He <justin.he@arm.com>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Paul Mackerras <paulus@ozlabs.org>,
	Brice Goglin <Brice.Goglin@inria.fr>,
	Michael Ellerman <mpe@ellerman.id.au>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Daniel Vetter <daniel@ffwll.ch>,
	Andy Lutomirski <luto@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	Mailing list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v4 00/23] device-dax: Support sub-dividing soft-reserved ranges
Date: Fri, 21 Aug 2020 12:15:03 +0200	[thread overview]
Message-ID: <6af3de0d-ffdc-8942-3922-ebaeef20dd63@redhat.com> (raw)
In-Reply-To: <CAPcyv4j8-5nWU5GPDBoFicwR84qM=hWRtd78DkcCg4PW-8i6Vg@mail.gmail.com>

>>
>> 1. On x86-64, e820 indicates "soft-reserved" memory. This memory is not
>> automatically used in the buddy during boot, but remains untouched
>> (similar to pmem). But as it involves ACPI as well, it could also be
>> used on arm64 (which has no e820), correct?
> 
> Correct, arm64 also gets the EFI support for enumerating memory this
> way. However, I would clarify that whether soft-reserved is given to
> the buddy allocator by default or not is the kernel's policy choice,
> "buddy-by-default" is ok and is what will happen anyways with older
> kernels on platforms that enumerate a memory range this way.

Is "soft-reserved" then the right terminology for that? It sounds very
x86-64/e820 specific. Maybe a compressed for of "performance
differentiated memory" might be a better fit to expose to user space, no?
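
For context, my understanding of how a range ends up "soft-reserved" in the
first place is roughly the following; a sketch only, not the exact kernel
code, and the helper name is made up:

#include <linux/efi.h>
#include <linux/types.h>

/*
 * Sketch: EFI conventional memory that firmware tagged with the
 * "specific purpose" attribute is what gets surfaced as soft-reserved
 * ("performance differentiated") memory instead of ordinary System RAM.
 */
static bool is_soft_reserved(const efi_memory_desc_t *md)
{
	return md->type == EFI_CONVENTIONAL_MEMORY &&
	       (md->attribute & EFI_MEMORY_SP);
}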

> 
>> 2. Soft-reserved memory is volatile RAM with differing performance
>> characteristics ("performance differentiated memory"). What would be
>> examples of such memory?
> 
> Likely the most prominent one that drove the creation of the "EFI
> Specific Purpose" attribute bit is high-bandwidth memory. One concrete
> example of that was a platform called Knights Landing [1] that ended
> up shipping firmware that lied to the OS about the latency
> characteristics of the memory to try to reverse engineer OS behavior
> to not allocate from that memory range by default. With the EFI
> attribute firmware performance tables can tell the truth about the
> performance characteristics of the memory range *and* indicate that
> the OS not use it for general purpose allocations by default.

Thanks for clarifying!

> 
> [1]: https://software.intel.com/content/www/us/en/develop/blogs/an-intro-to-mcdram-high-bandwidth-memory-on-knights-landing.html
> 
>> Like, memory that is faster than RAM (scratch
>> pad), or slower (pmem)? Or both? :)
> 
> Both, but note that PMEM is already hard-reserved by default.
> Soft-reserved is about a memory range that, for example, an
> administrator may want to reserve 100% for a weather simulation where
> if even a small amount of memory was stolen for the page cache the
> application may not meet its performance targets. It could also be a
> memory range that is so slow that only applications with higher
> latency tolerances would be prepared to consume it.
> 
> In other words the soft-reserved memory can be used to indicate memory
> that is either too precious, or too slow for general purpose OS
> allocations.

Right, so actually performance-differentiated in any way :)

> 
>> Is it a valid use case to use pmem
>> in a hypervisor to back this memory?
> 
> Depends on the pmem. That performance capability is indicated by the
> ACPI HMAT, not the EFI soft-reserved designation.
> 
>> 3. There seem to be use cases where "soft-reserved" memory is used via
>> DAX. What is an example use case? I assume it's *not* to treat it like
>> PMEM but instead e.g., use it as a fast buffer inside applications or
>> similar.
> 
> Right, in that weather-simulation example that application could just
> mmap /dev/daxX.Y and never worry about contending for the "fast
> memory" resource on the platform. Alternatively if that resource needs
> to be shared and/or over-committed, then kernel memory-management
> services are needed and that dax-device can be assigned to kmem.
> 
>> 4. There seem to be use cases where some part of "soft-reserved" memory
>> is used via DAX, some other is given to the buddy. What is an example
>> use case? Is this really necessary or only some theoretical use case?
> 
> It's as necessary as pmem namespace partitioning, or the inclusion of
> dax-kmem upstream in the first place. In that kmem case the motivation
> was that some users want a portion of pmem provisioned for storage and
> some for volatile usage. The motivation is similar here, platform
> firmware can only identify memory attributes on coarse boundaries,
> finer grained provisioning decisions are up to the administrator /
> platform-owner and the kernel is just a facilitator of that policy.
> 
>>
>> 5. The "provisioned along performance relevant address boundaries." part
>> is unclear to me. Can you give an example of how this would look
>> from user space? Like, split that memory in blocks of size X with
>> alignment Y and give them to separate applications?
> 
> One example of platform address boundaries are the memory address
> ranges that alias in a direct-mapped memory-side-cache. In the
> direct-map-cache, aliasing may repeat every N GBs, where N is the ratio
> of far-to-near memory. ("Near memory" == cache, "Far memory" ==
> backing memory). Also refer back to the background in the page
> allocator shuffling patches [2]. With this partitioning mechanism you
> could, for one example use case, assign different VMs to exclusive
> colors in the memory side cache.

Interesting, thanks!
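
Just to make the direct /dev/daxX.Y consumption above concrete for myself,
a minimal userspace sketch; the device path, the mapping size and the 2M
alignment are assumptions on my side:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2UL << 20;	/* multiple of the device-dax alignment */
	void *buf;
	int fd = open("/dev/dax0.0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* device-dax mappings are MAP_SHARED; these pages never hit the buddy */
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0, len);	/* the application owns this range exclusively */

	munmap(buf, len);
	close(fd);
	return 0;
}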

> 
> [2]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e900a918b098
> 
>> 6. If you add such memory to the buddy, is there any way the system can
>> differentiate it from other memory? E.g., via fake/other NUMA nodes?
> 
> NUMA node numbers are how performance differentiated memory ranges
> are enumerated. The expectation is that all distinct performance
> memory targets have unique ACPI proximity domains and Linux numa node
> numbers as a result.

Makes sense to me (although it's somewhat weird, because memory of the
same socket/node would be represented via different NUMA nodes), thanks!
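
For my own understanding, an application would then steer its allocations
to such a node with the usual NUMA policy interfaces; a minimal sketch
using mbind(2) (link with -lnuma for the numaif.h wrapper), where node 1
standing in for the differentiated memory is purely an assumption:

#include <numaif.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL << 20;
	unsigned long nodemask = 1UL << 1;	/* assumed: special memory is node 1 */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* restrict this mapping to the performance-differentiated node only */
	if (mbind(buf, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}
	return 0;
}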

> 
>> Also, can you give examples of how kmem-added memory is represented in
>> /proc/iomem for a) pmem and b) soft-reserved memory after this series
>> (skimming over the patches, I think there is a change for pmem, right?)?
> 
> I don't expect a change. The only difference is the parent resource
> will be marked "Soft Reserved" instead of "Persistent Memory".

Right, I misread patch #11 while skimming - I thought the device
resource would be dropped.

> 
>> I am really wondering if it's the right approach to squeeze this into
>> our pmem/nvdimm infrastructure just because it's easy to do. E.g., man
>> "ndctl" - "ndctl - Manage "libnvdimm" subsystem devices (Non-volatile
>> Memory)" speaks explicitly about non-volatile memory.
> 
> In fact it's not squeezed into PMEM infrastructure. dax-kmem and
> device-dax are independent of PMEM. PMEM is one source of potential
> device-dax instances, soft-reserved memory is another orthogonal
> source. This is why device-dax needs its own userspace policy-directed
> partitioning mechanism, because there is no PMEM to store the
> configuration for partitioned high-bandwidth memory. The userspace
> tooling for this mechanism is targeted for a tool called daxctl that
> has no PMEM dependencies. Look to Joao's use case that is using this
> infrastructure independent of PMEM with manual soft-reservations
> specified on the kernel command-line.

Thanks for clarifying, I was under the impression we would be reusing
libnvdimm to manage that memory.

-- 
Thanks,

David / dhildenb

Thread overview:
2020-08-03  5:02 [PATCH v4 00/23] device-dax: Support sub-dividing soft-reserved ranges Dan Williams
2020-08-03  5:02 ` [PATCH v4 01/23] x86/numa: Cleanup configuration dependent command-line options Dan Williams
2020-08-03  5:02 ` [PATCH v4 02/23] x86/numa: Add 'nohmat' option Dan Williams
2020-08-03  5:02 ` [PATCH v4 03/23] efi/fake_mem: Arrange for a resource entry per efi_fake_mem instance Dan Williams
2020-08-03  5:02 ` [PATCH v4 04/23] ACPI: HMAT: Refactor hmat_register_target_device to hmem_register_device Dan Williams
2020-08-03  5:02 ` [PATCH v4 05/23] resource: Report parent to walk_iomem_res_desc() callback Dan Williams
2020-08-03  5:02 ` [PATCH v4 06/23] mm/memory_hotplug: Introduce default phys_to_target_node() implementation Dan Williams
2020-08-03  5:03 ` [PATCH v4 07/23] ACPI: HMAT: Attach a device for each soft-reserved range Dan Williams
2020-08-03  5:03 ` [PATCH v4 08/23] device-dax: Drop the dax_region.pfn_flags attribute Dan Williams
2020-08-03  5:03 ` [PATCH v4 09/23] device-dax: Move instance creation parameters to 'struct dev_dax_data' Dan Williams
2020-08-03  5:03 ` [PATCH v4 10/23] device-dax: Make pgmap optional for instance creation Dan Williams
2020-08-03  5:03 ` [PATCH v4 11/23] device-dax: Kill dax_kmem_res Dan Williams
2020-08-21 10:06   ` David Hildenbrand
2020-09-08 15:33     ` Joao Martins
2020-09-08 18:03       ` David Hildenbrand
2020-09-23  8:04       ` David Hildenbrand
2020-09-23 21:41         ` Dan Williams
2020-09-24  7:25           ` David Hildenbrand
2020-09-24 13:54             ` Dan Williams
2020-09-24 18:12               ` David Hildenbrand
2020-09-24 21:26                 ` Dan Williams
2020-09-24 21:41                   ` David Hildenbrand
2020-09-24 21:50                     ` Dan Williams
2020-09-25  8:54                       ` David Hildenbrand
2020-08-03  5:03 ` [PATCH v4 12/23] device-dax: Add an allocation interface for device-dax instances Dan Williams
2020-08-03  5:03 ` [PATCH v4 13/23] device-dax: Introduce 'seed' devices Dan Williams
2020-08-03  5:03 ` [PATCH v4 14/23] drivers/base: Make device_find_child_by_name() compatible with sysfs inputs Dan Williams
2020-08-03  5:03 ` [PATCH v4 15/23] device-dax: Add resize support Dan Williams
2020-08-21 22:56   ` Andrew Morton
2020-08-03  5:03 ` [PATCH v4 16/23] mm/memremap_pages: Convert to 'struct range' Dan Williams
2020-08-03  5:03 ` [PATCH v4 17/23] mm/memremap_pages: Support multiple ranges per invocation Dan Williams
2020-08-03  5:04 ` [PATCH v4 18/23] device-dax: Add dis-contiguous resource support Dan Williams
2020-08-03  5:04 ` [PATCH v4 19/23] device-dax: Introduce 'mapping' devices Dan Williams
2020-08-03  5:04 ` [PATCH v4 20/23] device-dax: Make align a per-device property Dan Williams
2020-08-03  5:04 ` [PATCH v4 21/23] device-dax: Add an 'align' attribute Dan Williams
2020-08-03  5:04 ` [PATCH v4 22/23] dax/hmem: Introduce dax_hmem.region_idle parameter Dan Williams
2020-08-03  5:04 ` [PATCH v4 23/23] device-dax: Add a range mapping allocation attribute Dan Williams
2020-08-03  7:47 ` [PATCH v4 00/23] device-dax: Support sub-dividing soft-reserved ranges David Hildenbrand
2020-08-20  1:53   ` Dan Williams
2020-08-21 10:15     ` David Hildenbrand [this message]
2020-08-21 18:27       ` Dan Williams
2020-08-21 18:30         ` David Hildenbrand
2020-08-21 21:17           ` Dan Williams
2020-08-21 21:33             ` David Hildenbrand
2020-08-21 21:42               ` David Hildenbrand
2020-08-21 21:43               ` David Hildenbrand
2020-08-21 21:46               ` David Hildenbrand
2020-08-21 23:21     ` Andrew Morton
2020-08-22  2:32       ` Leizhen (ThunderTown)
2020-09-08 10:45       ` David Hildenbrand
2020-09-23  0:43         ` Dan Williams
