From: David Hildenbrand <david@redhat.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Joao Martins <joao.m.martins@oracle.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Pavel Tatashin <pasha.tatashin@soleen.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Linux MM <linux-mm@kvack.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux ACPI <linux-acpi@vger.kernel.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v4 11/23] device-dax: Kill dax_kmem_res
Date: Fri, 25 Sep 2020 10:54:43 +0200	[thread overview]
Message-ID: <d729e2e3-1f8e-31e6-7095-841b9e3ca47b@redhat.com> (raw)
In-Reply-To: <CAPcyv4jsUiXTqDtnh_fnm_p4NaX2=c3rrjFe6Efa-oWPkTe-fA@mail.gmail.com>

On 24.09.20 23:50, Dan Williams wrote:
> On Thu, Sep 24, 2020 at 2:42 PM David Hildenbrand <david@redhat.com> wrote:
>>
>>
>>
>>> Am 24.09.2020 um 23:26 schrieb Dan Williams <dan.j.williams@intel.com>:
>>>
>>> [..]
>>>>> I'm not suggesting to busy the whole "virtio" range, just the portion
>>>>> that's about to be passed to add_memory_driver_managed().
>>>>
>>>> I'm afraid I don't get your point. For virtio-mem:
>>>>
>>>> Before:
>>>>
>>>> 1. Create virtio0 container resource
>>>>
>>>> 2. (somewhen in the future) add_memory_driver_managed()
>>>> - Create resource (System RAM (virtio_mem)), marking it busy/driver
>>>>   managed
>>>>
>>>> After:
>>>>
>>>> 1. Create virtio0 container resource
>>>>
>>>> 2. (somewhen in the future) Create resource (System RAM (virtio_mem)),
>>>>   marking it busy/driver managed
>>>> 3. add_memory_driver_managed()
>>>>
>>>> Not helpful or simpler IMHO.
>>>
>>> The concern I'm trying to address is the theoretical race window and
>>> layering violation in this sequence in the kmem driver:
>>>
>>> 1/ res = request_mem_region(...);
>>> 2/ res->flags = IORESOURCE_MEM;
>>> 3/ add_memory_driver_managed();
>>>
>>> Between 2/ and 3/ something can race and think that it owns the
>>> region. Do I think it will happen in practice, no, but it's still a
>>> pattern that deserves some cleanup.
>>
>> I think in that unlikely event (rather impossible), add_memory_driver_managed() should fail, detecting a conflicting (busy) resource. Not sure what will happen next (and did not double-check).
> 
> add_memory_driver_managed() will fail, but the release_mem_region() in
> kmem to unwind on the error path will do the wrong thing because that
> other driver thinks it got ownership of the region.
> 

I think if somebody raced and claimed the region for themselves (after
we cleared the BUSY flag), there would be another memory resource below
our resource container (e.g., created via __request_region()).
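
Just to illustrate (names and addresses made up), /proc/iomem would
then show the racer's region as a child of ours, something like:

  240000000-33fffffff : dax0.0                 <- ours, temporarily !BUSY
    240000000-24fffffff : some-other-driver    <- racer's request_mem_region()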

So, interestingly, the current code will do a

release_resource->__release_resource(old, true);

which will remove whatever somebody added below the resource.

If we were to do a

remove_resource->__release_resource(old, false);

we would only remove what we temporarily added, re-parenting any
children (that someone nasty added).
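
A minimal sketch of that contrast (hypothetical helper, not the actual
kmem/virtio-mem code; "res" is assumed to come from an earlier
request_mem_region()):

#include <linux/ioport.h>

/*
 * Unwind a region we temporarily added, choosing how to treat child
 * resources a racing driver may have inserted below it in the meantime.
 */
static void unwind_region(struct resource *res, bool keep_children)
{
	if (keep_children)
		/*
		 * remove_resource() -> __release_resource(old, false):
		 * only "res" is unlinked; any children are re-parented
		 * to res->parent and stay alive.
		 */
		remove_resource(res);
	else
		/*
		 * release_resource() -> __release_resource(old, true):
		 * "res" is unlinked together with whatever somebody
		 * else added below it.
		 */
		release_resource(res);

	/* Freeing the struct resource itself is left out of this sketch. */
}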

But yeah, I don't think we have to worry about this case.

>> But yeah, the way the BUSY bit is cleared here is wrong - simply overwriting other bits. And it would be even better if we could avoid manually messing with flags here.
> 
> I'm ok to leave it alone for now (hasn't been and likely never will be
> a problem in practice), but I think it was still worth grumbling

Definitely, it gives us a better understanding.

> about. I'll leave that part of kmem alone in the upcoming split of
> dax_kmem_res removal.

Yeah, stuff is more complicated than I would have wished, so I guess
it's better to leave it alone for now, until we actually see issues with
somebody else touching *our* device-owned region (or we're able to come
up with a cleanup that keeps all corner cases working for kmem and
virtio-mem).

-- 
Thanks,

David / dhildenb
