stable.vger.kernel.org archive mirror
* [PATCH v2] mm: Take a page reference when removing device exclusive entries
@ 2023-03-30  1:25 Alistair Popple
  2023-03-30  1:44 ` John Hubbard
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Alistair Popple @ 2023-03-30  1:25 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Ralph Campbell, John Hubbard, nouveau, Matthew Wilcox,
	Alistair Popple, stable

Device exclusive page table entries are used to prevent CPU access to
a page whilst it is being accessed from a device. Typically this is
used to implement atomic operations when the underlying bus does not
support atomic access. When a CPU thread encounters a device exclusive
entry it locks the page and restores the original entry after calling
mmu notifiers to signal drivers that exclusive access is no longer
available.
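The lifecycle described above can be modelled in a few lines. The names below are illustrative stand-ins for this sketch only, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a device-exclusive entry replaces the normal PTE, and a
 * CPU fault notifies the driver and restores the original mapping. */
enum pte_state { PTE_PRESENT, PTE_DEVICE_EXCLUSIVE };

struct toy_pte { enum pte_state state; };

static bool driver_notified;

/* Driver takes exclusive access, e.g. for an atomic op over a bus
 * that does not support atomics; CPU access now faults. */
static void make_device_exclusive(struct toy_pte *pte)
{
	pte->state = PTE_DEVICE_EXCLUSIVE;
}

/* CPU fault path: signal the driver (mmu notifiers in the real code),
 * then restore the original entry. */
static void cpu_fault(struct toy_pte *pte)
{
	if (pte->state != PTE_DEVICE_EXCLUSIVE)
		return;
	driver_notified = true;
	pte->state = PTE_PRESENT;
}
```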

The device exclusive entry holds a reference to the page making it
safe to access the struct page whilst the entry is present. However
the fault handling code does not hold the PTL when taking the page
lock. This means that if multiple threads fault concurrently on the
device exclusive entry, one will remove the entry whilst the others
wait on the page lock without holding a reference.

This can lead to threads locking or waiting on a folio with a zero
refcount. Whilst mmap_lock prevents the pages from being freed via
munmap(), they may still be freed by migration. This leads to
warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked
when the refcount drops to zero.
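A plain reference bump is not enough here because the refcount may already have hit zero; what is needed is an increment-unless-zero, which is the semantic folio_try_get() provides. A userspace model of that primitive (a sketch, not the kernel implementation):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Model of an increment-unless-zero reference take: succeeds only if
 * at least one reference is still held, so a freed (and possibly
 * re-allocated) folio is never revived by a stale pointer. */
struct folio_model { atomic_int refcount; };

static bool try_get(struct folio_model *f)
{
	int old = atomic_load(&f->refcount);

	while (old != 0) {
		/* On CAS failure 'old' is reloaded and the loop retries. */
		if (atomic_compare_exchange_weak(&f->refcount, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;	/* already freed; caller must not touch it */
}

static void put(struct folio_model *f)
{
	atomic_fetch_sub(&f->refcount, 1);
}
```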

Fix this by trying to take a reference on the folio before locking
it. The code already checks the PTE under the PTL and aborts if the
entry is no longer there. It is also possible the folio has been
unmapped, freed and re-allocated allowing a reference to be taken on
an unrelated folio. This case is also detected by the PTE check and
the folio is unlocked without further changes.

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: stable@vger.kernel.org

---

Changes for v2:

 - Rebased to Linus master
 - Reworded commit message
 - Switched to using folios (thanks Matthew!)
 - Added Reviewed-by's
---
 mm/memory.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index f456f3b5049c..01a23ad48a04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
-- 
2.39.2



* Re: [PATCH v2] mm: Take a page reference when removing device exclusive entries
  2023-03-30  1:25 [PATCH v2] mm: Take a page reference when removing device exclusive entries Alistair Popple
@ 2023-03-30  1:44 ` John Hubbard
  2023-03-30  2:23 ` Christoph Hellwig
  2023-04-03 12:02 ` David Hildenbrand
  2 siblings, 0 replies; 5+ messages in thread
From: John Hubbard @ 2023-03-30  1:44 UTC (permalink / raw)
  To: Alistair Popple, linux-mm, Andrew Morton
  Cc: Ralph Campbell, nouveau, Matthew Wilcox, stable

On 3/29/23 18:25, Alistair Popple wrote:
> Device exclusive page table entries are used to prevent CPU access to
> a page whilst it is being accessed from a device. Typically this is
> used to implement atomic operations when the underlying bus does not
> support atomic access. When a CPU thread encounters a device exclusive
> entry it locks the page and restores the original entry after calling
> mmu notifiers to signal drivers that exclusive access is no longer
> available.
> 
> The device exclusive entry holds a reference to the page making it
> safe to access the struct page whilst the entry is present. However
> the fault handling code does not hold the PTL when taking the page
> lock. This means if there are multiple threads faulting concurrently
> on the device exclusive entry one will remove the entry whilst others
> will wait on the page lock without holding a reference.
> 
> This can lead to threads locking or waiting on a folio with a zero
> refcount. Whilst mmap_lock prevents the pages getting freed via
> munmap() they may still be freed by a migration. This leads to
> warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked
> when the refcount drops to zero.
> 
> Fix this by trying to take a reference on the folio before locking
> it. The code already checks the PTE under the PTL and aborts if the
> entry is no longer there. It is also possible the folio has been
> unmapped, freed and re-allocated allowing a reference to be taken on
> an unrelated folio. This case is also detected by the PTE check and
> the folio is unlocked without further changes.
> 
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Cc: stable@vger.kernel.org
> 
> ---
> 
> Changes for v2:
> 
>   - Rebased to Linus master
>   - Reworded commit message
>   - Switched to using folios (thanks Matthew!)
>   - Added Reviewed-by's

v2 looks correct to me.

thanks,
-- 
John Hubbard
NVIDIA





* Re: [PATCH v2] mm: Take a page reference when removing device exclusive entries
  2023-03-30  1:25 [PATCH v2] mm: Take a page reference when removing device exclusive entries Alistair Popple
  2023-03-30  1:44 ` John Hubbard
@ 2023-03-30  2:23 ` Christoph Hellwig
  2023-03-30  3:11   ` Alistair Popple
  2023-04-03 12:02 ` David Hildenbrand
  2 siblings, 1 reply; 5+ messages in thread
From: Christoph Hellwig @ 2023-03-30  2:23 UTC (permalink / raw)
  To: Alistair Popple
  Cc: linux-mm, Andrew Morton, Ralph Campbell, John Hubbard, nouveau,
	Matthew Wilcox, stable

s/page/folio/ in the entire commit log?


* Re: [PATCH v2] mm: Take a page reference when removing device exclusive entries
  2023-03-30  2:23 ` Christoph Hellwig
@ 2023-03-30  3:11   ` Alistair Popple
  0 siblings, 0 replies; 5+ messages in thread
From: Alistair Popple @ 2023-03-30  3:11 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-mm, Andrew Morton, Ralph Campbell, John Hubbard, nouveau,
	Matthew Wilcox, stable


Christoph Hellwig <hch@infradead.org> writes:

> s/page/folio/ in the entire commit log?

I debated that but settled on leaving it as is because device exclusive
entries only deal with non-compound pages for now and didn't want to
give any other impression. Happy to change that though if people think
it would be better/clearer.


* Re: [PATCH v2] mm: Take a page reference when removing device exclusive entries
  2023-03-30  1:25 [PATCH v2] mm: Take a page reference when removing device exclusive entries Alistair Popple
  2023-03-30  1:44 ` John Hubbard
  2023-03-30  2:23 ` Christoph Hellwig
@ 2023-04-03 12:02 ` David Hildenbrand
  2 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2023-04-03 12:02 UTC (permalink / raw)
  To: Alistair Popple, linux-mm, Andrew Morton
  Cc: Ralph Campbell, John Hubbard, nouveau, Matthew Wilcox, stable

On 30.03.23 03:25, Alistair Popple wrote:
> Device exclusive page table entries are used to prevent CPU access to
> a page whilst it is being accessed from a device. Typically this is
> used to implement atomic operations when the underlying bus does not
> support atomic access. When a CPU thread encounters a device exclusive
> entry it locks the page and restores the original entry after calling
> mmu notifiers to signal drivers that exclusive access is no longer
> available.
> 
> The device exclusive entry holds a reference to the page making it
> safe to access the struct page whilst the entry is present. However
> the fault handling code does not hold the PTL when taking the page
> lock. This means if there are multiple threads faulting concurrently
> on the device exclusive entry one will remove the entry whilst others
> will wait on the page lock without holding a reference.
> 
> This can lead to threads locking or waiting on a folio with a zero
> refcount. Whilst mmap_lock prevents the pages getting freed via
> munmap() they may still be freed by a migration. This leads to
> warnings such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked
> when the refcount drops to zero.
> 
> Fix this by trying to take a reference on the folio before locking
> it. The code already checks the PTE under the PTL and aborts if the
> entry is no longer there. It is also possible the folio has been
> unmapped, freed and re-allocated allowing a reference to be taken on
> an unrelated folio. This case is also detected by the PTE check and
> the folio is unlocked without further changes.
> 
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
> Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
> Cc: stable@vger.kernel.org

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


