From: Andrew Morton <akpm@linux-foundation.org>
To: mm-commits@vger.kernel.org, stable@vger.kernel.org,
	rcampbell@nvidia.com, jhubbard@nvidia.com, apopple@nvidia.com,
	akpm@linux-foundation.org
Subject: [to-be-updated] mm-take-a-page-reference-when-removing-device-exclusive-entries.patch removed from -mm tree
Date: Wed, 29 Mar 2023 15:05:26 -0700	[thread overview]
Message-ID: <20230329220527.91585C433D2@smtp.kernel.org> (raw)


The quilt patch titled
     Subject: mm: take a page reference when removing device exclusive entries
has been removed from the -mm tree.  Its filename was
     mm-take-a-page-reference-when-removing-device-exclusive-entries.patch

This patch was dropped because an updated version will be merged

------------------------------------------------------
From: Alistair Popple <apopple@nvidia.com>
Subject: mm: take a page reference when removing device exclusive entries
Date: Tue, 28 Mar 2023 13:14:34 +1100

Device exclusive page table entries are used to prevent CPU access to a
page whilst it is being accessed from a device.  Typically this is used to
implement atomic operations when the underlying bus does not support
atomic access.  When a CPU thread encounters a device exclusive entry it
locks the page and restores the original entry after calling mmu notifiers
to signal drivers that exclusive access is no longer available.
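
For context, the CPU fault path reaches the function changed below from
do_swap_page() when the PTE contains a device-exclusive swap entry.  A
paraphrased sketch of that dispatch (not a verbatim quote of mm/memory.c,
which may differ between kernel versions):

	entry = pte_to_swp_entry(vmf->orig_pte);
	if (unlikely(non_swap_entry(entry))) {
		if (is_migration_entry(entry)) {
			migration_entry_wait(vma->vm_mm, vmf->pmd,
					     vmf->address);
		} else if (is_device_exclusive_entry(entry)) {
			/* Restore CPU access to the page. */
			vmf->page = pfn_swap_entry_to_page(entry);
			ret = remove_device_exclusive_entry(vmf);
		}
		/* ... other non-swap entry types ... */
	}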

The device exclusive entry holds a reference to the page, making it safe
to access the struct page whilst the entry is present.  However, the
fault handling code does not hold the PTL when taking the page lock.
This means that if multiple threads fault concurrently on the device
exclusive entry, one will remove the entry whilst the others wait on the
page lock without holding a reference.
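
To make the race concrete, one possible interleaving of two threads
faulting on the same device-exclusive PTE (illustrative only, inferred
from the description above):

	thread A                            thread B
	--------                            --------
	sees device-exclusive entry         sees device-exclusive entry
	locks the page (no extra ref)       waits for the page lock
	takes PTL, restores original PTE        without holding a reference
	drops PTL, unlocks the page
	fault completes; the page can
	now be migrated and freed           still locking/waiting on a page
	                                    whose refcount has reached zero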

This can lead to threads locking or waiting on a page with a zero
refcount.  Whilst mmap_lock prevents the pages from being freed via
munmap(), they may still be freed by migration.  This leads to warnings
such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the
refcount drops to zero.  Note that during removal of the device
exclusive entry the PTE is currently re-checked under the PTL, so no
further bad page accesses occur once it is locked.
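
The fix below therefore takes a page reference before trying to lock,
using get_page_unless_zero(), which only succeeds while the refcount is
still non-zero.  For reference, its definition is essentially the
following (see include/linux/mm.h for the authoritative version):

	static inline bool get_page_unless_zero(struct page *page)
	{
		/* Take a reference only if the page is not already free. */
		return page_ref_add_unless(page, 1, 0);
	}

If it fails, the device exclusive entry must already have been removed
by a concurrent thread, so the handler simply returns 0.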

Link: https://lkml.kernel.org/r/20230328021434.292971-1-apopple@nvidia.com
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: Alistair Popple <apopple@nvidia.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory.c |   14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

--- a/mm/memory.c~mm-take-a-page-reference-when-removing-device-exclusive-entries
+++ a/mm/memory.c
@@ -3563,8 +3563,19 @@ static vm_fault_t remove_device_exclusiv
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a page reference to lock the page because we don't hold the
+	 * PTL so a racing thread can remove the device-exclusive entry and
+	 * unmap the page. If the page is free the entry must have been
+	 * removed already.
+	 */
+	if (!get_page_unless_zero(vmf->page))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		put_page(vmf->page);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3588,7 @@ static vm_fault_t remove_device_exclusiv
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	put_page(vmf->page);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
_

Patches currently in -mm which might be from apopple@nvidia.com are


