From: akpm@linux-foundation.org
To: mm-commits@vger.kernel.org, soheil@google.com,
rientjes@google.com, hughd@google.com, edumazet@google.com,
arjunroy@google.com
Subject: + mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages.patch added to -mm tree
Date: Sat, 20 Jun 2020 17:18:48 -0700
Message-ID: <20200621001848.GzMgT%akpm@linux-foundation.org>

The patch titled
     Subject: mm/memory.c: properly pte_offset_map_lock/unlock in vm_insert_pages()
has been added to the -mm tree.  Its filename is
     mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages.patch

Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included into linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: Arjun Roy <arjunroy@google.com>
Subject: mm/memory.c: properly pte_offset_map_lock/unlock in vm_insert_pages()
In vm_insert_pages(), calls to pte_offset_map() are erroneously left without
a matching call to pte_unmap().  That is harmless where pte_unmap() is a
no-op, but would cause problems on architectures where it is not (for
example, 32-bit configurations with CONFIG_HIGHPTE, where the PTE page is
mapped via an atomic kmap that must be released again).

This patch does away with the non-traditional locking in the existing code
and instead uses pte_offset_map_lock/unlock() as usual, incrementing the
PTE pointer as necessary.  The PTE pointer stays within the page table
because the number of pages written per PMD is clamped with PTRS_PER_PTE.
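
For context, a minimal sketch of the pairing the patch restores (the helper
name below is hypothetical; this is not the patched function itself):
pte_offset_map_lock() both maps the PTE page (an atomic kmap under
CONFIG_HIGHPTE) and takes the PTE spinlock, so pte_unmap_unlock() must be
called on the same start pointer to undo both:

static void walk_pte_batch(struct mm_struct *mm, pmd_t *pmd,
			   unsigned long addr, int batch_size)
{
	spinlock_t *ptl;
	pte_t *start_pte, *pte;
	int i;

	/* One map + one lock per batch, amortized over batch_size PTEs. */
	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	for (pte = start_pte, i = 0; i < batch_size; ++pte, ++i) {
		/* operate on *pte while the mapping and lock are held */
	}
	/* Drops the spinlock and unmaps the PTE page, as a matched pair. */
	pte_unmap_unlock(start_pte, ptl);
}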
Link: http://lkml.kernel.org/r/20200618220446.20284-1-arjunroy.kdev@gmail.com
Fixes: 8cd3984d81d5 ("mm/memory.c: add vm_insert_pages()")
Signed-off-by: Arjun Roy <arjunroy@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory.c |   21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

--- a/mm/memory.c~mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages
+++ a/mm/memory.c
@@ -1498,7 +1498,7 @@ out:
 }
 
 #ifdef pte_index
-static int insert_page_in_batch_locked(struct mm_struct *mm, pmd_t *pmd,
+static int insert_page_in_batch_locked(struct mm_struct *mm, pte_t *pte,
 			unsigned long addr, struct page *page, pgprot_t prot)
 {
 	int err;
@@ -1506,8 +1506,9 @@ static int insert_page_in_batch_locked(s
 	if (!page_count(page))
 		return -EINVAL;
 	err = validate_page_before_insert(page);
-	return err ? err : insert_page_into_pte_locked(
-		mm, pte_offset_map(pmd, addr), addr, page, prot);
+	if (err)
+		return err;
+	return insert_page_into_pte_locked(mm, pte, addr, page, prot);
 }
 
 /* insert_pages() amortizes the cost of spinlock operations
@@ -1517,7 +1518,8 @@ static int insert_pages(struct vm_area_s
 		struct page **pages, unsigned long *num, pgprot_t prot)
 {
 	pmd_t *pmd = NULL;
-	spinlock_t *pte_lock = NULL;
+	pte_t *start_pte, *pte;
+	spinlock_t *pte_lock;
 	struct mm_struct *const mm = vma->vm_mm;
 	unsigned long curr_page_idx = 0;
 	unsigned long remaining_pages_total = *num;
@@ -1536,18 +1538,17 @@ more:
 	ret = -ENOMEM;
 	if (pte_alloc(mm, pmd))
 		goto out;
-	pte_lock = pte_lockptr(mm, pmd);
 
 	while (pages_to_write_in_pmd) {
 		int pte_idx = 0;
 		const int batch_size = min_t(int, pages_to_write_in_pmd, 8);
 
-		spin_lock(pte_lock);
-		for (; pte_idx < batch_size; ++pte_idx) {
-			int err = insert_page_in_batch_locked(mm, pmd,
+		start_pte = pte_offset_map_lock(mm, pmd, addr, &pte_lock);
+		for (pte = start_pte; pte_idx < batch_size; ++pte, ++pte_idx) {
+			int err = insert_page_in_batch_locked(mm, pte,
 				addr, pages[curr_page_idx], prot);
 			if (unlikely(err)) {
-				spin_unlock(pte_lock);
+				pte_unmap_unlock(start_pte, pte_lock);
 				ret = err;
 				remaining_pages_total -= pte_idx;
 				goto out;
@@ -1555,7 +1556,7 @@ more:
 			addr += PAGE_SIZE;
 			++curr_page_idx;
 		}
-		spin_unlock(pte_lock);
+		pte_unmap_unlock(start_pte, pte_lock);
 		pages_to_write_in_pmd -= batch_size;
 		remaining_pages_total -= batch_size;
 	}
_
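
As a usage note, a hypothetical caller sketch (only vm_insert_pages() and
its in/out semantics are from the kernel; the surrounding function is
illustrative): the API batches what would otherwise be one vm_insert_page()
call per page, and on return *num holds the count of pages not inserted:

static int map_page_array(struct vm_area_struct *vma, unsigned long uaddr,
			  struct page **pages, unsigned long npages)
{
	unsigned long remaining = npages;	/* in: total; out: not inserted */
	int err;

	err = vm_insert_pages(vma, uaddr, pages, &remaining);
	if (err)
		pr_warn("mapped %lu of %lu pages (err %d)\n",
			npages - remaining, npages, err);
	return err;
}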
Patches currently in -mm which might be from arjunroy@google.com are
mm-memoryc-properly-pte_offset_map_lock-unlock-in-vm_insert_pages.patch