From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: "Thomas Hellström (VMware)" <thomas_os@shipmail.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
torvalds@linux-foundation.org,
"Thomas Hellstrom" <thellstrom@vmware.com>,
"Matthew Wilcox" <willy@infradead.org>,
"Will Deacon" <will.deacon@arm.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Rik van Riel" <riel@surriel.com>,
"Minchan Kim" <minchan@kernel.org>,
"Michal Hocko" <mhocko@suse.com>,
"Huang Ying" <ying.huang@intel.com>,
"Jérôme Glisse" <jglisse@redhat.com>
Subject: Re: [PATCH v4 2/9] mm: pagewalk: Take the pagetable lock in walk_pte_range()
Date: Wed, 9 Oct 2019 18:14:00 +0300 [thread overview]
Message-ID: <20191009151400.bserdtpoczmawqn5@box> (raw)
In-Reply-To: <20191008091508.2682-3-thomas_os@shipmail.org>
On Tue, Oct 08, 2019 at 11:15:01AM +0200, Thomas Hellström (VMware) wrote:
> From: Thomas Hellstrom <thellstrom@vmware.com>
>
> Without the lock, anybody modifying a pte from within this function might
> have it concurrently modified by someone else.
>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Jérôme Glisse <jglisse@redhat.com>
> Cc: Kirill A. Shutemov <kirill@shutemov.name>
> Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
> ---
> mm/pagewalk.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index d48c2a986ea3..83c0b78363b4 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -10,8 +10,9 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> pte_t *pte;
> int err = 0;
> const struct mm_walk_ops *ops = walk->ops;
> + spinlock_t *ptl;
>
> - pte = pte_offset_map(pmd, addr);
> + pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
> for (;;) {
> err = ops->pte_entry(pte, addr, addr + PAGE_SIZE, walk);
> if (err)
> @@ -22,7 +23,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
> pte++;
> }
>
> - pte_unmap(pte);
> + pte_unmap_unlock(pte - 1, ptl);
NAK.
If ->pte_entry() fails on the first entry of the page table, pte - 1 will
point outside the page table.
And the '- 1' is totally unnecessary as we break the loop before pte++ on
the last iteration.
--
Kirill A. Shutemov
Thread overview: 32+ messages
2019-10-08 9:14 [PATCH v4 0/9] Emulated coherent graphics memory take 2 Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 1/9] mm: Remove BUG_ON mmap_sem not held from xxx_trans_huge_lock() Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 2/9] mm: pagewalk: Take the pagetable lock in walk_pte_range() Thomas Hellström (VMware)
2019-10-09 15:14 ` Kirill A. Shutemov [this message]
2019-10-09 16:07 ` Linus Torvalds
2019-10-08 9:15 ` [PATCH v4 3/9] mm: pagewalk: Don't split transhuge pmds when a pmd_entry is present Thomas Hellström (VMware)
2019-10-09 15:27 ` Kirill A. Shutemov
2019-10-09 16:20 ` Thomas Hellström (VMware)
2019-10-09 16:21 ` Linus Torvalds
2019-10-09 17:03 ` Thomas Hellström (VMware)
2019-10-09 17:16 ` Linus Torvalds
2019-10-09 18:52 ` Thomas Hellstrom
2019-10-09 19:20 ` Linus Torvalds
2019-10-09 20:06 ` Thomas Hellström (VMware)
2019-10-09 20:20 ` Linus Torvalds
2019-10-09 22:30 ` Thomas Hellström (VMware)
2019-10-09 23:50 ` Thomas Hellström (VMware)
2019-10-09 23:51 ` Linus Torvalds
2019-10-10 0:18 ` Linus Torvalds
2019-10-10 1:09 ` Thomas Hellström (VMware)
2019-10-10 2:07 ` Linus Torvalds
2019-10-10 6:15 ` Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 4/9] mm: Add a walk_page_mapping() function to the pagewalk code Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 5/9] mm: Add write-protect and clean utilities for address space ranges Thomas Hellström (VMware)
2019-10-08 17:06 ` Linus Torvalds
2019-10-08 18:25 ` Thomas Hellstrom
2019-10-08 9:15 ` [PATCH v4 6/9] drm/vmwgfx: Implement an infrastructure for write-coherent resources Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 7/9] drm/vmwgfx: Use an RBtree instead of linked list for MOB resources Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 8/9] drm/vmwgfx: Implement an infrastructure for read-coherent resources Thomas Hellström (VMware)
2019-10-08 9:15 ` [PATCH v4 9/9] drm/vmwgfx: Add surface dirty-tracking callbacks Thomas Hellström (VMware)