linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: David Hildenbrand <david@redhat.com>
To: SeongJae Park <sj38.park@gmail.com>, akpm@linux-foundation.org
Cc: markubo@amazon.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, SeongJae Park <sjpark@amazon.de>
Subject: Re: [PATCH] mm/damon/vaddr: Safely walk page table
Date: Tue, 31 Aug 2021 11:53:05 +0200	[thread overview]
Message-ID: <daa567fe-3026-669a-d4a4-bdbaff036fe6@redhat.com> (raw)
In-Reply-To: <20210827150400.6305-1-sj38.park@gmail.com>

On 27.08.21 17:04, SeongJae Park wrote:
> From: SeongJae Park <sjpark@amazon.de>
> 
> Commit d7f647622761 ("mm/damon: implement primitives for the virtual
> memory address spaces") of linux-mm[1] tries to find PTE or PMD for
> arbitrary virtual address using 'follow_invalidate_pte()' without proper
> locking[2].  This commit fixes the issue by using another page table
> walk function for more general use case under proper locking.
> 
> [1] https://github.com/hnaz/linux-mm/commit/d7f647622761
> [2] https://lore.kernel.org/linux-mm/3b094493-9c1e-6024-bfd5-7eca66399b7e@redhat.com
> 
> Fixes: d7f647622761 ("mm/damon: implement primitives for the virtual memory address spaces")
> Reported-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: SeongJae Park <sjpark@amazon.de>
> ---
>   mm/damon/vaddr.c | 81 +++++++++++++++++++++++++++++++++++++++++++-----
>   1 file changed, 74 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
> index 230db7413278..b3677f2ef54b 100644
> --- a/mm/damon/vaddr.c
> +++ b/mm/damon/vaddr.c
> @@ -8,10 +8,12 @@
>   #define pr_fmt(fmt) "damon-va: " fmt
>   
>   #include <linux/damon.h>
> +#include <linux/hugetlb.h>
>   #include <linux/mm.h>
>   #include <linux/mmu_notifier.h>
>   #include <linux/highmem.h>
>   #include <linux/page_idle.h>
> +#include <linux/pagewalk.h>
>   #include <linux/random.h>
>   #include <linux/sched/mm.h>
>   #include <linux/slab.h>
> @@ -446,14 +448,69 @@ static void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm,
>   #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>   }
>   
> +struct damon_walk_private {
> +	pmd_t *pmd;
> +	pte_t *pte;
> +	spinlock_t *ptl;
> +};
> +
> +static int damon_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long next,
> +		struct mm_walk *walk)
> +{
> +	struct damon_walk_private *priv = walk->private;
> +
> +	if (pmd_huge(*pmd)) {
> +		priv->ptl = pmd_lock(walk->mm, pmd);
> +		if (pmd_huge(*pmd)) {
> +			priv->pmd = pmd;
> +			return 0;
> +		}
> +		spin_unlock(priv->ptl);
> +	}
> +
> +	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
> +		return -EINVAL;
> +	priv->pte = pte_offset_map_lock(walk->mm, pmd, addr, &priv->ptl);
> +	if (!pte_present(*priv->pte)) {
> +		pte_unmap_unlock(priv->pte, priv->ptl);
> +		priv->pte = NULL;
> +		return -EINVAL;
> +	}
> +	return 0;
> +}
> +
> +static struct mm_walk_ops damon_walk_ops = {
> +	.pmd_entry = damon_pmd_entry,
> +};
> +
> +int damon_follow_pte_pmd(struct mm_struct *mm, unsigned long addr,
> +		struct damon_walk_private *private)
> +{
> +	int rc;
> +
> +	private->pte = NULL;
> +	private->pmd = NULL;
> +	rc = walk_page_range(mm, addr, addr + 1, &damon_walk_ops, private);
> +	if (!rc && !private->pte && !private->pmd)
> +		return -EINVAL;
> +	return rc;
> +}
> +
>   static void damon_va_mkold(struct mm_struct *mm, unsigned long addr)
>   {
> -	pte_t *pte = NULL;
> -	pmd_t *pmd = NULL;
> +	struct damon_walk_private walk_result;
> +	pte_t *pte;
> +	pmd_t *pmd;
>   	spinlock_t *ptl;
>   
> -	if (follow_invalidate_pte(mm, addr, NULL, &pte, &pmd, &ptl))
> +	mmap_write_lock(mm);

Can you elaborate why mmap_read_lock() isn't sufficient for your use 
case? The write mode might heavily affect damon performance and workload 
impact.


Also, I wonder if it wouldn't be much easier and cleaner to just handle 
it completely in the .pmd_entry callback, instead of returning PMDs, 
PTEs, LOCKs, ... here.

You could have

static struct mm_walk_ops damon_mkold_ops = {
	.pmd_entry = damon_mkold_pmd_entry,
};

and

static struct mm_walk_ops damon_young_ops = {
	.pmd_entry = damon_young_pmd_entry,
};

And then just handle everything completely inside the callback, avoiding 
having to return locked PTEs, PMDs, .... and instead handling it at a 
single location. Simply forward the page_sz pointer in the latter case 
to damon_young_ops.


damon_va_mkold()/damon_va_young() would mostly only call 
walk_page_range() with the right ops and eventually convert some return 
values.

-- 
Thanks,

David / dhildenb


  parent reply	other threads:[~2021-08-31  9:53 UTC|newest]

Thread overview: 6+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-08-27 15:04 [PATCH] mm/damon/vaddr: Safely walk page table SeongJae Park
2021-08-30 11:52 ` Boehme, Markus
2021-08-31  9:53 ` David Hildenbrand [this message]
2021-08-31 10:49   ` SeongJae Park
2021-08-31 11:46     ` David Hildenbrand
2021-08-31 11:56       ` SeongJae Park

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=daa567fe-3026-669a-d4a4-bdbaff036fe6@redhat.com \
    --to=david@redhat.com \
    --cc=akpm@linux-foundation.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=markubo@amazon.com \
    --cc=sj38.park@gmail.com \
    --cc=sjpark@amazon.de \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).