* [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range
@ 2020-06-17 20:37 ` Atish Patra
  0 siblings, 0 replies; 10+ messages in thread
From: Atish Patra @ 2020-06-17 20:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Atish Patra, Albert Ou, Andrew Morton, Daniel Jordan,
	linux-riscv, Michel Lespinasse, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Zong Li

As per the walk_page_range() documentation, the mmap lock should be
acquired by the caller before invoking walk_page_range(). Without it,
mmap_assert_locked() gets triggered. The details can be found here:

http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html

Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
Signed-off-by: Atish Patra <atish.patra@wdc.com>
---
 arch/riscv/mm/pageattr.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index ec2c70f84994..289a9a5ea5b5 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -151,6 +151,7 @@ int set_memory_nx(unsigned long addr, int numpages)
 
 int set_direct_map_invalid_noflush(struct page *page)
 {
+	int ret;
 	unsigned long start = (unsigned long)page_address(page);
 	unsigned long end = start + PAGE_SIZE;
 	struct pageattr_masks masks = {
@@ -158,11 +159,16 @@ int set_direct_map_invalid_noflush(struct page *page)
 		.clear_mask = __pgprot(_PAGE_PRESENT)
 	};
 
-	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
+	mmap_read_lock(&init_mm);
+	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
+	mmap_read_unlock(&init_mm);
+
+	return ret;
 }
 
 int set_direct_map_default_noflush(struct page *page)
 {
+	int ret;
 	unsigned long start = (unsigned long)page_address(page);
 	unsigned long end = start + PAGE_SIZE;
 	struct pageattr_masks masks = {
@@ -170,7 +176,11 @@ int set_direct_map_default_noflush(struct page *page)
 		.clear_mask = __pgprot(0)
 	};
 
-	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
+	mmap_read_lock(&init_mm);
+	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
+	mmap_read_unlock(&init_mm);
+
+	return ret;
 }
 
 void __kernel_map_pages(struct page *page, int numpages, int enable)
-- 
2.26.2
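
The assertion that trips here is mmap_assert_locked(), which
walk_page_range() checks on entry; it was added by the v5.8 mmap locking
API rework. A rough sketch, paraphrased from include/linux/mmap_lock.h
around v5.8-rc1 (treat the exact form as approximate):

static inline void mmap_assert_locked(struct mm_struct *mm)
{
	/* Warns only when lockdep is enabled */
	lockdep_assert_held(&mm->mmap_lock);
	/* Fires only when CONFIG_DEBUG_VM is enabled */
	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_lock), mm);
}

Since the VM_BUG_ON_MM() half is compiled in only with CONFIG_DEBUG_VM, the
splat linked above shows up only on DEBUG_VM builds, even though the locking
rule applies everywhere.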



* Re: [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range
  2020-06-17 20:37 ` Atish Patra
@ 2020-06-18  0:01   ` Michel Lespinasse
  -1 siblings, 0 replies; 10+ messages in thread
From: Michel Lespinasse @ 2020-06-18  0:01 UTC (permalink / raw)
  To: Atish Patra
  Cc: LKML, Albert Ou, Andrew Morton, Daniel Jordan, linux-riscv,
	Mike Rapoport, Palmer Dabbelt, Paul Walmsley, Zong Li

On Wed, Jun 17, 2020 at 1:38 PM Atish Patra <atish.patra@wdc.com> wrote:
> As per the walk_page_range() documentation, the mmap lock should be
> acquired by the caller before invoking walk_page_range(). Without it,
> mmap_assert_locked() gets triggered. The details can be found here:
>
> http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html
>
> Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
> Signed-off-by: Atish Patra <atish.patra@wdc.com>

Thanks for the fix.

Reviewed-by: Michel Lespinasse <walken@google.com>


* Re: [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range
  2020-06-18  0:01   ` Michel Lespinasse
@ 2020-06-18  2:29     ` Zong Li
  -1 siblings, 0 replies; 10+ messages in thread
From: Zong Li @ 2020-06-18  2:29 UTC (permalink / raw)
  To: Michel Lespinasse
  Cc: Atish Patra, LKML, Albert Ou, Andrew Morton, Daniel Jordan,
	linux-riscv, Mike Rapoport, Palmer Dabbelt, Paul Walmsley

On Thu, Jun 18, 2020 at 8:01 AM Michel Lespinasse <walken@google.com> wrote:
>
> On Wed, Jun 17, 2020 at 1:38 PM Atish Patra <atish.patra@wdc.com> wrote:
> > As per the walk_page_range() documentation, the mmap lock should be
> > acquired by the caller before invoking walk_page_range(). Without it,
> > mmap_assert_locked() gets triggered. The details can be found here:
> >
> > http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html
> >
> > Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
> > Signed-off-by: Atish Patra <atish.patra@wdc.com>
>
> Thanks for the fix.
>
> Reviewed-by: Michel Lespinasse <walken@google.com>

It also looks good to me. Thanks for the fix.

Reviewed-by: Zong Li <zong.li@sifive.com>


* Re: [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range
  2020-06-18  2:29     ` Zong Li
@ 2020-06-19  1:33       ` Atish Patra
  -1 siblings, 0 replies; 10+ messages in thread
From: Atish Patra @ 2020-06-19  1:33 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Michel Lespinasse, Albert Ou, LKML, Daniel Jordan, Atish Patra,
	Paul Walmsley, Andrew Morton, Mike Rapoport, linux-riscv,
	Zong Li

On Wed, Jun 17, 2020 at 7:29 PM Zong Li <zong.li@sifive.com> wrote:
>
> On Thu, Jun 18, 2020 at 8:01 AM Michel Lespinasse <walken@google.com> wrote:
> >
> > On Wed, Jun 17, 2020 at 1:38 PM Atish Patra <atish.patra@wdc.com> wrote:
> > > As per the walk_page_range() documentation, the mmap lock should be
> > > acquired by the caller before invoking walk_page_range(). Without it,
> > > mmap_assert_locked() gets triggered. The details can be found here:
> > >
> > > http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html
> > >
> > > Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
> > > Signed-off-by: Atish Patra <atish.patra@wdc.com>
> >
> > Thanks for the fix.
> >
> > Reviewed-by: Michel Lespinasse <walken@google.com>
>
> It also looks good to me. Thanks for the fix.
>
> Reviewed-by: Zong Li <zong.li@sifive.com>
>

Hi Palmer,
Can you include this one in the rc2 PR as well?
Anybody who hits this issue with their rootfs can't use rc1 without
turning off DEBUG_VM.

-- 
Regards,
Atish


* Re: [PATCH] RISC-V: Acquire mmap lock before invoking walk_page_range
  2020-06-17 20:37 ` Atish Patra
@ 2020-06-19  1:53   ` Palmer Dabbelt
  -1 siblings, 0 replies; 10+ messages in thread
From: Palmer Dabbelt @ 2020-06-19  1:53 UTC (permalink / raw)
  To: Atish Patra, Will Deacon
  Cc: linux-kernel, Atish Patra, aou, akpm, daniel.m.jordan,
	linux-riscv, walken, rppt, Paul Walmsley, zong.li

On Wed, 17 Jun 2020 13:37:32 PDT (-0700), Atish Patra wrote:
> As per the walk_page_range() documentation, the mmap lock should be
> acquired by the caller before invoking walk_page_range(). Without it,
> mmap_assert_locked() gets triggered. The details can be found here:
>
> http://lists.infradead.org/pipermail/linux-riscv/2020-June/010335.html
>
> Fixes: 395a21ff859c ("riscv: add ARCH_HAS_SET_DIRECT_MAP support")
> Signed-off-by: Atish Patra <atish.patra@wdc.com>
> ---
>  arch/riscv/mm/pageattr.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
> index ec2c70f84994..289a9a5ea5b5 100644
> --- a/arch/riscv/mm/pageattr.c
> +++ b/arch/riscv/mm/pageattr.c
> @@ -151,6 +151,7 @@ int set_memory_nx(unsigned long addr, int numpages)
>
>  int set_direct_map_invalid_noflush(struct page *page)
>  {
> +	int ret;
>  	unsigned long start = (unsigned long)page_address(page);
>  	unsigned long end = start + PAGE_SIZE;
>  	struct pageattr_masks masks = {
> @@ -158,11 +159,16 @@ int set_direct_map_invalid_noflush(struct page *page)
>  		.clear_mask = __pgprot(_PAGE_PRESENT)
>  	};
>
> -	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_lock(&init_mm);
> +	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_unlock(&init_mm);
> +
> +	return ret;
>  }
>
>  int set_direct_map_default_noflush(struct page *page)
>  {
> +	int ret;
>  	unsigned long start = (unsigned long)page_address(page);
>  	unsigned long end = start + PAGE_SIZE;
>  	struct pageattr_masks masks = {
> @@ -170,7 +176,11 @@ int set_direct_map_default_noflush(struct page *page)
>  		.clear_mask = __pgprot(0)
>  	};
>
> -	return walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_lock(&init_mm);
> +	ret = walk_page_range(&init_mm, start, end, &pageattr_ops, &masks);
> +	mmap_read_unlock(&init_mm);
> +
> +	return ret;
>  }
>
>  void __kernel_map_pages(struct page *page, int numpages, int enable)

+Will, who pointed out that we could avoid the lock by using apply_to_page_range().

Given that the bug doesn't reproduce for me, we don't otherwise use
apply_to_page_range(), and the commit is somewhat suspect (I screwed up that
PR, and the original patch mentions avoiding caching invalid states), I'm
going to take this as is and add it to the list of things to look at.

I've put this on fixes: walk_page_range() explicitly says you must take the
lock, and I don't want to hold up a fix for a boot issue over pedantic
concerns, even if it's one that doesn't show up for me.

Thanks!
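
For reference, a rough sketch of the apply_to_page_range() approach Will
suggested, modeled on arm64's arch/arm64/mm/pageattr.c. This is untested and
the change_page_range() callback below is hypothetical for RISC-V; the point
is that apply_to_page_range() operates on the kernel page tables directly and
does not assert the mmap lock:

static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
{
	struct pageattr_masks *masks = data;
	unsigned long val = pte_val(*ptep);

	/* Apply the same set/clear masks the walk_page_range() callbacks use */
	val &= ~pgprot_val(masks->clear_mask);
	val |= pgprot_val(masks->set_mask);
	set_pte(ptep, __pte(val));

	return 0;
}

int set_direct_map_invalid_noflush(struct page *page)
{
	struct pageattr_masks masks = {
		.set_mask = __pgprot(0),
		.clear_mask = __pgprot(_PAGE_PRESENT)
	};

	/*
	 * Unlike walk_page_range(), apply_to_page_range() does not require
	 * the caller to hold init_mm's mmap lock.
	 */
	return apply_to_page_range(&init_mm,
				   (unsigned long)page_address(page),
				   PAGE_SIZE, change_page_range, &masks);
}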


