stable.vger.kernel.org archive mirror
* [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
@ 2021-02-10 18:03 Catalin Marinas
  2021-02-10 18:52 ` Luis Machado
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Catalin Marinas @ 2021-02-10 18:03 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Luis Machado, Kevin Brodsky, Vincenzo Frascino, Steven Price,
	stable, Will Deacon

The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
-EIO.

A newly created (PROT_MTE) mapping points to the zero page which had its
tags zeroed during cpu_enable_mte(). If there were no prior writes to
this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
page does not have the PG_mte_tagged flag set.

Set PG_mte_tagged on the zero page when its tags are cleared during
boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
!PROT_MTE mappings pointing to the zero page, change the
__access_remote_tags() check to (vm_flags & VM_MTE) instead of
PG_mte_tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Reported-by: Luis Machado <luis.machado@linaro.org>
---

The actual fix is checking VM_MTE instead of PG_mte_tagged in
__access_remote_tags(), but I also added the WARN_ON(!PG_mte_tagged) and
set the flag on the zero page in case we break this assumption in the
future.
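
For reference, a minimal tracer-side sketch of the call that was failing
(the peek_tags() helper and the fallback #define are illustrative, not part
of this patch; the request takes a struct iovec with one tag byte per
16-byte granule):

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS	33	/* arm64 uapi value, in case the headers lack it */
#endif

/*
 * Read the allocation tags backing 'addr' in a stopped tracee, one tag
 * byte per 16-byte granule. Without this patch, the call fails with EIO
 * for a PROT_MTE mapping that was never written, since such a mapping
 * still points at the (untagged) zero page.
 */
static ssize_t peek_tags(pid_t pid, void *addr, unsigned char *tags,
			 size_t ntags)
{
	struct iovec iov = {
		.iov_base = tags,
		.iov_len  = ntags,
	};

	if (ptrace(PTRACE_PEEKMTETAGS, pid, addr, &iov) < 0) {
		perror("PTRACE_PEEKMTETAGS");
		return -1;
	}

	/* the kernel updates iov_len to the number of tags copied */
	return iov.iov_len;
}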

 arch/arm64/kernel/cpufeature.c | 6 +-----
 arch/arm64/kernel/mte.c        | 3 ++-
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99eddec0a46..3e6331b64932 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 #ifdef CONFIG_ARM64_MTE
 static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 {
-	static bool cleared_zero_page = false;
-
 	/*
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!cleared_zero_page) {
-		cleared_zero_page = true;
+	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
 		mte_clear_page_tags(lm_alias(empty_zero_page));
-	}
 
 	kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index dc9ada64feed..80b62fe49dcf 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		 * would cause the existing tags to be cleared if the page
 		 * was never mapped with PROT_MTE.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!(vma->vm_flags & VM_MTE)) {
 			ret = -EOPNOTSUPP;
 			put_page(page);
 			break;
 		}
+		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);


* Re: [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
  2021-02-10 18:03 [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page Catalin Marinas
@ 2021-02-10 18:52 ` Luis Machado
  2021-02-11 10:35   ` Catalin Marinas
  2021-02-11 10:56 ` Vincenzo Frascino
  2021-02-12 16:45 ` Catalin Marinas
  2 siblings, 1 reply; 6+ messages in thread
From: Luis Machado @ 2021-02-10 18:52 UTC (permalink / raw)
  To: Catalin Marinas, linux-arm-kernel
  Cc: Kevin Brodsky, Vincenzo Frascino, Steven Price, stable,
	Will Deacon, David Spickett

On 2/10/21 3:03 PM, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
> 
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
> 
> Set PG_mte_tagged on the zero page when its tags are cleared during
> boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
> !PROT_MTE mappings pointing to the zero page, change the
> __access_remote_tags() check to (vm_flags & VM_MTE) instead of
> PG_mte_tagged.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Cc: Will Deacon <will@kernel.org>
> Reported-by: Luis Machado <luis.machado@linaro.org>
> ---
> 
> The actual fix is checking VM_MTE instead of PG_mte_tagged in
> __access_remote_tags(), but I also added the WARN_ON(!PG_mte_tagged) and
> set the flag on the zero page in case we break this assumption in the
> future.
> 
>   arch/arm64/kernel/cpufeature.c | 6 +-----
>   arch/arm64/kernel/mte.c        | 3 ++-
>   2 files changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e99eddec0a46..3e6331b64932 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
>   #ifdef CONFIG_ARM64_MTE
>   static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>   {
> -	static bool cleared_zero_page = false;
> -
>   	/*
>   	 * Clear the tags in the zero page. This needs to be done via the
>   	 * linear map which has the Tagged attribute.
>   	 */
> -	if (!cleared_zero_page) {
> -		cleared_zero_page = true;
> +	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
>   		mte_clear_page_tags(lm_alias(empty_zero_page));
> -	}
>   
>   	kasan_init_hw_tags_cpu();
>   }
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index dc9ada64feed..80b62fe49dcf 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>   		 * would cause the existing tags to be cleared if the page
>   		 * was never mapped with PROT_MTE.
>   		 */
> -		if (!test_bit(PG_mte_tagged, &page->flags)) {
> +		if (!(vma->vm_flags & VM_MTE)) {
>   			ret = -EOPNOTSUPP;
>   			put_page(page);
>   			break;
>   		}
> +		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
>   
>   		/* limit access to the end of the page */
>   		offset = offset_in_page(addr);
> 

Thanks. I gave this a try and it works as expected. So memory that is 
PROT_MTE but has not been accessed yet can be inspected with PEEKMTETAGS 
without getting an EIO back.
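
A minimal tracee-side sketch of that scenario (illustrative only, not the
exact test; the PROT_MTE fallback #define is only needed if the uapi
headers predate MTE):

#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE	0x20	/* arm64 uapi value, in case the headers lack it */
#endif

int main(void)
{
	/*
	 * PROT_MTE anonymous mapping that is never written: it still
	 * points at the zero page, which is what PTRACE_PEEKMTETAGS used
	 * to reject with EIO before this fix.
	 */
	void *p = mmap(NULL, sysconf(_SC_PAGESIZE),
		       PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("%p\n", p);	/* address for the tracer to peek tags from */
	fflush(stdout);
	raise(SIGSTOP);		/* stop so the tracer can attach and peek */

	return 0;
}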


* Re: [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
  2021-02-10 18:52 ` Luis Machado
@ 2021-02-11 10:35   ` Catalin Marinas
  0 siblings, 0 replies; 6+ messages in thread
From: Catalin Marinas @ 2021-02-11 10:35 UTC (permalink / raw)
  To: Luis Machado
  Cc: linux-arm-kernel, Kevin Brodsky, Vincenzo Frascino, Steven Price,
	stable, Will Deacon, David Spickett

On Wed, Feb 10, 2021 at 03:52:18PM -0300, Luis Machado wrote:
> On 2/10/21 3:03 PM, Catalin Marinas wrote:
> > The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> > page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> > page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> > -EIO.
> > 
> > A newly created (PROT_MTE) mapping points to the zero page which had its
> > tags zeroed during cpu_enable_mte(). If there were no prior writes to
> > this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> > page does not have the PG_mte_tagged flag set.
> > 
> > Set PG_mte_tagged on the zero page when its tags are cleared during
> > boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
> > !PROT_MTE mappings pointing to the zero page, change the
> > __access_remote_tags() check to (vm_flags & VM_MTE) instead of
> > PG_mte_tagged.
> > 
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> > Cc: <stable@vger.kernel.org> # 5.10.x
> > Cc: Will Deacon <will@kernel.org>
> > Reported-by: Luis Machado <luis.machado@linaro.org>
[...]
> Thanks. I gave this a try and it works as expected. So memory that is
> PROT_MTE but has not been accessed yet can be inspected with PEEKMTETAGS
> without getting an EIO back.

Thanks. I assume I can add your tested-by.

-- 
Catalin


* Re: [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
  2021-02-10 18:03 [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page Catalin Marinas
  2021-02-10 18:52 ` Luis Machado
@ 2021-02-11 10:56 ` Vincenzo Frascino
  2021-02-12 16:45 ` Catalin Marinas
  2 siblings, 0 replies; 6+ messages in thread
From: Vincenzo Frascino @ 2021-02-11 10:56 UTC (permalink / raw)
  To: Catalin Marinas, linux-arm-kernel
  Cc: Luis Machado, Kevin Brodsky, Steven Price, stable, Will Deacon



On 2/10/21 6:03 PM, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
> 
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
> 
> Set PG_mte_tagged on the zero page when its tags are cleared during
> boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
> !PROT_MTE mappings pointing to the zero page, change the
> __access_remote_tags() check to (vm_flags & VM_MTE) instead of
> PG_mte_tagged.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Cc: Will Deacon <will@kernel.org>
> Reported-by: Luis Machado <luis.machado@linaro.org>
> ---
> 
> The actual fix is checking VM_MTE instead of PG_mte_tagged in
> __access_remote_tags(), but I also added the WARN_ON(!PG_mte_tagged) and
> set the flag on the zero page in case we break this assumption in the
> future.
> 
>  arch/arm64/kernel/cpufeature.c | 6 +-----
>  arch/arm64/kernel/mte.c        | 3 ++-
>  2 files changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e99eddec0a46..3e6331b64932 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
>  #ifdef CONFIG_ARM64_MTE
>  static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>  {
> -	static bool cleared_zero_page = false;
> -
>  	/*
>  	 * Clear the tags in the zero page. This needs to be done via the
>  	 * linear map which has the Tagged attribute.
>  	 */
> -	if (!cleared_zero_page) {
> -		cleared_zero_page = true;
> +	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
>  		mte_clear_page_tags(lm_alias(empty_zero_page));
> -	}
>  
>  	kasan_init_hw_tags_cpu();
>  }
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index dc9ada64feed..80b62fe49dcf 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>  		 * would cause the existing tags to be cleared if the page
>  		 * was never mapped with PROT_MTE.
>  		 */
> -		if (!test_bit(PG_mte_tagged, &page->flags)) {
> +		if (!(vma->vm_flags & VM_MTE)) {
>  			ret = -EOPNOTSUPP;
>  			put_page(page);
>  			break;
>  		}
> +		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
>  

Nit: I would leave a blank line before the WARN_ON_ONCE() to improve
readability, and maybe turn it into WARN_ONCE() with a message (or
alternatively a comment on top) based on what you are explaining in the
commit message.
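
In context, that could look something like this (untested, the message
wording is only an example):

		if (!(vma->vm_flags & VM_MTE)) {
			ret = -EOPNOTSUPP;
			put_page(page);
			break;
		}

		/* VM_MTE vmas are expected to map only tagged pages */
		WARN_ONCE(!test_bit(PG_mte_tagged, &page->flags),
			  "mte: VM_MTE vma maps a page without PG_mte_tagged\n");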

Otherwise:

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

>  		/* limit access to the end of the page */
>  		offset = offset_in_page(addr);
> 

-- 
Regards,
Vincenzo


* Re: [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
  2021-02-10 18:03 [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page Catalin Marinas
  2021-02-10 18:52 ` Luis Machado
  2021-02-11 10:56 ` Vincenzo Frascino
@ 2021-02-12 16:45 ` Catalin Marinas
  2 siblings, 0 replies; 6+ messages in thread
From: Catalin Marinas @ 2021-02-12 16:45 UTC (permalink / raw)
  To: linux-arm-kernel, Catalin Marinas
  Cc: Will Deacon, Kevin Brodsky, Steven Price, stable,
	Vincenzo Frascino, Luis Machado

On Wed, 10 Feb 2021 18:03:16 +0000, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
> 
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
> 
> [...]

Applied to arm64 (for-next/fixes), thanks!

[1/1] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
      https://git.kernel.org/arm64/c/68d54ceeec0e

-- 
Catalin



* [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
@ 2021-02-16 18:56 vivek
  0 siblings, 0 replies; 6+ messages in thread
From: vivek @ 2021-02-16 18:56 UTC (permalink / raw)
  To: bitu.kv; +Cc: Catalin Marinas, # 5.10.x, Will Deacon

From: Catalin Marinas <catalin.marinas@arm.com>

The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
-EIO.

A newly created (PROT_MTE) mapping points to the zero page which had its
tags zeroed during cpu_enable_mte(). If there were no prior writes to
this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
page does not have the PG_mte_tagged flag set.

Set PG_mte_tagged on the zero page when its tags are cleared during
boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
!PROT_MTE mappings pointing to the zero page, change the
__access_remote_tags() check to (vm_flags & VM_MTE) instead of
PG_mte_tagged.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
Cc: <stable@vger.kernel.org> # 5.10.x
Cc: Will Deacon <will@kernel.org>
Reported-by: Luis Machado <luis.machado@linaro.org>
Tested-by: Luis Machado <luis.machado@linaro.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20210210180316.23654-1-catalin.marinas@arm.com
---
 arch/arm64/kernel/cpufeature.c | 6 +-----
 arch/arm64/kernel/mte.c        | 3 ++-
 2 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index e99edde..3e6331b 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
 #ifdef CONFIG_ARM64_MTE
 static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
 {
-	static bool cleared_zero_page = false;
-
 	/*
 	 * Clear the tags in the zero page. This needs to be done via the
 	 * linear map which has the Tagged attribute.
 	 */
-	if (!cleared_zero_page) {
-		cleared_zero_page = true;
+	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
 		mte_clear_page_tags(lm_alias(empty_zero_page));
-	}
 
 	kasan_init_hw_tags_cpu();
 }
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index dc9ada6..80b62fe 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
 		 * would cause the existing tags to be cleared if the page
 		 * was never mapped with PROT_MTE.
 		 */
-		if (!test_bit(PG_mte_tagged, &page->flags)) {
+		if (!(vma->vm_flags & VM_MTE)) {
 			ret = -EOPNOTSUPP;
 			put_page(page);
 			break;
 		}
+		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
 
 		/* limit access to the end of the page */
 		offset = offset_in_page(addr);
-- 
2.7.4


