From: Vincenzo Frascino <vincenzo.frascino@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>,
	linux-arm-kernel@lists.infradead.org
Cc: Luis Machado <luis.machado@linaro.org>,
	Kevin Brodsky <kevin.brodsky@arm.com>,
	Steven Price <steven.price@arm.com>,
	stable@vger.kernel.org, Will Deacon <will@kernel.org>
Subject: Re: [PATCH] arm64: mte: Allow PTRACE_PEEKMTETAGS access to the zero page
Date: Thu, 11 Feb 2021 10:56:46 +0000	[thread overview]
Message-ID: <aa94d2b9-d2f1-04fd-7cfe-8a1ab078e5c3@arm.com> (raw)
In-Reply-To: <20210210180316.23654-1-catalin.marinas@arm.com>



On 2/10/21 6:03 PM, Catalin Marinas wrote:
> The ptrace(PTRACE_PEEKMTETAGS) implementation checks whether the user
> page has valid tags (mapped with PROT_MTE) by testing the PG_mte_tagged
> page flag. If this bit is cleared, ptrace(PTRACE_PEEKMTETAGS) returns
> -EIO.
> 
> A newly created (PROT_MTE) mapping points to the zero page which had its
> tags zeroed during cpu_enable_mte(). If there were no prior writes to
> this mapping, ptrace(PTRACE_PEEKMTETAGS) fails with -EIO since the zero
> page does not have the PG_mte_tagged flag set.
> 
> Set PG_mte_tagged on the zero page when its tags are cleared during
> boot. In addition, to avoid ptrace(PTRACE_PEEKMTETAGS) succeeding on
> !PROT_MTE mappings pointing to the zero page, change the
> __access_remote_tags() check to (vm_flags & VM_MTE) instead of
> PG_mte_tagged.
> 
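
For reference, the failing case is easy to hit from userspace: a tracee
that creates a PROT_MTE anonymous mapping and never writes to it, and a
tracer that then issues PTRACE_PEEKMTETAGS on that address gets -EIO
without this patch. A rough sketch (untested, error handling omitted; the
PROT_MTE and PTRACE_PEEKMTETAGS values and the iovec-based tag transfer
are taken from the MTE ABI documentation, and it obviously needs an
MTE-capable system):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef PROT_MTE
#define PROT_MTE		0x20	/* arm64-specific mmap/mprotect flag */
#endif
#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS	33
#endif

int main(void)
{
	unsigned char tags[16];
	struct iovec iov = { .iov_base = tags, .iov_len = sizeof(tags) };
	int pipefd[2];
	void *addr;
	pid_t pid;

	pipe(pipefd);
	pid = fork();
	if (pid == 0) {
		/* Tracee: PROT_MTE mapping, read-only access so the pte
		 * ends up pointing at the zero page. */
		addr = mmap(NULL, getpagesize(),
			    PROT_READ | PROT_WRITE | PROT_MTE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		(void)*(volatile char *)addr;
		write(pipefd[1], &addr, sizeof(addr));
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);
		_exit(0);
	}

	/* Tracer: wait for the tracee to stop, then peek its tags */
	read(pipefd[0], &addr, sizeof(addr));
	waitpid(pid, NULL, 0);

	if (ptrace(PTRACE_PEEKMTETAGS, pid, addr, &iov))
		printf("PEEKMTETAGS: %s\n", strerror(errno)); /* EIO before this patch */
	else
		printf("%zu tags, tag[0] = %u\n", iov.iov_len, tags[0] & 0xfU);

	kill(pid, SIGKILL);
	return 0;
}
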
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Fixes: 34bfeea4a9e9 ("arm64: mte: Clear the tags when a page is mapped in user-space with PROT_MTE")
> Cc: <stable@vger.kernel.org> # 5.10.x
> Cc: Will Deacon <will@kernel.org>
> Reported-by: Luis Machado <luis.machado@linaro.org>
> ---
> 
> The fix is actually checking VM_MTE instead of PG_mte_tagged in
> __access_remote_tags() but I added the WARN_ON(!PG_mte_tagged) and
> setting the flag on the zero page in case we break this assumption in
> the future.
> 
>  arch/arm64/kernel/cpufeature.c | 6 +-----
>  arch/arm64/kernel/mte.c        | 3 ++-
>  2 files changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index e99eddec0a46..3e6331b64932 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -1701,16 +1701,12 @@ static void bti_enable(const struct arm64_cpu_capabilities *__unused)
>  #ifdef CONFIG_ARM64_MTE
>  static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
>  {
> -	static bool cleared_zero_page = false;
> -
>  	/*
>  	 * Clear the tags in the zero page. This needs to be done via the
>  	 * linear map which has the Tagged attribute.
>  	 */
> -	if (!cleared_zero_page) {
> -		cleared_zero_page = true;
> +	if (!test_and_set_bit(PG_mte_tagged, &ZERO_PAGE(0)->flags))
>  		mte_clear_page_tags(lm_alias(empty_zero_page));
> -	}
>  
>  	kasan_init_hw_tags_cpu();
>  }
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index dc9ada64feed..80b62fe49dcf 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -329,11 +329,12 @@ static int __access_remote_tags(struct mm_struct *mm, unsigned long addr,
>  		 * would cause the existing tags to be cleared if the page
>  		 * was never mapped with PROT_MTE.
>  		 */
> -		if (!test_bit(PG_mte_tagged, &page->flags)) {
> +		if (!(vma->vm_flags & VM_MTE)) {
>  			ret = -EOPNOTSUPP;
>  			put_page(page);
>  			break;
>  		}
> +		WARN_ON_ONCE(!test_bit(PG_mte_tagged, &page->flags));
>  

Nit: I would leave a blank line before WARN_ON_ONCE() to improve readability,
and maybe turn it into WARN_ONCE() with a message (or alternatively a comment
on top) based on what you explain in the commit message.
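
Something along these lines, for example (just a sketch, the exact wording
is of course up to you):

	WARN_ONCE(!test_bit(PG_mte_tagged, &page->flags),
		  "mte: VM_MTE vma maps a page without PG_mte_tagged\n");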

Otherwise:

Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

>  		/* limit access to the end of the page */
>  		offset = offset_in_page(addr);
> 

-- 
Regards,
Vincenzo
