linux-mm.kvack.org archive mirror
From: Khalid Aziz <khalid.aziz@oracle.com>
To: Jann Horn <jannh@google.com>,
	"David S. Miller" <davem@davemloft.net>,
	sparclinux@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
	Christoph Hellwig <hch@infradead.org>,
	Anthony Yznaga <anthony.yznaga@oracle.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH 1/2] mm/mprotect: Call arch_validate_prot under mmap_lock and with length
Date: Wed, 7 Oct 2020 14:14:09 -0600
Message-ID: <d5332a7b-c300-6d28-18b9-4b7d4110ef86@oracle.com>
In-Reply-To: <20201007073932.865218-1-jannh@google.com>

On 10/7/20 1:39 AM, Jann Horn wrote:
> arch_validate_prot() is a hook that can validate whether a given set of
> protection flags is valid in an mprotect() operation. It is given the set
> of protection flags and the address being modified.
> 
> However, the address being modified can currently not actually be used in
> a meaningful way because:
> 
> 1. Only the address is given, but not the length, and the operation can
>    span multiple VMAs. Therefore, the callee can't actually tell which
>    virtual address range, or which VMAs, are being targeted.
> 2. The mmap_lock is not held, meaning that if the callee were to check
>    the VMA at @addr, that VMA would be unrelated to the one the
>    operation is performed on.
> 
> Currently, custom arch_validate_prot() handlers are defined by
> arm64, powerpc and sparc.
> arm64 and powerpc don't care about the address range, they just check the
> flags against CPU support masks.
> sparc's arch_validate_prot() attempts to look at the VMA, but doesn't take
> the mmap_lock.
> 
> Change the function signature to also take a length, and move the
> arch_validate_prot() call in mm/mprotect.c down into the locked region.
> 
> Cc: stable@vger.kernel.org
> Fixes: 9035cf9a97e4 ("mm: Add address parameter to arch_validate_prot()")
> Suggested-by: Khalid Aziz <khalid.aziz@oracle.com>
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Jann Horn <jannh@google.com>
> ---
>  arch/arm64/include/asm/mman.h   | 4 ++--
>  arch/powerpc/include/asm/mman.h | 3 ++-
>  arch/powerpc/kernel/syscalls.c  | 2 +-
>  arch/sparc/include/asm/mman.h   | 6 ++++--
>  include/linux/mman.h            | 3 ++-
>  mm/mprotect.c                   | 6 ++++--
>  6 files changed, 15 insertions(+), 9 deletions(-)


This looks good to me.

As Chris pointed out, the call to arch_validate_prot() from do_mmap2()
is made without holding mmap_lock; the lock is not acquired until
vm_mmap_pgoff(). This inconsistency is uncomfortable, but I am more
uncomfortable forcing all implementations of arch_validate_prot() to
require mmap_lock to be held when the non-sparc implementations have no
such need yet. Since do_mmap2() is powerpc-specific code, this patch
solves a current problem for now. That leaves open the question of
whether the generic mmap path should also call arch_validate_prot() and
return -EINVAL for an invalid combination of protection bits, but that
is better addressed in a separate patch.
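
For illustration, the kind of range-aware check that the new signature
and locking make possible looks roughly like this. This is only a
simplified sketch, not the actual sparc code from patch 2/2 (for one
thing, it does not reject holes in the range):

/*
 * Hypothetical sketch: walk every VMA in [addr, addr + len) and
 * require VM_SPARC_ADI on each one before allowing PROT_ADI.
 * Only safe because the caller now holds mmap_lock across the
 * whole operation.
 */
static inline int sparc_validate_prot(unsigned long prot, unsigned long addr,
				      unsigned long len)
{
	struct vm_area_struct *vma;

	mmap_assert_write_locked(current->mm);

	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
		return 0;

	if (prot & PROT_ADI) {
		for (vma = find_vma(current->mm, addr);
		     vma && vma->vm_start < addr + len;
		     vma = vma->vm_next) {
			if (!(vma->vm_flags & VM_SPARC_ADI))
				return 0;	/* ADI cannot be enabled here */
		}
	}
	return 1;
}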

Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>

> 
> diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
> index 081ec8de9ea6..0876a87986dc 100644
> --- a/arch/arm64/include/asm/mman.h
> +++ b/arch/arm64/include/asm/mman.h
> @@ -23,7 +23,7 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
>  #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
>  
>  static inline bool arch_validate_prot(unsigned long prot,
> -	unsigned long addr __always_unused)
> +	unsigned long addr __always_unused, unsigned long len __always_unused)
>  {
>  	unsigned long supported = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
>  
> @@ -32,6 +32,6 @@ static inline bool arch_validate_prot(unsigned long prot,
>  
>  	return (prot & ~supported) == 0;
>  }
> -#define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)
> +#define arch_validate_prot(prot, addr, len) arch_validate_prot(prot, addr, len)
>  
>  #endif /* ! __ASM_MMAN_H__ */
> diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
> index 7cb6d18f5cd6..65dd9b594985 100644
> --- a/arch/powerpc/include/asm/mman.h
> +++ b/arch/powerpc/include/asm/mman.h
> @@ -36,7 +36,8 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
>  }
>  #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
>  
> -static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
> +static inline bool arch_validate_prot(unsigned long prot, unsigned long addr,
> +				      unsigned long len)
>  {
>  	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
>  		return false;
> diff --git a/arch/powerpc/kernel/syscalls.c b/arch/powerpc/kernel/syscalls.c
> index 078608ec2e92..b1fabb97d138 100644
> --- a/arch/powerpc/kernel/syscalls.c
> +++ b/arch/powerpc/kernel/syscalls.c
> @@ -43,7 +43,7 @@ static inline long do_mmap2(unsigned long addr, size_t len,
>  {
>  	long ret = -EINVAL;
>  
> -	if (!arch_validate_prot(prot, addr))
> +	if (!arch_validate_prot(prot, addr, len))
>  		goto out;
>  
>  	if (shift) {
> diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
> index f94532f25db1..e85222c76585 100644
> --- a/arch/sparc/include/asm/mman.h
> +++ b/arch/sparc/include/asm/mman.h
> @@ -52,9 +52,11 @@ static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
>  	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
>  }
>  
> -#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
> -static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
> +#define arch_validate_prot(prot, addr, len) sparc_validate_prot(prot, addr, len)
> +static inline int sparc_validate_prot(unsigned long prot, unsigned long addr,
> +				      unsigned long len)
>  {
> +	mmap_assert_write_locked(current->mm);
>  	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
>  		return 0;
>  	if (prot & PROT_ADI) {
> diff --git a/include/linux/mman.h b/include/linux/mman.h
> index 6f34c33075f9..5b4d554d3189 100644
> --- a/include/linux/mman.h
> +++ b/include/linux/mman.h
> @@ -96,7 +96,8 @@ static inline void vm_unacct_memory(long pages)
>   *
>   * Returns true if the prot flags are valid
>   */
> -static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
> +static inline bool arch_validate_prot(unsigned long prot, unsigned long addr,
> +				      unsigned long len)
>  {
>  	return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
>  }
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index ce8b8a5eacbb..e2d6b51acbf8 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -533,14 +533,16 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
>  	end = start + len;
>  	if (end <= start)
>  		return -ENOMEM;
> -	if (!arch_validate_prot(prot, start))
> -		return -EINVAL;
>  
>  	reqprot = prot;
>  
>  	if (mmap_write_lock_killable(current->mm))
>  		return -EINTR;
>  
> +	error = -EINVAL;
> +	if (!arch_validate_prot(prot, start, len))
> +		goto out;
> +
>  	/*
>  	 * If userspace did not allocate the pkey, do not let
>  	 * them use it here.
> 
> base-commit: c85fb28b6f999db9928b841f63f1beeb3074eeca
> 
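For completeness, the userspace-visible contract is unchanged by moving
the check under the lock: an unsupported protection bit still makes
mprotect() fail with EINVAL. A minimal sanity test along those lines,
assuming 0x40 is not a valid PROT_* bit on the architecture under test:

#include <assert.h>
#include <errno.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pagesz = sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	assert(p != MAP_FAILED);

	/* 0x40 is outside PROT_READ|WRITE|EXEC|SEM on common arches,
	 * so arch_validate_prot() should reject it with EINVAL. */
	assert(mprotect(p, pagesz, 0x40) == -1 && errno == EINVAL);

	munmap(p, pagesz);
	return 0;
}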

Thread overview: 20+ messages
2020-10-07  7:39 [PATCH 1/2] mm/mprotect: Call arch_validate_prot under mmap_lock and with length Jann Horn
2020-10-07  7:39 ` [PATCH 2/2] sparc: Check VMA range in sparc_validate_prot() Jann Horn
2020-10-07 12:36   ` Christoph Hellwig
2020-10-07 20:15   ` Khalid Aziz
2020-10-07 12:35 ` [PATCH 1/2] mm/mprotect: Call arch_validate_prot under mmap_lock and with length Christoph Hellwig
2020-10-07 14:42   ` Jann Horn
2020-10-08  6:21     ` Christoph Hellwig
2020-10-08 10:34     ` Michael Ellerman
2020-10-08 11:03       ` Catalin Marinas
2020-10-07 20:14 ` Khalid Aziz [this message]
2020-10-10 11:09   ` Catalin Marinas
2020-10-12 17:03     ` Khalid Aziz
2020-10-12 17:22       ` Catalin Marinas
2020-10-12 19:14         ` Khalid Aziz
2020-10-13  9:16           ` Catalin Marinas
2020-10-14 21:21             ` Khalid Aziz
2020-10-14 22:29               ` Jann Horn
2020-10-15  9:05               ` Catalin Marinas
2020-10-15 14:53                 ` Khalid Aziz
2020-10-08 10:12 ` Catalin Marinas
