* [PATCH] arm64: Correctly bounds check virt_addr_valid
@ 2016-09-21 17:28 Laura Abbott
  2016-09-21 17:43 ` Laura Abbott
  2016-09-21 17:58 ` Mark Rutland
  0 siblings, 2 replies; 5+ messages in thread
From: Laura Abbott @ 2016-09-21 17:28 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland
  Cc: Laura Abbott, Kees Cook, linux-arm-kernel, linux-kernel

virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address is given and passes that to pfn_valid to verify. vmalloc and
module addresses can generate a pfn that happens to be
valid. Fix this by only performing the pfn_valid check on addresses that
have the potential to be valid.

Signed-off-by: Laura Abbott <labbott@redhat.com>
---
This has caused a bug at least twice in hardened usercopy, so it is an
actual problem. A further TODO is full DEBUG_VIRTUAL support to
catch these types of mistakes.
---
 arch/arm64/include/asm/memory.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 31b7322..f741e19 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
 #else
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
@@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
-#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
-					   + PHYS_OFFSET) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
+					   + PHYS_OFFSET) >> PAGE_SHIFT))
 #endif
 #endif
 
-- 
2.7.4
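
To make the failure mode concrete: the sketch below is a standalone
userspace illustration (made-up constants and a hypothetical pfn range;
only the masking arithmetic mirrors the VMEMMAP variant of
virt_addr_valid above) of how an address below PAGE_OFFSET can still
produce a pfn that pfn_valid accepts, and how the added bounds check
rejects it.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define PAGE_OFFSET  0xffff800000000000ULL  /* assumed linear-map base */
#define PHYS_OFFSET  0x80000000ULL          /* assumed start of RAM */

/* Pretend RAM covers pfns [0x80000, 0xc0000) for this example. */
static int pfn_valid(uint64_t pfn)
{
	return pfn >= 0x80000 && pfn < 0xc0000;
}

int main(void)
{
	/* A vmalloc/module-style address that lies below PAGE_OFFSET. */
	uint64_t addr = 0xffff000000100000ULL;
	/* The same math as the VMEMMAP virt_addr_valid above. */
	uint64_t pfn = ((addr & ~PAGE_OFFSET) + PHYS_OFFSET) >> PAGE_SHIFT;

	printf("pfn=%#llx pfn_valid=%d\n",
	       (unsigned long long)pfn, pfn_valid(pfn));	/* valid! */
	printf("bounds check passes=%d\n", addr >= PAGE_OFFSET);	/* 0 */
	return 0;
}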


* Re: [PATCH] arm64: Correctly bounds check virt_addr_valid
  2016-09-21 17:28 [PATCH] arm64: Correctly bounds check virt_addr_valid Laura Abbott
@ 2016-09-21 17:43 ` Laura Abbott
  2016-09-21 17:58 ` Mark Rutland
  1 sibling, 0 replies; 5+ messages in thread
From: Laura Abbott @ 2016-09-21 17:43 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland
  Cc: Kees Cook, linux-arm-kernel, linux-kernel

On 09/21/2016 10:28 AM, Laura Abbott wrote:
> virt_addr_valid is supposed to return true if and only if virt_to_page
> returns a valid page structure. The current macro does math on whatever
> address is given and passes that to pfn_valid to verify. vmalloc and
> module addresses can generate a pfn that happens to be
> valid. Fix this by only performing the pfn_valid check on addresses that
> have the potential to be valid.
>
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
> This has caused a bug at least twice in hardened usercopy, so it is an
> actual problem. A further TODO is full DEBUG_VIRTUAL support to
> catch these types of mistakes.
> ---
>  arch/arm64/include/asm/memory.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 31b7322..f741e19 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>
>  #ifndef CONFIG_SPARSEMEM_VMEMMAP
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
>  #else
>  #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
>  #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> @@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
>
> -#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> -					   + PHYS_OFFSET) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> +					   + PHYS_OFFSET) >> PAGE_SHIFT))
>  #endif
>  #endif
>
>

Bah, I realized I butchered the macro parenthesization. I'll fix that
in a v2. I'll wait for comments on this first.

Thanks,
Laura
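
For reference, the parenthesization problem is the bare macro argument
inside the cast: the patch writes ((u64)kaddr) where the usual idiom is
((u64)(kaddr)). A standalone sketch (hypothetical macros, just to show
the hazard):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

#define BAD(kaddr)	((u64)kaddr)	/* cast binds tighter than '+' */
#define GOOD(kaddr)	((u64)(kaddr))	/* argument fully parenthesized */

int main(void)
{
	int arr[2];
	int *p = arr;

	/* BAD(p + 1) expands to ((u64)p + 1): the cast applies to p
	 * alone, so 1 is added as a byte offset instead of advancing
	 * the int pointer, and the two results differ by 3. */
	printf("bad  = %#llx\n", (unsigned long long)BAD(p + 1));
	printf("good = %#llx\n", (unsigned long long)GOOD(p + 1));
	return 0;
}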


* Re: [PATCH] arm64: Correctly bounds check virt_addr_valid
  2016-09-21 17:28 [PATCH] arm64: Correctly bounds check virt_addr_valid Laura Abbott
  2016-09-21 17:43 ` Laura Abbott
@ 2016-09-21 17:58 ` Mark Rutland
  2016-09-21 19:34   ` Laura Abbott
  1 sibling, 1 reply; 5+ messages in thread
From: Mark Rutland @ 2016-09-21 17:58 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel

Hi,

On Wed, Sep 21, 2016 at 10:28:48AM -0700, Laura Abbott wrote:
> virt_addr_valid is supposed to return true if and only if virt_to_page
> returns a valid page structure. The current macro does math on whatever
> address is given and passes that to pfn_valid to verify. vmalloc and
> module addresses can generate a pfn that happens to be
> valid. Fix this by only performing the pfn_valid check on addresses that
> have the potential to be valid.
> 
> Signed-off-by: Laura Abbott <labbott@redhat.com>
> ---
> This has caused a bug at least twice in hardened usercopy, so it is an
> actual problem.

Are there other potentially-broken users of virt_addr_valid? It's not
clear to me what some drivers are doing with this, and therefore whether
we need to cc stable.

> A further TODO is full DEBUG_VIRTUAL support to
> catch these types of mistakes.
> ---
>  arch/arm64/include/asm/memory.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 31b7322..f741e19 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  
>  #ifndef CONFIG_SPARSEMEM_VMEMMAP
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
>  #else
>  #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
>  #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
> @@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
>  
> -#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> -					   + PHYS_OFFSET) >> PAGE_SHIFT)
> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
> +					   + PHYS_OFFSET) >> PAGE_SHIFT))
>  #endif
>  #endif

Given the common sub-expression, perhaps it would be better to leave
these as-is, but prefix them with '_', and after the #endif, have
something like:

#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))

Otherwise, modulo the parenthesis issue you mentioned, this looks
logically correct to me.

Thanks,
Mark.
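
Applied to both #ifdef branches, that suggestion might look something
like the sketch below (shape only, untested; the exact v2 form is up to
the author):

#ifndef CONFIG_SPARSEMEM_VMEMMAP
#define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#else
#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
					   + PHYS_OFFSET) >> PAGE_SHIFT)
#endif

#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && \
					 _virt_addr_valid(kaddr))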


* Re: [PATCH] arm64: Correctly bounds check virt_addr_valid
  2016-09-21 17:58 ` Mark Rutland
@ 2016-09-21 19:34   ` Laura Abbott
  2016-09-21 20:06     ` Mark Rutland
  0 siblings, 1 reply; 5+ messages in thread
From: Laura Abbott @ 2016-09-21 19:34 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel

On 09/21/2016 10:58 AM, Mark Rutland wrote:
> Hi,
>
> On Wed, Sep 21, 2016 at 10:28:48AM -0700, Laura Abbott wrote:
>> virt_addr_valid is supposed to return true if and only if virt_to_page
>> returns a valid page structure. The current macro does math on whatever
>> address is given and passes that to pfn_valid to verify. vmalloc and
>> module addresses can generate a pfn that happens to be
>> valid. Fix this by only performing the pfn_valid check on addresses that
>> have the potential to be valid.
>>
>> Signed-off-by: Laura Abbott <labbott@redhat.com>
>> ---
>> This has caused a bug at least twice in hardened usercopy, so it is an
>> actual problem.
>
> Are there other potentially-broken users of virt_addr_valid? It's not
> clear to me what some drivers are doing with this, and therefore whether
> we need to cc stable.
>

The number of users is pretty limited. Some of them use it as a debugging
check; others use it more like hardened usercopy does. The number of
users that would actually be affected on arm64 seems so small that I
don't think it's worth trying to backport to stable. Hardened usercopy
was getting hit particularly hard because usercopy was happening on all
types of memory, whereas the drivers tend to be more limited in scope.
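
The usage pattern in question looks roughly like this hypothetical
helper (illustrative only, not the actual mm/usercopy.c code): callers
gate virt_to_page() on virt_addr_valid(), so a false positive hands a
bogus struct page to everything downstream.

#include <linux/mm.h>	/* virt_addr_valid(), virt_to_page() */

/* Hypothetical helper showing the hardened-usercopy-style check. */
static inline struct page *addr_to_page_checked(const void *ptr)
{
	if (!virt_addr_valid(ptr))
		return NULL;		/* vmalloc/module addresses must stop here */
	return virt_to_page(ptr);	/* meaningful only for linear-map addresses */
}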

>> A further TODO is full DEBUG_VIRTUAL support to
>> catch these types of mistakes.
>> ---
>>  arch/arm64/include/asm/memory.h | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 31b7322..f741e19 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -214,7 +214,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>>
>>  #ifndef CONFIG_SPARSEMEM_VMEMMAP
>>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> -#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
>> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
>>  #else
>>  #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
>>  #define __page_to_voff(kaddr)	(((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
>> @@ -222,8 +222,8 @@ static inline void *phys_to_virt(phys_addr_t x)
>>  #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
>>  #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
>>
>> -#define virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
>> -					   + PHYS_OFFSET) >> PAGE_SHIFT)
>> +#define virt_addr_valid(kaddr)	(((u64)kaddr) >= PAGE_OFFSET && pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
>> +					   + PHYS_OFFSET) >> PAGE_SHIFT))
>>  #endif
>>  #endif
>
> Given the common sub-expression, perhaps it would be better to leave
> these as-is, but prefix them with '_', and after the #endif, have
> something like:
>
> #define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
> #define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))
>

Good suggestion.

> Otherwise, modulo the parenthesis issue you mentioned, this looks
> logically correct to me.
>
> Thanks,
> Mark.
>

Thanks,
Laura


* Re: [PATCH] arm64: Correctly bounds check virt_addr_valid
  2016-09-21 19:34   ` Laura Abbott
@ 2016-09-21 20:06     ` Mark Rutland
  0 siblings, 0 replies; 5+ messages in thread
From: Mark Rutland @ 2016-09-21 20:06 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Kees Cook,
	linux-arm-kernel, linux-kernel

On Wed, Sep 21, 2016 at 12:34:46PM -0700, Laura Abbott wrote:
> On 09/21/2016 10:58 AM, Mark Rutland wrote:
> >Are there other potentially-broken users of virt_addr_valid? It's not
> >clear to me what some drivers are doing with this, and therefore whether
> >we need to cc stable.
> 
> The number of users is pretty limited. Some of them use it as a debugging
> check; others use it more like hardened usercopy does. The number of
> users that would actually be affected on arm64 seems so small that I
> don't think it's worth trying to backport to stable.

Ok.

> Hardened usercopy was getting hit particularly hard because usercopy
> was happening on all types of memory, whereas the drivers tend to be
> more limited in scope.

Sure.

> >Given the common sub-expression, perhaps it would be better to leave
> >these as-is, but prefix them with '_', and after the #endif, have
> >something like:
> >
> >#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
> >#define virt_addr_valid(kaddr)		(_virt_addr_is_linear(kaddr) && _virt_addr_valid(kaddr))
> >
> 
> Good suggestion.

FWIW, with that, feel free to add:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

