* [PATCH] arm: Correct virt_addr_valid
@ 2013-12-11  1:23 Laura Abbott
  2013-12-11  1:23 ` [PATCH] arm64: " Laura Abbott
  0 siblings, 1 reply; 13+ messages in thread
From: Laura Abbott @ 2013-12-11  1:23 UTC (permalink / raw)
  To: linux-arm-kernel

virt_addr_valid is defined to return true if and only if
virt_to_page returns a valid pointer. The current definition of
virt_addr_valid only checks against the virtual address range.
There's no guarantee that just because a virtual address falls
between PAGE_OFFSET and high_memory the associated physical memory
has a valid backing struct page. Follow the example of other
architectures and convert to pfn_valid to verify that the virtual
address is actually valid.

Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm/include/asm/memory.h |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 9ecccc8..0e84427 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -350,7 +350,7 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
 #define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET
 
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
+#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
 #endif
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11  1:23 [PATCH] arm: Correct virt_addr_valid Laura Abbott
@ 2013-12-11  1:23 ` Laura Abbott
  2013-12-11 10:44   ` Will Deacon
  0 siblings, 1 reply; 13+ messages in thread
From: Laura Abbott @ 2013-12-11  1:23 UTC (permalink / raw)
  To: linux-arm-kernel

virt_addr_valid is defined to return true if and only if
virt_to_page returns a valid pointer. The current definition of
virt_addr_valid only checks against the virtual address range.
There's no guarantee that just because a virtual address falls
between PAGE_OFFSET and high_memory the associated physical memory
has a valid backing struct page. Follow the example of other
architectures and convert to pfn_valid to verify that the virtual
address is actually valid.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
---
 arch/arm64/include/asm/memory.h |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3776217..9dc5dc3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -146,8 +146,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET
 
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
-				 ((void *)(kaddr) < (void *)high_memory))
+#define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
 #endif
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11  1:23 ` [PATCH] arm64: " Laura Abbott
@ 2013-12-11 10:44   ` Will Deacon
  2013-12-11 11:06     ` Russell King - ARM Linux
  2013-12-11 17:35     ` Laura Abbott
  0 siblings, 2 replies; 13+ messages in thread
From: Will Deacon @ 2013-12-11 10:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Dec 11, 2013 at 01:23:02AM +0000, Laura Abbott wrote:
> virt_addr_valid is defined to return true if and only if
> virt_to_page returns a valid pointer. The current definition of
> virt_addr_valid only checks against the virtual address range.
> There's no guarantee that just because a virtual address falls
> between PAGE_OFFSET and high_memory the associated physical memory
> has a valid backing struct page. Follow the example of other
> architectures and convert to pfn_valid to verify that the virtual
> address is actually valid.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
> ---
>  arch/arm64/include/asm/memory.h |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 3776217..9dc5dc3 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -146,8 +146,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>  #define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET
>  
>  #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
> -#define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> -				 ((void *)(kaddr) < (void *)high_memory))
> +#define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

Hmm, this is pretty expensive on both arm and arm64, since we end up doing a
binary search through all of the memblocks.

Are you seeing real problems with the current code?

Will


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11 10:44   ` Will Deacon
@ 2013-12-11 11:06     ` Russell King - ARM Linux
  2013-12-11 17:26       ` Will Deacon
  2013-12-11 17:35     ` Laura Abbott
  1 sibling, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2013-12-11 11:06 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Dec 11, 2013 at 10:44:29AM +0000, Will Deacon wrote:
> Hmm, this is pretty expensive on both arm and arm64, since we end up doing a
> binary search through all of the memblocks.

People say "binary search == expensive" almost as a knee jerk
reaction, because classical thinking is that binary searches are
expensive for systems with caches.  Have you considered how many
memblocks you end up with on a normal system?

How expensive is a binary search across one element?  Two elements?
Four elements?  In the very very rare case (there's only one platform)
eight elements?

For one element, it's the same as a linear search - we only have to
look at one element and confirm whether the pointer is within range.
Same for two - we check one and check the other.  As memblock is array
based, both blocks share the same cache line.

For four, it means we look at most three elements, at least two of
which share a cache line.  In terms of cache line loading, it's no
more expensive than a linear search.  In terms of CPU cycles, it's
a win because we don't need to expend cycles looking at the fourth
element.

For eight (which is starting to get into "rare" territory), it's three
cache lines and four elements, versus a linear search which can touch
up to four cache lines and obviously all eight elements.

Now, bear in mind that the normal case is one, there's a number with
two, four starts to become rare, and eight is almost non-existent...


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11 11:06     ` Russell King - ARM Linux
@ 2013-12-11 17:26       ` Will Deacon
  2013-12-11 21:13         ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Will Deacon @ 2013-12-11 17:26 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Dec 11, 2013 at 11:06:18AM +0000, Russell King - ARM Linux wrote:
> On Wed, Dec 11, 2013 at 10:44:29AM +0000, Will Deacon wrote:
> > Hmm, this is pretty expensive on both arm and arm64, since we end up doing a
> > binary search through all of the memblocks.
> 
> People say "binary search == expensive" almost as a knee jerk
> reaction, because classical thinking is that binary searches are
> expensive for systems with caches.  Have you considered how many
> memblocks you end up with on a normal system?
> 
> How expensive is a binary search across one element?  Two elements?
> Four elements?  In the very very rare case (there's only one platform)
> eight elements?
> 
> For one element, it's the same as a linear search - we only have to
> look at one element and confirm whether the pointer is within range.
> Same for two - we check one and check the other.  As memblock is array
> based, both blocks share the same cache line.
> 
> For four, it means we look at most three elements, at least two of
> which share a cache line.  In terms of cache line loading, it's no
> more expensive than a linear search.  In terms of CPU cycles, it's
> a win because we don't need to expend cycles looking at the fourth
> element.
> 
> For eight (which is starting to get into "rare" territory), it's three
> cache lines and four elements, versus a linear search which can touch
> up to four cache lines and obviously all eight elements.
> 
> Now, bear in mind that the normal case is one, there's a number with
> two, four starts to become rare, and eight is almost non-existent...

Sure, but it's going to be notably more expensive than what we currently
have. The question then is: does this code occur frequently (i.e. in a loop)
on some hot path?

Turning to grep, the answer seems to be "no", so I'll stop complaining about
a problem that we don't have :)

Will


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11 10:44   ` Will Deacon
  2013-12-11 11:06     ` Russell King - ARM Linux
@ 2013-12-11 17:35     ` Laura Abbott
  1 sibling, 0 replies; 13+ messages in thread
From: Laura Abbott @ 2013-12-11 17:35 UTC (permalink / raw)
  To: linux-arm-kernel

On 12/11/2013 2:44 AM, Will Deacon wrote:
> On Wed, Dec 11, 2013 at 01:23:02AM +0000, Laura Abbott wrote:
>> virt_addr_valid is defined to return true if and only if
>> virt_to_page returns a valid pointer. The current definition of
>> virt_addr_valid only checks against the virtual address range.
>> There's no guarantee that just because a virtual address falls
>> between PAGE_OFFSET and high_memory the associated physical memory
>> has a valid backing struct page. Follow the example of other
>> architectures and convert to pfn_valid to verify that the virtual
>> address is actually valid.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Nicolas Pitre <nico@linaro.org>
>> Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
>> ---
>>   arch/arm64/include/asm/memory.h |    3 +--
>>   1 files changed, 1 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 3776217..9dc5dc3 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -146,8 +146,7 @@ static inline void *phys_to_virt(phys_addr_t x)
>>   #define ARCH_PFN_OFFSET		PHYS_PFN_OFFSET
>>
>>   #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
>> -#define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
>> -				 ((void *)(kaddr) < (void *)high_memory))
>> +#define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
>
> Hmm, this is pretty expensive on both arm and arm64, since we end up doing a
> binary search through all of the memblocks.
>
> Are you seeing real problems with the current code?
>

No, thankfully I'm not seeing actual problems at the moment. I found
this while looking at other code. It's also worth noting that almost
all other architectures use this same code as well, so I don't see
this as something that would be uniquely bad on ARM.

> Will
>

Laura


-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11 17:26       ` Will Deacon
@ 2013-12-11 21:13         ` Russell King - ARM Linux
  2013-12-12 17:57           ` Catalin Marinas
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2013-12-11 21:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Dec 11, 2013 at 05:26:35PM +0000, Will Deacon wrote:
> On Wed, Dec 11, 2013 at 11:06:18AM +0000, Russell King - ARM Linux wrote:
> > On Wed, Dec 11, 2013 at 10:44:29AM +0000, Will Deacon wrote:
> > > Hmm, this is pretty expensive on both arm and arm64, since we end up doing a
> > > binary search through all of the memblocks.
> > 
> > People say "binary search == expensive" almost as a knee jerk
> > reaction, because classical thinking is that binary searches are
> > expensive for systems with caches.  Have you considered how many
> > memblocks you end up with on a normal system?
> > 
> > How expensive is a binary search across one element?  Two elements?
> > Four elements?  In the very very rare case (there's only one platform)
> > eight elements?
> > 
> > For one element, it's the same as a linear search - we only have to
> > look at one element and confirm whether the pointer is within range.
> > Same for two - we check one and check the other.  As memblock is array
> > based, both blocks share the same cache line.
> > 
> > For four, it means we look at most three elements, at least two of
> > which share a cache line.  In terms of cache line loading, it's no
> > more expensive than a linear search.  In terms of CPU cycles, it's
> > a win because we don't need to expend cycles looking at the fourth
> > element.
> > 
> > For eight (which is starting to get into "rare" territory), it's three
> > cache lines and four elements, versus a linear search which can touch
> > up to four cache lines and obviously all eight elements.
> > 
> > Now, bear in mind that the normal case is one, there's a number with
> > two, four starts to become rare, and eight is almost non-existent...
> 
> Sure, but it's going to be notably more expensive than what we currently
> have. The question then is: does this code occur frequently (i.e. in a loop)
> on some hot path?
> 
> Turning to grep, the answer seems to be "no", so I'll stop complaining about
> a problem that we don't have :)

There is actually a concern here, and that's if the v:p translation isn't
linear, could it return false results?

According to my grep skills, we have one platform where this is true -
Realview:

 * 256MB @ 0x00000000 -> PAGE_OFFSET
 * 512MB @ 0x20000000 -> PAGE_OFFSET + 0x10000000
 * 256MB @ 0x80000000 -> PAGE_OFFSET + 0x30000000

The v:p translation is done via:

         ((virt) >= PAGE_OFFSET2 ? (virt) - PAGE_OFFSET2 + 0x80000000 : \
          (virt) >= PAGE_OFFSET1 ? (virt) - PAGE_OFFSET1 + 0x20000000 : \
          (virt) - PAGE_OFFSET)

Now the questions - what do values below PAGE_OFFSET give us?  Very
large numbers, which pfn_valid() should return false for.  What about
values > PAGE_OFFSET2 + 256MB?  The same.

So this all _looks_ fine.  Wait a moment, what about highmem?  Let's say
that the last 256MB is only available as highmem, and let's go back to
Laura's patch:

old:
#define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
				 ((void *)(kaddr) < (void *)high_memory))
new:
#define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

The former _excludes_ highmem, but the latter _includes_ it.

virt_addr_valid(v) should only ever return _true_ for the lowmem area,
never anywhere else - that's part of its point.  It's there to answer
the question "is this a valid virtual pointer which I can dereference".

So... We actually need a combination of both of these tests.


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-11 21:13         ` Russell King - ARM Linux
@ 2013-12-12 17:57           ` Catalin Marinas
  2013-12-12 18:02             ` Russell King - ARM Linux
  0 siblings, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2013-12-12 17:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Dec 11, 2013 at 09:13:33PM +0000, Russell King - ARM Linux wrote:
> There is actually a concern here, and that's if the v:p translation isn't
> linear, could it return false results?
> 
> According to my grep skills, we have one platform where this is true -
> Realview:
> 
>  * 256MB @ 0x00000000 -> PAGE_OFFSET
>  * 512MB @ 0x20000000 -> PAGE_OFFSET + 0x10000000
>  * 256MB @ 0x80000000 -> PAGE_OFFSET + 0x30000000
> 
> The v:p translation is done via:
> 
>          ((virt) >= PAGE_OFFSET2 ? (virt) - PAGE_OFFSET2 + 0x80000000 : \
>           (virt) >= PAGE_OFFSET1 ? (virt) - PAGE_OFFSET1 + 0x20000000 : \
>           (virt) - PAGE_OFFSET)
> 
> Now the questions - what do values below PAGE_OFFSET give us?  Very
> large numbers, which pfn_valid() should return false for.  What about
> values > PAGE_OFFSET2 + 256MB?  The same.
> 
> So this all _looks_ fine.  Wait a moment, what about highmem?  Let's say
> that the last 256MB is only available as highmem, and let's go back to
> Laura's patch:
> 
> old:
> #define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> 				 ((void *)(kaddr) < (void *)high_memory))
> new:
> #define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> 
> The former _excludes_ highmem, but the latter _includes_ it.
> 
> virt_addr_valid(v) should only ever return _true_ for the lowmem area,
> never anywhere else - that's part of its point.  It's there to answer
> the question "is this a valid virtual pointer which I can dereference".
> 
> So... We actually need a combination of both of these tests.

Just to avoid any confusion, on arm64 we don't have non-linear v:p
translation as there is plenty of VA space to live with holes. So the
original patch is fine.

On 32-bit, it could be a problem if you have holes in the VA space, so
it needs both high_memory and pfn_valid checks.

-- 
Catalin


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-12 17:57           ` Catalin Marinas
@ 2013-12-12 18:02             ` Russell King - ARM Linux
  2013-12-12 22:09               ` Laura Abbott
  0 siblings, 1 reply; 13+ messages in thread
From: Russell King - ARM Linux @ 2013-12-12 18:02 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Dec 12, 2013 at 05:57:54PM +0000, Catalin Marinas wrote:
> On Wed, Dec 11, 2013 at 09:13:33PM +0000, Russell King - ARM Linux wrote:
> > There is actually a concern here, and that's if the v:p translation isn't
> > linear, could it return false results?
> > 
> > According to my grep skills, we have one platform where this is true -
> > Realview:
> > 
> >  * 256MB @ 0x00000000 -> PAGE_OFFSET
> >  * 512MB @ 0x20000000 -> PAGE_OFFSET + 0x10000000
> >  * 256MB @ 0x80000000 -> PAGE_OFFSET + 0x30000000
> > 
> > The v:p translation is done via:
> > 
> >          ((virt) >= PAGE_OFFSET2 ? (virt) - PAGE_OFFSET2 + 0x80000000 : \
> >           (virt) >= PAGE_OFFSET1 ? (virt) - PAGE_OFFSET1 + 0x20000000 : \
> >           (virt) - PAGE_OFFSET)
> > 
> > Now the questions - what do values below PAGE_OFFSET give us?  Very
> > large numbers, which pfn_valid() should return false for.  What about
> > values > PAGE_OFFSET2 + 256MB?  The same.
> > 
> > So this all _looks_ fine.  Wait a moment, what about highmem?  Let's say
> > that the last 256MB is only available as highmem, and let's go back to
> > Laura's patch:
> > 
> > old:
> > #define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> > 				 ((void *)(kaddr) < (void *)high_memory))
> > new:
> > #define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> > 
> > The former _excludes_ highmem, but the latter _includes_ it.
> > 
> > virt_addr_valid(v) should only ever return _true_ for the lowmem area,
> > never anywhere else - that's part of its point.  It's there to answer
> > the question "is this a valid virtual pointer which I can dereference".
> > 
> > So... We actually need a combination of both of these tests.
> 
> Just to avoid any confusion, on arm64 we don't have non-linear v:p
> translation as there is plenty of VA space to live with holes. So the
> original patch is fine.

The point I make above actually has nothing to do with non-linear v:p
translations.


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-12 18:02             ` Russell King - ARM Linux
@ 2013-12-12 22:09               ` Laura Abbott
  2013-12-13 11:57                 ` Catalin Marinas
  0 siblings, 1 reply; 13+ messages in thread
From: Laura Abbott @ 2013-12-12 22:09 UTC (permalink / raw)
  To: linux-arm-kernel

On 12/12/2013 10:02 AM, Russell King - ARM Linux wrote:
> On Thu, Dec 12, 2013 at 05:57:54PM +0000, Catalin Marinas wrote:
>> On Wed, Dec 11, 2013 at 09:13:33PM +0000, Russell King - ARM Linux wrote:
>>> There is actually a concern here, and that's if the v:p translation isn't
>>> linear, could it return false results?
>>>
>>> According to my grep skills, we have one platform where this is true -
>>> Realview:
>>>
>>>   * 256MB @ 0x00000000 -> PAGE_OFFSET
>>>   * 512MB @ 0x20000000 -> PAGE_OFFSET + 0x10000000
>>>   * 256MB @ 0x80000000 -> PAGE_OFFSET + 0x30000000
>>>
>>> The v:p translation is done via:
>>>
>>>           ((virt) >= PAGE_OFFSET2 ? (virt) - PAGE_OFFSET2 + 0x80000000 : \
>>>            (virt) >= PAGE_OFFSET1 ? (virt) - PAGE_OFFSET1 + 0x20000000 : \
>>>            (virt) - PAGE_OFFSET)
>>>
>>> Now the questions - what do values below PAGE_OFFSET give us?  Very
>>> large numbers, which pfn_valid() should return false for.  What about
>>> values > PAGE_OFFSET2 + 256MB?  The same.
>>>
>>> So this all _looks_ fine.  Wait a moment, what about highmem?  Let's say
>>> that the last 256MB is only available as highmem, and let's go back to
>>> Laura's patch:
>>>
>>> old:
>>> #define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
>>> 				 ((void *)(kaddr) < (void *)high_memory))
>>> new:
>>> #define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
>>>
>>> The former _excludes_ highmem, but the latter _includes_ it.
>>>
>>> virt_addr_valid(v) should only ever return _true_ for the lowmem area,
>>> never anywhere else - that's part of its point.  It's there to answer
>>> the question "is this a valid virtual pointer which I can dereference".
>>>
>>> So... We actually need a combination of both of these tests.
>>
>> Just to avoid any confusion, on arm64 we don't have non-linear v:p
>> translation as there is plenty of VA space to live with holes. So the
>> original patch is fine.
>
> The point I make above actually has nothing to do with non-linear v:p
> translations.
>

Yes, I believe the point was that if we call virt_addr_valid on a
non-direct-mapped address it should return false. We still need the
range check on arm64 systems as well to ensure this.

Laura

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-12 22:09               ` Laura Abbott
@ 2013-12-13 11:57                 ` Catalin Marinas
  2013-12-16 18:28                   ` Laura Abbott
  0 siblings, 1 reply; 13+ messages in thread
From: Catalin Marinas @ 2013-12-13 11:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Dec 12, 2013 at 10:09:05PM +0000, Laura Abbott wrote:
> On 12/12/2013 10:02 AM, Russell King - ARM Linux wrote:
> > On Thu, Dec 12, 2013 at 05:57:54PM +0000, Catalin Marinas wrote:
> >> On Wed, Dec 11, 2013 at 09:13:33PM +0000, Russell King - ARM Linux wrote:
> >>> There is actually a concern here, and that's if the v:p translation isn't
> >>> linear, could it return false results?
> >>>
> >>> According to my grep skills, we have one platform where this is true -
> >>> Realview:
> >>>
> >>>   * 256MB @ 0x00000000 -> PAGE_OFFSET
> >>>   * 512MB @ 0x20000000 -> PAGE_OFFSET + 0x10000000
> >>>   * 256MB @ 0x80000000 -> PAGE_OFFSET + 0x30000000
> >>>
> >>> The v:p translation is done via:
> >>>
> >>>           ((virt) >= PAGE_OFFSET2 ? (virt) - PAGE_OFFSET2 + 0x80000000 : \
> >>>            (virt) >= PAGE_OFFSET1 ? (virt) - PAGE_OFFSET1 + 0x20000000 : \
> >>>            (virt) - PAGE_OFFSET)
> >>>
> >>> Now the questions - what do values below PAGE_OFFSET give us?  Very
> >>> large numbers, which pfn_valid() should return false for.  What about
> >>> values > PAGE_OFFSET2 + 256MB?  The same.
> >>>
> >>> So this all _looks_ fine.  Wait a moment, what about highmem?  Let's say
> >>> that the last 256MB is only available as highmem, and let's go back to
> >>> Laura's patch:
> >>>
> >>> old:
> >>> #define	virt_addr_valid(kaddr)	(((void *)(kaddr) >= (void *)PAGE_OFFSET) && \
> >>> 				 ((void *)(kaddr) < (void *)high_memory))
> >>> new:
> >>> #define	virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
> >>>
> >>> The former _excludes_ highmem, but the latter _includes_ it.
> >>>
> >>> virt_addr_valid(v) should only ever return _true_ for the lowmem area,
> >>> never anywhere else - that's part of its point.  It's there to answer
> >>> the question "is this a valid virtual pointer which I can dereference".
> >>>
> >>> So... We actually need a combination of both of these tests.
> >>
> >> Just to avoid any confusion, on arm64 we don't have non-linear v:p
> >> translation as there is plenty of VA space to live with holes. So the
> >> original patch is fine.
> >
> > The point I make above actually has nothing to do with non-linear v:p
> > translations.

OK, I re-read it now.

> Yes, I believe the point was that if we call virt_addr_valid on a
> non-direct-mapped address it should return false. We still need the
> range check on arm64 systems as well to ensure this.

On arm64 we don't have highmem, so all RAM would be directly mapped (and
linear). Is there a case on a 64-bit architecture where pfn_valid() is
true but the memory not mapped? We don't unmap any memory which is
pfn_valid().

-- 
Catalin


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-13 11:57                 ` Catalin Marinas
@ 2013-12-16 18:28                   ` Laura Abbott
  2013-12-17 14:19                     ` Catalin Marinas
  0 siblings, 1 reply; 13+ messages in thread
From: Laura Abbott @ 2013-12-16 18:28 UTC (permalink / raw)
  To: linux-arm-kernel

On 12/13/2013 3:57 AM, Catalin Marinas wrote:
>
> OK, I re-read it now.
>
>> Yes, I believe the point was that if we call virt_addr_valid on a
>> non-direct-mapped address it should return false. We still need the
>> range check on arm64 systems as well to ensure this.
>
> On arm64 we don't have highmem, so all RAM would be directly mapped (and
> linear). Is there a case on a 64-bit architecture where pfn_valid() is
> true but the memory not mapped? We don't unmap any memory which is
> pfn_valid().
>

We don't have highmem but we still have a vmalloc region. Calling
virt_to_page on a vmalloc address will not give a valid page, so
virt_addr_valid should return false on anything in the vmalloc region.

Thanks,
Laura

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation


* [PATCH] arm64: Correct virt_addr_valid
  2013-12-16 18:28                   ` Laura Abbott
@ 2013-12-17 14:19                     ` Catalin Marinas
  0 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2013-12-17 14:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Dec 16, 2013 at 06:28:45PM +0000, Laura Abbott wrote:
> On 12/13/2013 3:57 AM, Catalin Marinas wrote:
> > OK, I re-read it now.
> >
> >> Yes, I believe the point was that if we call virt_addr_valid on a
> >> not-direct-mapped address it should return false. We still need the
> >> range check on arm64 systems as well to ensure this.
> >
> > On arm64 we don't have highmem, so all RAM would be directly mapped (and
> > linear). Is there a case on a 64-bit architecture where pfn_valid() is
> > true but the memory not mapped? We don't unmap any memory which is
> > pfn_valid().
> >
> 
> We don't have highmem but we still have a vmalloc region. Calling 
> virt_to_page on a vmalloc address will not give a valid page so 
> virt_addr_valid should return false on anything in the vmalloc region.

We are talking about pfn_valid(__pa(addr) >> PAGE_SHIFT). The __pa()
cannot be used on vmalloc addresses, I agree, but from a practical point
of view it doesn't return a valid memory address. I can't think of a
scenario where it would (and if it does, your v2 patch should first
check the range before invoking __pa()).

-- 
Catalin


end of thread, other threads:[~2013-12-17 14:19 UTC | newest]

Thread overview: 13+ messages
2013-12-11  1:23 [PATCH] arm: Correct virt_addr_valid Laura Abbott
2013-12-11  1:23 ` [PATCH] arm64: " Laura Abbott
2013-12-11 10:44   ` Will Deacon
2013-12-11 11:06     ` Russell King - ARM Linux
2013-12-11 17:26       ` Will Deacon
2013-12-11 21:13         ` Russell King - ARM Linux
2013-12-12 17:57           ` Catalin Marinas
2013-12-12 18:02             ` Russell King - ARM Linux
2013-12-12 22:09               ` Laura Abbott
2013-12-13 11:57                 ` Catalin Marinas
2013-12-16 18:28                   ` Laura Abbott
2013-12-17 14:19                     ` Catalin Marinas
2013-12-11 17:35     ` Laura Abbott
