linux-mm.kvack.org archive mirror
* [PATCH 0/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
@ 2021-01-29  7:39 Anshuman Khandual
  2021-01-29  7:39 ` [PATCH 1/2] " Anshuman Khandual
  2021-01-29  7:39 ` [PATCH 2/2] arm64/mm: Reorganize pfn_valid() Anshuman Khandual
  0 siblings, 2 replies; 6+ messages in thread
From: Anshuman Khandual @ 2021-01-29  7:39 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-mm
  Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Mark Rutland, James Morse, Robin Murphy, Jérôme Glisse,
	Dan Williams, David Hildenbrand, Mike Rapoport

This series fixes pfn_valid() for ZONE_DEVICE based memory and also improves
its performance for normal hotplug memory. While here, it also reorganizes
pfn_valid() when CONFIG_SPARSEMEM is enabled. The series is based on v5.11-rc5.

Question - should pfn_section_valid() be tested for boot memory as well,
and not just for non-boot memory?
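
For context, pfn_section_valid() consults the per-section subsection map.
With CONFIG_SPARSEMEM_VMEMMAP it is defined in include/linux/mmzone.h
roughly as follows (paraphrased from v5.11; without VMEMMAP it simply
returns 1):

static inline int pfn_section_valid(struct mem_section *ms, unsigned long pfn)
{
	int idx = subsection_map_index(pfn);

	/* The pfn is valid only if its subsection map bit is set */
	return test_bit(idx, ms->usage->subsection_map);
}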

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Jérôme Glisse <jglisse@redhat.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Mike Rapoport <rppt@linux.ibm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org

Changes in V1:

- Test pfn_section_valid() for non-boot memory

Changes in RFC:

https://lore.kernel.org/linux-arm-kernel/1608621144-4001-1-git-send-email-anshuman.khandual@arm.com/

Anshuman Khandual (2):
  arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
  arm64/mm: Reorganize pfn_valid()

 arch/arm64/mm/init.c | 46 +++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 41 insertions(+), 5 deletions(-)

-- 
2.20.1




* [PATCH 1/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
  2021-01-29  7:39 [PATCH 0/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory Anshuman Khandual
@ 2021-01-29  7:39 ` Anshuman Khandual
  2021-01-29  9:58   ` David Hildenbrand
  2021-01-29  7:39 ` [PATCH 2/2] arm64/mm: Reorganize pfn_valid() Anshuman Khandual
  1 sibling, 1 reply; 6+ messages in thread
From: Anshuman Khandual @ 2021-01-29  7:39 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-mm
  Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Mark Rutland, James Morse, Robin Murphy, Jérôme Glisse,
	Dan Williams, David Hildenbrand, Mike Rapoport

pfn_valid() validates a pfn, but what it essentially checks is whether
there is a valid struct page backing that pfn. It should always return
true for memory ranges backed with a struct page mapping. But currently
pfn_valid() fails for all ZONE_DEVICE based memory types, even though
they do have a struct page mapping.

pfn_valid() asserts that there is a memblock entry for a given pfn
without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE
based memory is that it does not have memblock entries. Hence
memblock_is_map_memory() will invariably fail via memblock_search() for
a ZONE_DEVICE based address, which in turn makes pfn_valid() fail. That
is wrong, so memblock_is_map_memory() needs to be skipped for such
memory ranges. As ZONE_DEVICE memory gets hotplugged into the system
via memremap_pages() called from a driver, its memory sections will not
have SECTION_IS_EARLY set.
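
For reference, memblock_is_map_memory() boils down to the following
(roughly, as of v5.11), which shows why an address without any memblock
entry can never pass the check:

bool memblock_is_map_memory(phys_addr_t addr)
{
	int i = memblock_search(&memblock.memory, addr);

	/* No memblock entry covers addr - the ZONE_DEVICE case */
	if (i == -1)
		return false;
	return !memblock_is_nomap(&memblock.memory.regions[i]);
}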

Normal hotplug memory will never have MEMBLOCK_NOMAP set in its
memblock regions either, because that flag is specifically meant for
firmware-reserved memory regions. memblock_is_map_memory() can thus be
skipped for it as well, since the call is always going to succeed,
which is an optimization for normal hotplug memory. Like ZONE_DEVICE
based memory, normal hotplugged memory will also not have
SECTION_IS_EARLY set for its sections.

Skipping memblock_is_map_memory() for all non early memory sections
thus fixes the pfn_valid() problem for ZONE_DEVICE based memory and
also improves its performance for normal hotplug memory.
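
For reference, early_section() is a simple flag test on the section's
section_mem_map word, roughly (paraphrased from include/linux/mmzone.h,
v5.11):

static inline int early_section(struct mem_section *section)
{
	/* Set only for sections created from memblock at boot */
	return (section && (section->section_mem_map & SECTION_IS_EARLY));
}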

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Fixes: 73b20c84d42d ("arm64: mm: implement pte_devmap support")
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/init.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 709d98fea90c..1141075e4d53 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -230,6 +230,18 @@ int pfn_valid(unsigned long pfn)
 
 	if (!valid_section(__pfn_to_section(pfn)))
 		return 0;
+
+	/*
+	 * ZONE_DEVICE memory does not have memblock entries, so any
+	 * memblock_is_map_memory() check for a ZONE_DEVICE based
+	 * address will always fail. Normal hotplugged memory never
+	 * has the MEMBLOCK_NOMAP flag set in its memblock entries
+	 * either. Hence skip the memblock search for all non early
+	 * memory sections covering all of hotplug memory including
+	 * both normal and ZONE_DEVICE based.
+	 */
+	if (!early_section(__pfn_to_section(pfn)))
+		return pfn_section_valid(__pfn_to_section(pfn), pfn);
 #endif
 	return memblock_is_map_memory(addr);
 }
-- 
2.20.1




* [PATCH 2/2] arm64/mm: Reorganize pfn_valid()
  2021-01-29  7:39 [PATCH 0/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory Anshuman Khandual
  2021-01-29  7:39 ` [PATCH 1/2] " Anshuman Khandual
@ 2021-01-29  7:39 ` Anshuman Khandual
  2021-01-29 10:07   ` David Hildenbrand
  1 sibling, 1 reply; 6+ messages in thread
From: Anshuman Khandual @ 2021-01-29  7:39 UTC (permalink / raw)
  To: linux-arm-kernel, linux-kernel, linux-mm
  Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Ard Biesheuvel,
	Mark Rutland, James Morse, Robin Murphy, Jérôme Glisse,
	Dan Williams, David Hildenbrand, Mike Rapoport

pfn_valid() ends up calling pfn_to_section_nr() and __pfn_to_section()
multiple times when CONFIG_SPARSEMEM is enabled. These repeated lookups
can be avoided if the memory section is fetched once, earlier. Hence
bifurcate pfn_valid() into two different definitions depending on
whether CONFIG_SPARSEMEM is enabled. Also replace the open-coded
pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn](). This does
not cause any functional change.
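
As a concrete illustration of the false positive that the round trip
check below guards against (a made-up example, assuming 4K pages, i.e.
PAGE_SHIFT == 12, and a 64-bit pfn):

	unsigned long pfn = 0xf000000000000001UL; /* bogus upper bits set */
	phys_addr_t addr = __pfn_to_phys(pfn);    /* == 0x1000, upper bits shifted out */

	/* __phys_to_pfn(addr) == 0x1 != pfn, so pfn_valid() must return 0 */

Without the check such a pfn would alias pfn 0x1 and could wrongly pass
memblock_is_map_memory().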

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1141075e4d53..09adca90c57a 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
 	free_area_init(max_zone_pfns);
 }
 
+#ifdef CONFIG_SPARSEMEM
 int pfn_valid(unsigned long pfn)
 {
-	phys_addr_t addr = pfn << PAGE_SHIFT;
+	struct mem_section *ms = __pfn_to_section(pfn);
+	phys_addr_t addr = __pfn_to_phys(pfn);
 
-	if ((addr >> PAGE_SHIFT) != pfn)
+	/*
+	 * Ensure the upper PAGE_SHIFT bits are clear in the
+	 * pfn. Else it might lead to false positives when
+	 * some of the upper bits are set, but the lower bits
+	 * match a valid pfn.
+	 */
+	if (__phys_to_pfn(addr) != pfn)
 		return 0;
 
-#ifdef CONFIG_SPARSEMEM
 	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
 		return 0;
 
-	if (!valid_section(__pfn_to_section(pfn)))
+	if (!valid_section(ms))
 		return 0;
 
 	/*
@@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
 	 * memory sections covering all of hotplug memory including
 	 * both normal and ZONE_DEVICE based.
 	 */
-	if (!early_section(__pfn_to_section(pfn)))
-		return pfn_section_valid(__pfn_to_section(pfn), pfn);
-#endif
+	if (!early_section(ms))
+		return pfn_section_valid(ms, pfn);
+
 	return memblock_is_map_memory(addr);
 }
+#else
+int pfn_valid(unsigned long pfn)
+{
+	phys_addr_t addr = __pfn_to_phys(pfn);
+
+	/*
+	 * Ensure the upper PAGE_SHIFT bits are clear in the
+	 * pfn. Else it might lead to false positives when
+	 * some of the upper bits are set, but the lower bits
+	 * match a valid pfn.
+	 */
+	if (__phys_to_pfn(addr) != pfn)
+		return 0;
+
+	return memblock_is_map_memory(addr);
+}
+#endif
 EXPORT_SYMBOL(pfn_valid);
 
 static phys_addr_t memory_limit = PHYS_ADDR_MAX;
-- 
2.20.1




* Re: [PATCH 1/2] arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
  2021-01-29  7:39 ` [PATCH 1/2] " Anshuman Khandual
@ 2021-01-29  9:58   ` David Hildenbrand
  0 siblings, 0 replies; 6+ messages in thread
From: David Hildenbrand @ 2021-01-29  9:58 UTC (permalink / raw)
  To: Anshuman Khandual, linux-arm-kernel, linux-kernel, linux-mm
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland,
	James Morse, Robin Murphy, Jérôme Glisse, Dan Williams,
	Mike Rapoport

On 29.01.21 08:39, Anshuman Khandual wrote:
> pfn_valid() validates a pfn, but what it essentially checks is whether
> there is a valid struct page backing that pfn. It should always return
> true for memory ranges backed with a struct page mapping. But currently
> pfn_valid() fails for all ZONE_DEVICE based memory types, even though
> they do have a struct page mapping.
> 
> pfn_valid() asserts that there is a memblock entry for a given pfn
> without the MEMBLOCK_NOMAP flag being set. The problem with ZONE_DEVICE
> based memory is that it does not have memblock entries. Hence
> memblock_is_map_memory() will invariably fail via memblock_search() for
> a ZONE_DEVICE based address, which in turn makes pfn_valid() fail. That
> is wrong, so memblock_is_map_memory() needs to be skipped for such
> memory ranges. As ZONE_DEVICE memory gets hotplugged into the system
> via memremap_pages() called from a driver, its memory sections will not
> have SECTION_IS_EARLY set.
> 
> Normal hotplug memory will never have MEMBLOCK_NOMAP set in its
> memblock regions either, because that flag is specifically meant for
> firmware-reserved memory regions. memblock_is_map_memory() can thus be
> skipped for it as well, since the call is always going to succeed,
> which is an optimization for normal hotplug memory. Like ZONE_DEVICE
> based memory, normal hotplugged memory will also not have
> SECTION_IS_EARLY set for its sections.
> 
> Skipping memblock_is_map_memory() for all non early memory sections
> thus fixes the pfn_valid() problem for ZONE_DEVICE based memory and
> also improves its performance for normal hotplug memory.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Fixes: 73b20c84d42d ("arm64: mm: implement pte_devmap support")
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   arch/arm64/mm/init.c | 12 ++++++++++++
>   1 file changed, 12 insertions(+)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 709d98fea90c..1141075e4d53 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -230,6 +230,18 @@ int pfn_valid(unsigned long pfn)
>   
>   	if (!valid_section(__pfn_to_section(pfn)))
>   		return 0;
> +
> +	/*
> +	 * ZONE_DEVICE memory does not have memblock entries, so any
> +	 * memblock_is_map_memory() check for a ZONE_DEVICE based
> +	 * address will always fail. Normal hotplugged memory never
> +	 * has the MEMBLOCK_NOMAP flag set in its memblock entries
> +	 * either. Hence skip the memblock search for all non early
> +	 * memory sections covering all of hotplug memory including
> +	 * both normal and ZONE_DEVICE based.
> +	 */
> +	if (!early_section(__pfn_to_section(pfn)))
> +		return pfn_section_valid(__pfn_to_section(pfn), pfn);
>   #endif
>   	return memblock_is_map_memory(addr);
>   }
> 

LGTM

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb




* Re: [PATCH 2/2] arm64/mm: Reorganize pfn_valid()
  2021-01-29  7:39 ` [PATCH 2/2] arm64/mm: Reorganize pfn_valid() Anshuman Khandual
@ 2021-01-29 10:07   ` David Hildenbrand
  2021-02-01  3:47     ` Anshuman Khandual
  0 siblings, 1 reply; 6+ messages in thread
From: David Hildenbrand @ 2021-01-29 10:07 UTC (permalink / raw)
  To: Anshuman Khandual, linux-arm-kernel, linux-kernel, linux-mm
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland,
	James Morse, Robin Murphy, Jérôme Glisse, Dan Williams,
	Mike Rapoport

On 29.01.21 08:39, Anshuman Khandual wrote:
> pfn_valid() ends up calling pfn_to_section_nr() and __pfn_to_section()
> multiple times when CONFIG_SPARSEMEM is enabled. These repeated lookups
> can be avoided if the memory section is fetched once, earlier. Hence
> bifurcate pfn_valid() into two different definitions depending on
> whether CONFIG_SPARSEMEM is enabled. Also replace the open-coded
> pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn](). This does
> not cause any functional change.
> 
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
>   arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
>   1 file changed, 31 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 1141075e4d53..09adca90c57a 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>   	free_area_init(max_zone_pfns);
>   }
>   
> +#ifdef CONFIG_SPARSEMEM
>   int pfn_valid(unsigned long pfn)
>   {
> -	phys_addr_t addr = pfn << PAGE_SHIFT;
> +	struct mem_section *ms = __pfn_to_section(pfn);
> +	phys_addr_t addr = __pfn_to_phys(pfn);

I'd just use PFN_PHYS() here, which is more frequently used in the kernel.
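
(For reference, both spellings should end up identical:
include/asm-generic/memory_model.h has roughly

	#define __pfn_to_phys(pfn)	PFN_PHYS(pfn)
	#define __phys_to_pfn(paddr)	PHYS_PFN(paddr)

with include/linux/pfn.h defining, roughly,

	#define PFN_PHYS(x)	((phys_addr_t)(x) << PAGE_SHIFT)
	#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

so the switch is purely cosmetic.)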

>   
> -	if ((addr >> PAGE_SHIFT) != pfn)
> +	/*
> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
> +	 * pfn. Else it might lead to false positives when
> +	 * some of the upper bits are set, but the lower bits
> +	 * match a valid pfn.
> +	 */
> +	if (__phys_to_pfn(addr) != pfn)

and here PHYS_PFN(). Comment is helpful. :)

>   		return 0;
>   
> -#ifdef CONFIG_SPARSEMEM
>   	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>   		return 0;
>   
> -	if (!valid_section(__pfn_to_section(pfn)))
> +	if (!valid_section(ms))
>   		return 0;
>   
>   	/*
> @@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
>   	 * memory sections covering all of hotplug memory including
>   	 * both normal and ZONE_DEVICE based.
>   	 */
> -	if (!early_section(__pfn_to_section(pfn)))
> -		return pfn_section_valid(__pfn_to_section(pfn), pfn);
> -#endif
> +	if (!early_section(ms))
> +		return pfn_section_valid(ms, pfn);
> +
>   	return memblock_is_map_memory(addr);
>   }
> +#else
> +int pfn_valid(unsigned long pfn)
> +{
> +	phys_addr_t addr = __pfn_to_phys(pfn);
> +
> +	/*
> +	 * Ensure the upper PAGE_SHIFT bits are clear in the
> +	 * pfn. Else it might lead to false positives when
> +	 * some of the upper bits are set, but the lower bits
> +	 * match a valid pfn.
> +	 */
> +	if (__phys_to_pfn(addr) != pfn)
> +		return 0;
> +
> +	return memblock_is_map_memory(addr);
> +}


I think you can avoid duplicating the code by doing something like:


phys_addr_t addr = PFN_PHYS(pfn);

if (PHYS_PFN(addr) != pfn)
	return 0;

#ifdef CONFIG_SPARSEMEM
{
	struct mem_section *ms = __pfn_to_section(pfn);

	if (!valid_section(ms))
		return 0;
	if (!early_section(ms))
		return pfn_section_valid(ms, pfn);
}
#endif
return memblock_is_map_memory(addr);

-- 
Thanks,

David / dhildenb




* Re: [PATCH 2/2] arm64/mm: Reorganize pfn_valid()
  2021-01-29 10:07   ` David Hildenbrand
@ 2021-02-01  3:47     ` Anshuman Khandual
  0 siblings, 0 replies; 6+ messages in thread
From: Anshuman Khandual @ 2021-02-01  3:47 UTC (permalink / raw)
  To: David Hildenbrand, linux-arm-kernel, linux-kernel, linux-mm
  Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Mark Rutland,
	James Morse, Robin Murphy, Jérôme Glisse, Dan Williams,
	Mike Rapoport



On 1/29/21 3:37 PM, David Hildenbrand wrote:
> On 29.01.21 08:39, Anshuman Khandual wrote:
>> pfn_valid() ends up calling pfn_to_section_nr() and __pfn_to_section()
>> multiple times when CONFIG_SPARSEMEM is enabled. These repeated lookups
>> can be avoided if the memory section is fetched once, earlier. Hence
>> bifurcate pfn_valid() into two different definitions depending on
>> whether CONFIG_SPARSEMEM is enabled. Also replace the open-coded
>> pfn <--> addr conversion with __[pfn|phys]_to_[phys|pfn](). This does
>> not cause any functional change.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Ard Biesheuvel <ardb@kernel.org>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>>   arch/arm64/mm/init.c | 38 +++++++++++++++++++++++++++++++-------
>>   1 file changed, 31 insertions(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 1141075e4d53..09adca90c57a 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -217,18 +217,25 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
>>       free_area_init(max_zone_pfns);
>>   }
>>   +#ifdef CONFIG_SPARSEMEM
>>   int pfn_valid(unsigned long pfn)
>>   {
>> -    phys_addr_t addr = pfn << PAGE_SHIFT;
>> +    struct mem_section *ms = __pfn_to_section(pfn);
>> +    phys_addr_t addr = __pfn_to_phys(pfn);
> 
> I'd just use PFN_PHYS() here, which is more frequently used in the kernel.

Sure, will replace.

> 
>>   -    if ((addr >> PAGE_SHIFT) != pfn)
>> +    /*
>> +     * Ensure the upper PAGE_SHIFT bits are clear in the
>> +     * pfn. Else it might lead to false positives when
>> +     * some of the upper bits are set, but the lower bits
>> +     * match a valid pfn.
>> +     */
>> +    if (__phys_to_pfn(addr) != pfn)
> 
> and here PHYS_PFN(). Comment is helpful. :)

Sure, will replace.

> 
>>           return 0;
>>   -#ifdef CONFIG_SPARSEMEM
>>       if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
>>           return 0;
>>   -    if (!valid_section(__pfn_to_section(pfn)))
>> +    if (!valid_section(ms))
>>           return 0;
>>         /*
>> @@ -240,11 +247,28 @@ int pfn_valid(unsigned long pfn)
>>        * memory sections covering all of hotplug memory including
>>        * both normal and ZONE_DEVICE based.
>>        */
>> -    if (!early_section(__pfn_to_section(pfn)))
>> -        return pfn_section_valid(__pfn_to_section(pfn), pfn);
>> -#endif
>> +    if (!early_section(ms))
>> +        return pfn_section_valid(ms, pfn);
>> +
>>       return memblock_is_map_memory(addr);
>>   }
>> +#else
>> +int pfn_valid(unsigned long pfn)
>> +{
>> +    phys_addr_t addr = __pfn_to_phys(pfn);
>> +
>> +    /*
>> +     * Ensure the upper PAGE_SHIFT bits are clear in the
>> +     * pfn. Else it might lead to false positives when
>> +     * some of the upper bits are set, but the lower bits
>> +     * match a valid pfn.
>> +     */
>> +    if (__phys_to_pfn(addr) != pfn)
>> +        return 0;
>> +
>> +    return memblock_is_map_memory(addr);
>> +}
> 
> 
> I think you can avoid duplicating the code by doing something like:

Right, and this also looks more compact. I had initially thought about
it but was apprehensive about the style of an #ifdef { } #endif code
block inside the function. With this change, the resulting patch also
passes the checkpatch.pl test. Will make the change.

> 
> 
> phys_addr_t addr = PFN_PHYS(pfn);
> 
> if (PHYS_PFN(addr) != pfn)
>     return 0;
> 
> #ifdef CONFIG_SPARSEMEM
> {
>     struct mem_section *ms = __pfn_to_section(pfn);
> 
>     if (!valid_section(ms))
>         return 0;
>     if (!early_section(ms))
>         return pfn_section_valid(ms, pfn);
> }
> #endif
> return memblock_is_map_memory(addr);
> 


