* [PATCH] powerpc: Fix random segfault when freeing hugetlb range
@ 2020-08-31  7:58 Christophe Leroy
  2020-09-02  3:23 ` Aneesh Kumar K.V
  2020-09-17 11:27 ` Michael Ellerman
  0 siblings, 2 replies; 5+ messages in thread
From: Christophe Leroy @ 2020-08-31  7:58 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

The following random segfault is observed from time to time with
map_hugetlb selftest:

root@localhost:~# ./map_hugetlb 1 19
524288 kB hugepages
Mapping 1 Mbytes
Segmentation fault

[   31.219972] map_hugetlb[365]: segfault (11) at 117 nip 77974f8c lr 779a6834 code 1 in ld-2.23.so[77966000+21000]
[   31.220192] map_hugetlb[365]: code: 9421ffc0 480318d1 93410028 90010044 9361002c 93810030 93a10034 93c10038
[   31.220307] map_hugetlb[365]: code: 93e1003c 93210024 8123007c 81430038 <80e90004> 814a0004 7f443a14 813a0004
[   31.221911] BUG: Bad rss-counter state mm:(ptrval) type:MM_FILEPAGES val:33
[   31.229362] BUG: Bad rss-counter state mm:(ptrval) type:MM_ANONPAGES val:5

This fault is due to hugetlb_free_pgd_range() freeing page tables
that are also used by regular pages.

As explained in the comment at the beginning of
hugetlb_free_pgd_range(), the verification done in free_pgd_range()
on floor and ceiling is not done here, which means
hugetlb_free_pte_range() can free outside the expected range.

As the verification cannot be done in hugetlb_free_pgd_range(), it
must be done in hugetlb_free_pte_range().

Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
 arch/powerpc/mm/hugetlbpage.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 26292544630f..e7ae2a2c4545 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -330,10 +330,24 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
 				 get_hugepd_cache_index(pdshift - shift));
 }
 
-static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
+static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
+				   unsigned long addr, unsigned long end,
+				   unsigned long floor, unsigned long ceiling)
 {
+	unsigned long start = addr;
 	pgtable_t token = pmd_pgtable(*pmd);
 
+	start &= PMD_MASK;
+	if (start < floor)
+		return;
+	if (ceiling) {
+		ceiling &= PMD_MASK;
+		if (!ceiling)
+			return;
+	}
+	if (end - 1 > ceiling - 1)
+		return;
+
 	pmd_clear(pmd);
 	pte_free_tlb(tlb, token, addr);
 	mm_dec_nr_ptes(tlb->mm);
@@ -363,7 +377,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
 			 */
 			WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
 
-			hugetlb_free_pte_range(tlb, pmd, addr);
+			hugetlb_free_pte_range(tlb, pmd, addr, end, floor, ceiling);
 
 			continue;
 		}
-- 
2.25.0

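The guard added above follows the floor/ceiling pattern of free_pgd_range()
in mm/memory.c: a page-table page may only be freed when the whole region it
maps fits inside [floor, ceiling). Below is a minimal userspace model of the
check; the PMD geometry (4 MB per entry) and the sample addresses are
assumptions for illustration, not values taken from the patch.

/*
 * Standalone model of the check hugetlb_free_pte_range() gains above.
 * Assumed geometry: 4 MB per PMD entry (illustrative only).
 */
#include <stdbool.h>
#include <stdio.h>

#define PMD_SHIFT 22
#define PMD_SIZE  (1UL << PMD_SHIFT)
#define PMD_MASK  (~(PMD_SIZE - 1))

static bool can_free_pte_page(unsigned long addr, unsigned long end,
			      unsigned long floor, unsigned long ceiling)
{
	unsigned long start = addr & PMD_MASK;

	/* The PTE page also maps addresses below floor: keep it. */
	if (start < floor)
		return false;
	if (ceiling) {
		ceiling &= PMD_MASK;
		/* Rounding ceiling down wrapped it to 0: keep the page. */
		if (!ceiling)
			return false;
	}
	/*
	 * ceiling == 0 means "no upper limit"; subtracting 1 from both
	 * sides turns it into ~0UL so the unsigned compare still works.
	 */
	if (end - 1 > ceiling - 1)
		return false;
	return true;
}

int main(void)
{
	/* Huge mapping sharing its PTE page with ld.so pages below it. */
	printf("%d\n", can_free_pte_page(0x77a00000, 0x77c00000,
					 0x77966000, 0x77c00000)); /* 0: keep */
	/* PMD-sized region fully inside [floor, ceiling): freeable. */
	printf("%d\n", can_free_pte_page(0x78000000, 0x78400000,
					 0x77000000, 0x79000000)); /* 1: free */
	return 0;
}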


* Re: [PATCH] powerpc: Fix random segfault when freeing hugetlb range
  2020-08-31  7:58 [PATCH] powerpc: Fix random segfault when freeing hugetlb range Christophe Leroy
@ 2020-09-02  3:23 ` Aneesh Kumar K.V
  2020-09-02  8:11   ` Christophe Leroy
  2020-09-17 11:27 ` Michael Ellerman
  1 sibling, 1 reply; 5+ messages in thread
From: Aneesh Kumar K.V @ 2020-09-02  3:23 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@csgroup.eu> writes:

> The following random segfault is observed from time to time with
> map_hugetlb selftest:
>
> root@localhost:~# ./map_hugetlb 1 19
> 524288 kB hugepages
> Mapping 1 Mbytes
> Segmentation fault
>
> [   31.219972] map_hugetlb[365]: segfault (11) at 117 nip 77974f8c lr 779a6834 code 1 in ld-2.23.so[77966000+21000]
> [   31.220192] map_hugetlb[365]: code: 9421ffc0 480318d1 93410028 90010044 9361002c 93810030 93a10034 93c10038
> [   31.220307] map_hugetlb[365]: code: 93e1003c 93210024 8123007c 81430038 <80e90004> 814a0004 7f443a14 813a0004
> [   31.221911] BUG: Bad rss-counter state mm:(ptrval) type:MM_FILEPAGES val:33
> [   31.229362] BUG: Bad rss-counter state mm:(ptrval) type:MM_ANONPAGES val:5
>
> This fault is due to hugetlb_free_pgd_range() freeing page tables
> that are also used by regular pages.
>
> As explained in the comment at the beginning of
> hugetlb_free_pgd_range(), the verification done in free_pgd_range()
> on floor and ceiling is not done here, which means
> hugetlb_free_pte_range() can free outside the expected range.
>
> As the verification cannot be done in hugetlb_free_pgd_range(), it
> must be done in hugetlb_free_pte_range().
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Fixes: b250c8c08c79 ("powerpc/8xx: Manage 512k huge pages as standard pages.")
> Cc: stable@vger.kernel.org
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
>  arch/powerpc/mm/hugetlbpage.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 26292544630f..e7ae2a2c4545 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -330,10 +330,24 @@ static void free_hugepd_range(struct mmu_gather *tlb, hugepd_t *hpdp, int pdshif
>  				 get_hugepd_cache_index(pdshift - shift));
>  }
>  
> -static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
> +static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
> +				   unsigned long addr, unsigned long end,
> +				   unsigned long floor, unsigned long ceiling)
>  {
> +	unsigned long start = addr;
>  	pgtable_t token = pmd_pgtable(*pmd);
>  
> +	start &= PMD_MASK;
> +	if (start < floor)
> +		return;
> +	if (ceiling) {
> +		ceiling &= PMD_MASK;
> +		if (!ceiling)
> +			return;
> +	}
> +	if (end - 1 > ceiling - 1)
> +		return;
> +

We repeat that check in all the hugetlb_free_{pud,pmd,pte}_range() functions.
Can we consolidate it, with a comment explaining that we are checking whether
the pgtable entry maps outside the range?

>  	pmd_clear(pmd);
>  	pte_free_tlb(tlb, token, addr);
>  	mm_dec_nr_ptes(tlb->mm);
> @@ -363,7 +377,7 @@ static void hugetlb_free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
>  			 */
>  			WARN_ON(!IS_ENABLED(CONFIG_PPC_8xx));
>  
> -			hugetlb_free_pte_range(tlb, pmd, addr);
> +			hugetlb_free_pte_range(tlb, pmd, addr, end, floor, ceiling);
>  
>  			continue;
>  		}
> -- 
> 2.25.0


* Re: [PATCH] powerpc: Fix random segfault when freeing hugetlb range
  2020-09-02  3:23 ` Aneesh Kumar K.V
@ 2020-09-02  8:11   ` Christophe Leroy
  2020-09-02  8:15     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 5+ messages in thread
From: Christophe Leroy @ 2020-09-02  8:11 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel



On 02/09/2020 05:23, Aneesh Kumar K.V wrote:
> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> 
>> The following random segfault is observed from time to time with
>> map_hugetlb selftest:
>>
>> [...]
>>
>> -static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
>> +static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
>> +				   unsigned long addr, unsigned long end,
>> +				   unsigned long floor, unsigned long ceiling)
>>   {
>> +	unsigned long start = addr;
>>   	pgtable_t token = pmd_pgtable(*pmd);
>>   
>> +	start &= PMD_MASK;
>> +	if (start < floor)
>> +		return;
>> +	if (ceiling) {
>> +		ceiling &= PMD_MASK;
>> +		if (!ceiling)
>> +			return;
>> +	}
>> +	if (end - 1 > ceiling - 1)
>> +		return;
>> +
> 
> We repeat that check in all the hugetlb_free_{pud,pmd,pte}_range() functions.
> Can we consolidate it, with a comment explaining that we are checking whether
> the pgtable entry maps outside the range?

I was thinking about refactoring that into a helper and adding all the
necessary comments to explain what it does.

Will do that in a followup series if you are OK. This patch is a bug fix
and also has to go through stable.

Christophe

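A sketch of what such a helper could look like (the name, signature and bool
return below are assumptions of this sketch, not the actual follow-up
series): one level-agnostic predicate taking the level's mask, so the same
test can serve the pte, pmd and pud variants.

/*
 * Hypothetical consolidated check, illustrative only; mask would be
 * PMD_MASK, PUD_MASK or PGDIR_MASK depending on the caller's level.
 * True when the page-table page spanning addr lies entirely inside
 * [floor, ceiling), i.e. freeing it cannot drop entries that map
 * addresses outside the range being torn down.
 */
static bool hugetlb_range_freeable(unsigned long addr, unsigned long end,
				   unsigned long floor, unsigned long ceiling,
				   unsigned long mask)
{
	addr &= mask;
	if (addr < floor)
		return false;
	if (ceiling) {
		ceiling &= mask;
		if (!ceiling)
			return false;
	}
	/* ceiling == 0 means no upper limit; the -1 wraps it to ~0UL. */
	return end - 1 <= ceiling - 1;
}

hugetlb_free_pte_range() would then open with:
if (!hugetlb_range_freeable(addr, end, floor, ceiling, PMD_MASK)) return;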

* Re: [PATCH] powerpc: Fix random segfault when freeing hugetlb range
  2020-09-02  8:11   ` Christophe Leroy
@ 2020-09-02  8:15     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 5+ messages in thread
From: Aneesh Kumar K.V @ 2020-09-02  8:15 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

On 9/2/20 1:41 PM, Christophe Leroy wrote:
> 
> 
> On 02/09/2020 05:23, Aneesh Kumar K.V wrote:
>> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>>
>>> The following random segfault is observed from time to time with
>>> map_hugetlb selftest:
>>>
>>> [...]
>>>
>>> -static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd, unsigned long addr)
>>> +static void hugetlb_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
>>> +                   unsigned long addr, unsigned long end,
>>> +                   unsigned long floor, unsigned long ceiling)
>>>   {
>>> +    unsigned long start = addr;
>>>       pgtable_t token = pmd_pgtable(*pmd);
>>>
>>> +    start &= PMD_MASK;
>>> +    if (start < floor)
>>> +        return;
>>> +    if (ceiling) {
>>> +        ceiling &= PMD_MASK;
>>> +        if (!ceiling)
>>> +            return;
>>> +    }
>>> +    if (end - 1 > ceiling - 1)
>>> +        return;
>>> +
>>
>> We repeat that check in all the hugetlb_free_{pud,pmd,pte}_range() functions.
>> Can we consolidate it, with a comment explaining that we are checking whether
>> the pgtable entry maps outside the range?
> 
> I was thinking about refactoring that into a helper and adding all the
> necessary comments to explain what it does.
> 
> Will do that in a followup series if you are OK. This patch is a bug fix
> and also has to go through stable.
> 

agreed.

Thanks.
-aneesh


* Re: [PATCH] powerpc: Fix random segfault when freeing hugetlb range
  2020-08-31  7:58 [PATCH] powerpc: Fix random segfault when freeing hugetlb range Christophe Leroy
  2020-09-02  3:23 ` Aneesh Kumar K.V
@ 2020-09-17 11:27 ` Michael Ellerman
  1 sibling, 0 replies; 5+ messages in thread
From: Michael Ellerman @ 2020-09-17 11:27 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	Christophe Leroy
  Cc: linuxppc-dev, linux-kernel

On Mon, 31 Aug 2020 07:58:19 +0000 (UTC), Christophe Leroy wrote:
> The following random segfault is observed from time to time with
> map_hugetlb selftest:
> 
> root@localhost:~# ./map_hugetlb 1 19
> 524288 kB hugepages
> Mapping 1 Mbytes
> Segmentation fault
> 
> [...]

Applied to powerpc/next.

[1/1] powerpc: Fix random segfault when freeing hugetlb range
      https://git.kernel.org/powerpc/c/542db12a9c42d1ce70c45091765e02f74c129f43

cheers


