linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr()
@ 2021-12-24 11:07 Christophe Leroy
  2021-12-24 11:07 ` [PATCH v3 2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr() Christophe Leroy
  2022-02-16 12:25 ` [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr() Michael Ellerman
  0 siblings, 2 replies; 4+ messages in thread
From: Christophe Leroy @ 2021-12-24 11:07 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: Maxime Bizon, linuxppc-dev, linux-kernel

Commit 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
added a spin_lock() to change_page_attr() in order to safely perform
its three-step read/modify/write operation. Then commit 9f7853d7609d
("powerpc/mm: Fix set_memory_*() against concurrent accesses") modified
it to use pte_update() so that the operation is performed safely
against concurrent accesses.

In the meantime, Maxime reported a spinlock recursion.

[   15.351649] BUG: spinlock recursion on CPU#0, kworker/0:2/217
[   15.357540]  lock: init_mm+0x3c/0x420, .magic: dead4ead, .owner: kworker/0:2/217, .owner_cpu: 0
[   15.366563] CPU: 0 PID: 217 Comm: kworker/0:2 Not tainted 5.15.0+ #523
[   15.373350] Workqueue: events do_free_init
[   15.377615] Call Trace:
[   15.380232] [e4105ac0] [800946a4] do_raw_spin_lock+0xf8/0x120 (unreliable)
[   15.387340] [e4105ae0] [8001f4ec] change_page_attr+0x40/0x1d4
[   15.393413] [e4105b10] [801424e0] __apply_to_page_range+0x164/0x310
[   15.400009] [e4105b60] [80169620] free_pcp_prepare+0x1e4/0x4a0
[   15.406045] [e4105ba0] [8016c5a0] free_unref_page+0x40/0x2b8
[   15.411979] [e4105be0] [8018724c] kasan_depopulate_vmalloc_pte+0x6c/0x94
[   15.418989] [e4105c00] [801424e0] __apply_to_page_range+0x164/0x310
[   15.425451] [e4105c50] [80187834] kasan_release_vmalloc+0xbc/0x134
[   15.431898] [e4105c70] [8015f7a8] __purge_vmap_area_lazy+0x4e4/0xdd8
[   15.438560] [e4105d30] [80160d10] _vm_unmap_aliases.part.0+0x17c/0x24c
[   15.445283] [e4105d60] [801642d0] __vunmap+0x2f0/0x5c8
[   15.450684] [e4105db0] [800e32d0] do_free_init+0x68/0x94
[   15.456181] [e4105dd0] [8005d094] process_one_work+0x4bc/0x7b8
[   15.462283] [e4105e90] [8005d614] worker_thread+0x284/0x6e8
[   15.468227] [e4105f00] [8006aaec] kthread+0x1f0/0x210
[   15.473489] [e4105f40] [80017148] ret_from_kernel_thread+0x14/0x1c

Remove the read/modify/write sequence to make the operation atomic,
and remove the spin_lock() from change_page_attr().
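
In short, the change looks like this (simplified sketch of the
SET_MEMORY_RO case only; the exact masks, including the DIRTY bit
special case, are in the diff below):

	pte_t pte;

	/* Before: read / modify / write under init_mm.page_table_lock */
	spin_lock(&init_mm.page_table_lock);
	pte = ptep_get(ptep);
	pte = pte_wrprotect(pte);			/* modify a local copy */
	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);
	spin_unlock(&init_mm.page_table_lock);

	/* After: one atomic update, no lock needed */
	pte_update(&init_mm, addr, ptep,
		   _PAGE_KERNEL_RW & ~_PAGE_KERNEL_RO,	/* bits to clear */
		   _PAGE_KERNEL_RO & ~_PAGE_KERNEL_RW,	/* bits to set */
		   0);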

To do the operation atomically, the pte modification helpers can't be
used anymore. Because every platform has a different combination of
bits, it is not easy to use those bits directly. But all platforms
define the _PAGE_KERNEL_{RO/ROX/RW/RWX} sets of flags, and comparing
two of those sets is all that is needed to know which bits must be
cleared and which must be set.

For instance, comparing _PAGE_KERNEL_ROX with _PAGE_KERNEL_RO tells
which bits get cleared and which get set when changing exec
permission.
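
A worked example, using made-up bit values (illustration only; every
platform encodes these flags differently):

	/*
	 * Hypothetical encodings, for illustration only:
	 *   _PAGE_PRESENT = 0x001, _PAGE_EXEC = 0x004
	 *   _PAGE_KERNEL_RO  = _PAGE_PRESENT              = 0x001
	 *   _PAGE_KERNEL_ROX = _PAGE_PRESENT | _PAGE_EXEC = 0x005
	 *
	 * Removing exec permission (SET_MEMORY_NX): old = ROX, new = RO
	 *   clr = old & ~new = 0x005 & ~0x001 = 0x004  -> clear _PAGE_EXEC
	 *   set = new & ~old = 0x001 & ~0x005 = 0x000  -> nothing to set
	 *
	 * On a platform with inverted-polarity bits (e.g. a no-exec bit
	 * that must be set to forbid execution), the same arithmetic puts
	 * that bit in 'set' instead of 'clr', with no special casing.
	 */
	pte_update(&init_mm, addr, ptep, old & ~new, new & ~old, 0);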

Reported-by: Maxime Bizon <mbizon@freebox.fr>
Link: https://lore.kernel.org/all/20211212112152.GA27070@sakura/
Cc: Russell Currey <ruscur@russell.cc>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
v3: Use pte_update() directly instead of having a read / modify / write sequence
---
 arch/powerpc/mm/pageattr.c | 32 +++++++++++++-------------------
 1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index edea388e9d3f..8812454e70ff 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -15,12 +15,14 @@
 #include <asm/pgtable.h>
 
 
+static pte_basic_t pte_update_delta(pte_t *ptep, unsigned long addr,
+				    unsigned long old, unsigned long new)
+{
+	return pte_update(&init_mm, addr, ptep, old & ~new, new & ~old, 0);
+}
+
 /*
- * Updates the attributes of a page in three steps:
- *
- * 1. take the page_table_lock
- * 2. install the new entry with the updated attributes
- * 3. flush the TLB
+ * Updates the attributes of a page atomically.
  *
  * This sequence is safe against concurrent updates, and also allows updating the
  * attributes of a page currently being executed or accessed.
@@ -28,41 +30,33 @@
 static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
 {
 	long action = (long)data;
-	pte_t pte;
-
-	spin_lock(&init_mm.page_table_lock);
-
-	pte = ptep_get(ptep);
 
-	/* modify the PTE bits as desired, then apply */
+	/* modify the PTE bits as desired */
 	switch (action) {
 	case SET_MEMORY_RO:
-		pte = pte_wrprotect(pte);
+		/* Don't clear DIRTY bit */
+		pte_update_delta(ptep, addr, _PAGE_KERNEL_RW & ~_PAGE_DIRTY, _PAGE_KERNEL_RO);
 		break;
 	case SET_MEMORY_RW:
-		pte = pte_mkwrite(pte_mkdirty(pte));
+		pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_RW);
 		break;
 	case SET_MEMORY_NX:
-		pte = pte_exprotect(pte);
+		pte_update_delta(ptep, addr, _PAGE_KERNEL_ROX, _PAGE_KERNEL_RO);
 		break;
 	case SET_MEMORY_X:
-		pte = pte_mkexec(pte);
+		pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_ROX);
 		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
 	}
 
-	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);
-
 	/* See ptesync comment in radix__set_pte_at() */
 	if (radix_enabled())
 		asm volatile("ptesync": : :"memory");
 
 	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 
-	spin_unlock(&init_mm.page_table_lock);
-
 	return 0;
 }
 
-- 
2.33.1

* [PATCH v3 2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr()
  2021-12-24 11:07 [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr() Christophe Leroy
@ 2021-12-24 11:07 ` Christophe Leroy
  2022-01-19 12:28   ` Christophe Leroy
  2022-02-16 12:25 ` [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr() Michael Ellerman
  1 sibling, 1 reply; 4+ messages in thread
From: Christophe Leroy @ 2021-12-24 11:07 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, stable, Maxime Bizon, linuxppc-dev

set_memory_attr() was implemented by commit 4d1755b6a762 ("powerpc/mm:
implement set_memory_attr()") because the set_memory_xx() functions
couldn't be used at that time to modify memory "on the fly", as
explained in that commit.

But set_memory_attr() uses set_pte_at(), which triggers warnings when
CONFIG_DEBUG_VM is selected, because set_pte_at() is not meant for
updating existing page table entries.

The check could be bypassed by using __set_pte_at() instead, as was
the case before commit c988cfd38e48 ("powerpc/32: use
set_memory_attr()"). But since commit 9f7853d7609d ("powerpc/mm: Fix
set_memory_*() against concurrent accesses"), the set_memory_xx()
functions can be used to update page table entries "on the fly",
because the update is now atomic.

For DEBUG_PAGEALLOC we need to clear and set back _PAGE_PRESENT.
Add set_memory_np() and set_memory_p() for that.

Replace all uses of set_memory_attr() by the relevant set_memory_xx()
and remove set_memory_attr().
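
As an illustration, with the new helpers the DEBUG_PAGEALLOC hook
reduces to toggling the present bit (sketch of the resulting function;
see the pgtable_32.c hunk in the diff below):

	void __kernel_map_pages(struct page *page, int numpages, int enable)
	{
		unsigned long addr = (unsigned long)page_address(page);

		if (PageHighMem(page))
			return;

		if (enable)
			set_memory_p(addr, numpages);	/* set _PAGE_PRESENT back */
		else
			set_memory_np(addr, numpages);	/* clear _PAGE_PRESENT; stray accesses fault */
	}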

Reported-by: Maxime Bizon <mbizon@freebox.fr>
Fixes: c988cfd38e48 ("powerpc/32: use set_memory_attr()")
Cc: stable@vger.kernel.org
Depends-on: 9f7853d7609d ("powerpc/mm: Fix set_memory_*() against concurrent accesses")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Russell Currey <ruscur@russell.cc>
Tested-by: Maxime Bizon <mbizon@freebox.fr>
---
v3: Use _PAGE_PRESENT directly as all platforms have the bit

v2: Add comment to SET_MEMORY_P and SET_MEMORY_NP
---
 arch/powerpc/include/asm/set_memory.h | 12 ++++++++-
 arch/powerpc/mm/pageattr.c            | 39 +++++----------------------
 arch/powerpc/mm/pgtable_32.c          | 24 ++++++++---------
 3 files changed, 28 insertions(+), 47 deletions(-)

diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
index b040094f7920..7ebc807aa8cc 100644
--- a/arch/powerpc/include/asm/set_memory.h
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -6,6 +6,8 @@
 #define SET_MEMORY_RW	1
 #define SET_MEMORY_NX	2
 #define SET_MEMORY_X	3
+#define SET_MEMORY_NP	4	/* Set memory non present */
+#define SET_MEMORY_P	5	/* Set memory present */
 
 int change_memory_attr(unsigned long addr, int numpages, long action);
 
@@ -29,6 +31,14 @@ static inline int set_memory_x(unsigned long addr, int numpages)
 	return change_memory_attr(addr, numpages, SET_MEMORY_X);
 }
 
-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot);
+static inline int set_memory_np(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_NP);
+}
+
+static inline int set_memory_p(unsigned long addr, int numpages)
+{
+	return change_memory_attr(addr, numpages, SET_MEMORY_P);
+}
 
 #endif
diff --git a/arch/powerpc/mm/pageattr.c b/arch/powerpc/mm/pageattr.c
index 8812454e70ff..85753e32a4de 100644
--- a/arch/powerpc/mm/pageattr.c
+++ b/arch/powerpc/mm/pageattr.c
@@ -46,6 +46,12 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
 	case SET_MEMORY_X:
 		pte_update_delta(ptep, addr, _PAGE_KERNEL_RO, _PAGE_KERNEL_ROX);
 		break;
+	case SET_MEMORY_NP:
+		pte_update(&init_mm, addr, ptep, _PAGE_PRESENT, 0, 0);
+		break;
+	case SET_MEMORY_P:
+		pte_update(&init_mm, addr, ptep, 0, _PAGE_PRESENT, 0);
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		break;
@@ -90,36 +96,3 @@ int change_memory_attr(unsigned long addr, int numpages, long action)
 	return apply_to_existing_page_range(&init_mm, start, size,
 					    change_page_attr, (void *)action);
 }
-
-/*
- * Set the attributes of a page:
- *
- * This function is used by PPC32 at the end of init to set final kernel memory
- * protection. It includes changing the maping of the page it is executing from
- * and data pages it is using.
- */
-static int set_page_attr(pte_t *ptep, unsigned long addr, void *data)
-{
-	pgprot_t prot = __pgprot((unsigned long)data);
-
-	spin_lock(&init_mm.page_table_lock);
-
-	set_pte_at(&init_mm, addr, ptep, pte_modify(*ptep, prot));
-	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
-	spin_unlock(&init_mm.page_table_lock);
-
-	return 0;
-}
-
-int set_memory_attr(unsigned long addr, int numpages, pgprot_t prot)
-{
-	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
-	unsigned long sz = numpages * PAGE_SIZE;
-
-	if (numpages <= 0)
-		return 0;
-
-	return apply_to_existing_page_range(&init_mm, start, sz, set_page_attr,
-					    (void *)pgprot_val(prot));
-}
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 906e4e4328b2..f71ededdc02a 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -135,10 +135,12 @@ void mark_initmem_nx(void)
 	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
 				 PFN_DOWN((unsigned long)_sinittext);
 
-	if (v_block_mapped((unsigned long)_sinittext))
+	if (v_block_mapped((unsigned long)_sinittext)) {
 		mmu_mark_initmem_nx();
-	else
-		set_memory_attr((unsigned long)_sinittext, numpages, PAGE_KERNEL);
+	} else {
+		set_memory_nx((unsigned long)_sinittext, numpages);
+		set_memory_rw((unsigned long)_sinittext, numpages);
+	}
 }
 
 #ifdef CONFIG_STRICT_KERNEL_RWX
@@ -152,18 +154,14 @@ void mark_rodata_ro(void)
 		return;
 	}
 
-	numpages = PFN_UP((unsigned long)_etext) -
-		   PFN_DOWN((unsigned long)_stext);
-
-	set_memory_attr((unsigned long)_stext, numpages, PAGE_KERNEL_ROX);
 	/*
-	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
-	 * to cover NOTES and EXCEPTION_TABLE.
+	 * mark .text and .rodata as read only. Use __init_begin rather than
+	 * __end_rodata to cover NOTES and EXCEPTION_TABLE.
 	 */
 	numpages = PFN_UP((unsigned long)__init_begin) -
-		   PFN_DOWN((unsigned long)__start_rodata);
+		   PFN_DOWN((unsigned long)_stext);
 
-	set_memory_attr((unsigned long)__start_rodata, numpages, PAGE_KERNEL_RO);
+	set_memory_ro((unsigned long)_stext, numpages);
 
 	// mark_initmem_nx() should have already run by now
 	ptdump_check_wx();
@@ -179,8 +177,8 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 		return;
 
 	if (enable)
-		set_memory_attr(addr, numpages, PAGE_KERNEL);
+		set_memory_p(addr, numpages);
 	else
-		set_memory_attr(addr, numpages, __pgprot(0));
+		set_memory_np(addr, numpages);
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
-- 
2.33.1

* Re: [PATCH v3 2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr()
  2021-12-24 11:07 ` [PATCH v3 2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr() Christophe Leroy
@ 2022-01-19 12:28   ` Christophe Leroy
  0 siblings, 0 replies; 4+ messages in thread
From: Christophe Leroy @ 2022-01-19 12:28 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linux-kernel, stable, Paul Mackerras, Maxime Bizon, linuxppc-dev

Hi Michael,

Can we get this series into fixes as well?

Thanks
Christophe

On 24/12/2021 at 12:07, Christophe Leroy wrote:
> [... full patch quoted, unchanged from the message above ...]

* Re: [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr()
  2021-12-24 11:07 [PATCH v3 1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr() Christophe Leroy
  2021-12-24 11:07 ` [PATCH v3 2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr() Christophe Leroy
@ 2022-02-16 12:25 ` Michael Ellerman
  1 sibling, 0 replies; 4+ messages in thread
From: Michael Ellerman @ 2022-02-16 12:25 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Christophe Leroy, Michael Ellerman,
	Paul Mackerras
  Cc: Maxime Bizon, linuxppc-dev, linux-kernel

On Fri, 24 Dec 2021 11:07:33 +0000, Christophe Leroy wrote:
> Commit 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
> added a spin_lock() to change_page_attr() in order to safely perform
> its three-step read/modify/write operation. Then commit 9f7853d7609d
> ("powerpc/mm: Fix set_memory_*() against concurrent accesses") modified
> it to use pte_update() so that the operation is performed safely
> against concurrent accesses.
> 
> [...]

Applied to powerpc/next.

[1/2] powerpc/set_memory: Avoid spinlock recursion in change_page_attr()
      https://git.kernel.org/powerpc/c/a4c182ecf33584b9b2d1aa9dad073014a504c01f
[2/2] powerpc: Add set_memory_{p/np}() and remove set_memory_attr()
      https://git.kernel.org/powerpc/c/f222ab83df92acf72691a2021e1f0d99880dcdf1

cheers
