* [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation
@ 2019-07-24  8:46 Nicholas Piggin
  2019-07-24  8:46 ` [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split Nicholas Piggin
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Nicholas Piggin @ 2019-07-24  8:46 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Madhavan Srinivasan, Aneesh Kumar K . V, Reza Arbab,
	Anju T Sudhakar, Nicholas Piggin

create_physical_mapping expects physical addresses, but the code that
creates and splits these mappings after boot supplies virtual
(effective) addresses. This can be triggered by booting with mem= to
limit memory, then probing an unused physical memory range:

  echo <addr> > /sys/devices/system/memory/probe

This mostly works by accident, firstly because __va(__va(x)) == __va(x)
so the virtual address does not get corrupted. Secondly because pfn_pte
masks out the upper bits of the pfn beyond the physical address limit,
so a pfn constructed with a 0xc000000000000000 virtual linear address
will be masked back to the correct physical address in the pte.

Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index b4ca9e95e678..c5cc16ab1954 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -902,7 +902,7 @@ int __meminit radix__create_section_mapping(unsigned long start, unsigned long e
 		return -1;
 	}
 
-	return create_physical_mapping(start, end, nid);
+	return create_physical_mapping(__pa(start), __pa(end), nid);
 }
 
 int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
-- 
2.22.0


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
@ 2019-07-24  8:46 ` Nicholas Piggin
  2019-07-24  9:55   ` Aneesh Kumar K.V
  2019-07-24  8:46 ` [PATCH 3/5] powerpc/perf: fix imc allocation failure handling Nicholas Piggin
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2019-07-24  8:46 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Madhavan Srinivasan, Aneesh Kumar K . V, Reza Arbab,
	Anju T Sudhakar, Nicholas Piggin

create_physical_mapping expects physical addresses, but splitting
these mappings on hot-unplug supplies virtual (effective)
addresses.

[I'm not sure how to test this one]

Cc: Balbir Singh <bsingharora@gmail.com>
Fixes: 4dd5f8a99e791 ("powerpc/mm/radix: Split linear mapping on hot-unplug")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index c5cc16ab1954..2204d8eeb784 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -737,8 +737,8 @@ static int __meminit stop_machine_change_mapping(void *data)
 
 	spin_unlock(&init_mm.page_table_lock);
 	pte_clear(&init_mm, params->aligned_start, params->pte);
-	create_physical_mapping(params->aligned_start, params->start, -1);
-	create_physical_mapping(params->end, params->aligned_end, -1);
+	create_physical_mapping(__pa(params->aligned_start), __pa(params->start), -1);
+	create_physical_mapping(__pa(params->end), __pa(params->aligned_end), -1);
 	spin_lock(&init_mm.page_table_lock);
 	return 0;
 }
-- 
2.22.0



* [PATCH 3/5] powerpc/perf: fix imc allocation failure handling
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
  2019-07-24  8:46 ` [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split Nicholas Piggin
@ 2019-07-24  8:46 ` Nicholas Piggin
  2019-07-24  9:58   ` Aneesh Kumar K.V
  2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2019-07-24  8:46 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Madhavan Srinivasan, Aneesh Kumar K . V, Reza Arbab,
	Anju T Sudhakar, Nicholas Piggin

The alloc_pages_node return value should be tested for failure
before being passed to page_address.

Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Tested-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/perf/imc-pmu.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
index dea243185ea4..cb50a9e1fd2d 100644
--- a/arch/powerpc/perf/imc-pmu.c
+++ b/arch/powerpc/perf/imc-pmu.c
@@ -577,6 +577,7 @@ static int core_imc_mem_init(int cpu, int size)
 {
 	int nid, rc = 0, core_id = (cpu / threads_per_core);
 	struct imc_mem_info *mem_info;
+	struct page *page;
 
 	/*
 	 * alloc_pages_node() will allocate memory for core in the
@@ -587,11 +588,12 @@ static int core_imc_mem_init(int cpu, int size)
 	mem_info->id = core_id;
 
 	/* We need only vbase for core counters */
-	mem_info->vbase = page_address(alloc_pages_node(nid,
-					  GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
-					  __GFP_NOWARN, get_order(size)));
-	if (!mem_info->vbase)
+	page = alloc_pages_node(nid,
+				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+				__GFP_NOWARN, get_order(size));
+	if (!page)
 		return -ENOMEM;
+	mem_info->vbase = page_address(page);
 
 	/* Init the mutex */
 	core_imc_refc[core_id].id = core_id;
@@ -849,15 +851,17 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
 	int nid = cpu_to_node(cpu_id);
 
 	if (!local_mem) {
+		struct page *page;
 		/*
 		 * This case could happen only once at start, since we dont
 		 * free the memory in cpu offline path.
 		 */
-		local_mem = page_address(alloc_pages_node(nid,
+		page = alloc_pages_node(nid,
 				  GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
-				  __GFP_NOWARN, get_order(size)));
-		if (!local_mem)
+				  __GFP_NOWARN, get_order(size));
+		if (!page)
 			return -ENOMEM;
+		local_mem = page_address(page);
 
 		per_cpu(thread_imc_mem, cpu_id) = local_mem;
 	}
@@ -1095,11 +1099,14 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
 	int core_id = (cpu_id / threads_per_core);
 
 	if (!local_mem) {
-		local_mem = page_address(alloc_pages_node(phys_id,
-					GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
-					__GFP_NOWARN, get_order(size)));
-		if (!local_mem)
+		struct page *page;
+
+		page = alloc_pages_node(phys_id,
+				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
+				__GFP_NOWARN, get_order(size));
+		if (!page)
 			return -ENOMEM;
+		local_mem = page_address(page);
 		per_cpu(trace_imc_mem, cpu_id) = local_mem;
 
 		/* Initialise the counters for trace mode */
-- 
2.22.0



* [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
  2019-07-24  8:46 ` [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split Nicholas Piggin
  2019-07-24  8:46 ` [PATCH 3/5] powerpc/perf: fix imc allocation failure handling Nicholas Piggin
@ 2019-07-24  8:46 ` Nicholas Piggin
  2019-07-24  9:59   ` Aneesh Kumar K.V
                     ` (2 more replies)
  2019-07-24  8:46 ` [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON Nicholas Piggin
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 15+ messages in thread
From: Nicholas Piggin @ 2019-07-24  8:46 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Madhavan Srinivasan, Aneesh Kumar K . V, Reza Arbab,
	Anju T Sudhakar, Nicholas Piggin

Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
given a virtual address above PAGE_OFFSET.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/page.h | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..c8bb14ff4713 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
 /*
  * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
  * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
+ * This also results in better code generation.
  */
-#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
-#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
+#define __va(x)								\
+({									\
+	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
+	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
+})
+
+#define __pa(x)								\
+({									\
+	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
+	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
+})
 
 #else /* 32-bit, non book E */
 #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
-- 
2.22.0



* [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
                   ` (2 preceding siblings ...)
  2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
@ 2019-07-24  8:46 ` Nicholas Piggin
  2019-07-24  9:59   ` Aneesh Kumar K.V
  2019-07-24  9:54 ` [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Aneesh Kumar K.V
  2019-08-22 13:08 ` Michael Ellerman
  5 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2019-07-24  8:46 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Madhavan Srinivasan, Aneesh Kumar K . V, Reza Arbab,
	Anju T Sudhakar, Nicholas Piggin

pfn_pte is never given a pfn above the addressable physical memory
limit, so the masking is redundant. In case of a software bug, it
is not obviously better to silently truncate the pfn than to corrupt
the pte (either one will result in memory corruption or crashes),
so there is no reason to add this to the fast path.

Add VM_BUG_ON to catch cases where the pfn is invalid. These would
catch the create_section_mapping bug fixed by a previous commit.

  [16885.256466] ------------[ cut here ]------------
  [16885.256492] kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
  cpu 0x0: Vector: 700 (Program Check) at [c0000000ee0a36d0]
      pc: c000000000080738: __map_kernel_page+0x248/0x6f0
      lr: c000000000080ac0: __map_kernel_page+0x5d0/0x6f0
      sp: c0000000ee0a3960
     msr: 9000000000029033
    current = 0xc0000000ec63b400
    paca    = 0xc0000000017f0000   irqmask: 0x03   irq_happened: 0x01
      pid   = 85, comm = sh
  kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
  Linux version 5.3.0-rc1-00001-g0fe93e5f3394
  enter ? for help
  [c0000000ee0a3a00] c000000000d37378 create_physical_mapping+0x260/0x360
  [c0000000ee0a3b10] c000000000d370bc create_section_mapping+0x1c/0x3c
  [c0000000ee0a3b30] c000000000071f54 arch_add_memory+0x74/0x130

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 8308f32e9782..8e47fb85dfa6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -608,8 +608,10 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
  */
 static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
 {
-	return __pte((((pte_basic_t)(pfn) << PAGE_SHIFT) & PTE_RPN_MASK) |
-		     pgprot_val(pgprot));
+	VM_BUG_ON(pfn >> (64 - PAGE_SHIFT));
+	VM_BUG_ON((pfn << PAGE_SHIFT) & ~PTE_RPN_MASK);
+
+	return __pte(((pte_basic_t)pfn << PAGE_SHIFT) | pgprot_val(pgprot));
 }
 
 static inline unsigned long pte_pfn(pte_t pte)
-- 
2.22.0



* Re: [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
                   ` (3 preceding siblings ...)
  2019-07-24  8:46 ` [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON Nicholas Piggin
@ 2019-07-24  9:54 ` Aneesh Kumar K.V
  2019-08-22 13:08 ` Michael Ellerman
  5 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2019-07-24  9:54 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> create_physical_mapping expects physical addresses, but creating and
> splitting these mappings after boot is supplying virtual (effective)
> addresses. This can be irritated by booting with mem= to limit memory
> then probing an unused physical memory range:
>
>   echo <addr> > /sys/devices/system/memory/probe
>
> This mostly works by accident, firstly because __va(__va(x)) == __va(x)
> so the virtual address does not get corrupted. Secondly because pfn_pte
> masks out the upper bits of the pfn beyond the physical address limit,
> so a pfn constructed with a 0xc000000000000000 virtual linear address
> will be masked back to the correct physical address in the pte.
>

Good catch. Did this result in any error?

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
> Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/book3s64/radix_pgtable.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index b4ca9e95e678..c5cc16ab1954 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -902,7 +902,7 @@ int __meminit radix__create_section_mapping(unsigned long start, unsigned long e
>  		return -1;
>  	}
>  
> -	return create_physical_mapping(start, end, nid);
> +	return create_physical_mapping(__pa(start), __pa(end), nid);


While we are here, should we change the prototype to take phys_addr_t ?

>  }
>  
>  int __meminit radix__remove_section_mapping(unsigned long start, unsigned long end)
> -- 
> 2.22.0



* Re: [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split
  2019-07-24  8:46 ` [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split Nicholas Piggin
@ 2019-07-24  9:55   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2019-07-24  9:55 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> create_physical_mapping expects physical addresses, but splitting
> these mapping on hot unplug is supplying virtual (effective)
> addresses.
>
> [I'm not sure how to test this one]
>

Memory hot-unplug with KVM?


Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Cc: Balbir Singh <bsingharora@gmail.com>
> Fixes: 4dd5f8a99e791 ("powerpc/mm/radix: Split linear mapping on hot-unplug")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/mm/book3s64/radix_pgtable.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index c5cc16ab1954..2204d8eeb784 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -737,8 +737,8 @@ static int __meminit stop_machine_change_mapping(void *data)
>  
>  	spin_unlock(&init_mm.page_table_lock);
>  	pte_clear(&init_mm, params->aligned_start, params->pte);
> -	create_physical_mapping(params->aligned_start, params->start, -1);
> -	create_physical_mapping(params->end, params->aligned_end, -1);
> +	create_physical_mapping(__pa(params->aligned_start), __pa(params->start), -1);
> +	create_physical_mapping(__pa(params->end), __pa(params->aligned_end), -1);
>  	spin_lock(&init_mm.page_table_lock);
>  	return 0;
>  }
> -- 
> 2.22.0



* Re: [PATCH 3/5] powerpc/perf: fix imc allocation failure handling
  2019-07-24  8:46 ` [PATCH 3/5] powerpc/perf: fix imc allocation failure handling Nicholas Piggin
@ 2019-07-24  9:58   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2019-07-24  9:58 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> The alloc_pages_node return value should be tested for failure
> before being passed to page_address.
>


Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

Does this need a Fixes: tag? It fixes a real crash, unlike the other
patches in this series.

> Cc: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
> Tested-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/perf/imc-pmu.c | 29 ++++++++++++++++++-----------
>  1 file changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c
> index dea243185ea4..cb50a9e1fd2d 100644
> --- a/arch/powerpc/perf/imc-pmu.c
> +++ b/arch/powerpc/perf/imc-pmu.c
> @@ -577,6 +577,7 @@ static int core_imc_mem_init(int cpu, int size)
>  {
>  	int nid, rc = 0, core_id = (cpu / threads_per_core);
>  	struct imc_mem_info *mem_info;
> +	struct page *page;
>  
>  	/*
>  	 * alloc_pages_node() will allocate memory for core in the
> @@ -587,11 +588,12 @@ static int core_imc_mem_init(int cpu, int size)
>  	mem_info->id = core_id;
>  
>  	/* We need only vbase for core counters */
> -	mem_info->vbase = page_address(alloc_pages_node(nid,
> -					  GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
> -					  __GFP_NOWARN, get_order(size)));
> -	if (!mem_info->vbase)
> +	page = alloc_pages_node(nid,
> +				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
> +				__GFP_NOWARN, get_order(size));
> +	if (!page)
>  		return -ENOMEM;
> +	mem_info->vbase = page_address(page);
>  
>  	/* Init the mutex */
>  	core_imc_refc[core_id].id = core_id;
> @@ -849,15 +851,17 @@ static int thread_imc_mem_alloc(int cpu_id, int size)
>  	int nid = cpu_to_node(cpu_id);
>  
>  	if (!local_mem) {
> +		struct page *page;
>  		/*
>  		 * This case could happen only once at start, since we dont
>  		 * free the memory in cpu offline path.
>  		 */
> -		local_mem = page_address(alloc_pages_node(nid,
> +		page = alloc_pages_node(nid,
>  				  GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
> -				  __GFP_NOWARN, get_order(size)));
> -		if (!local_mem)
> +				  __GFP_NOWARN, get_order(size));
> +		if (!page)
>  			return -ENOMEM;
> +		local_mem = page_address(page);
>  
>  		per_cpu(thread_imc_mem, cpu_id) = local_mem;
>  	}
> @@ -1095,11 +1099,14 @@ static int trace_imc_mem_alloc(int cpu_id, int size)
>  	int core_id = (cpu_id / threads_per_core);
>  
>  	if (!local_mem) {
> -		local_mem = page_address(alloc_pages_node(phys_id,
> -					GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
> -					__GFP_NOWARN, get_order(size)));
> -		if (!local_mem)
> +		struct page *page;
> +
> +		page = alloc_pages_node(phys_id,
> +				GFP_KERNEL | __GFP_ZERO | __GFP_THISNODE |
> +				__GFP_NOWARN, get_order(size));
> +		if (!page)
>  			return -ENOMEM;
> +		local_mem = page_address(page);
>  		per_cpu(trace_imc_mem, cpu_id) = local_mem;
>  
>  		/* Initialise the counters for trace mode */
> -- 
> 2.22.0



* Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
@ 2019-07-24  9:59   ` Aneesh Kumar K.V
  2019-07-29 11:57   ` Christophe Leroy
  2021-12-24 13:24   ` Christophe Leroy
  2 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2019-07-24  9:59 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
> given a virtual address above PAGE_OFFSET.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/include/asm/page.h | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 0d52f57fca04..c8bb14ff4713 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
>  /*
>   * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>   * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
> + * This also results in better code generation.
>   */
> -#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
> -#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
> +#define __va(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
> +	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
> +})

Can we make that static inline? Is there a need for __pa and __va to be
#define like we do #ifndef anywhere?

> +
> +#define __pa(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
> +	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
> +})
>  
>  #else /* 32-bit, non book E */
>  #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
> -- 
> 2.22.0



* Re: [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON
  2019-07-24  8:46 ` [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON Nicholas Piggin
@ 2019-07-24  9:59   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 15+ messages in thread
From: Aneesh Kumar K.V @ 2019-07-24  9:59 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab, Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> pfn_pte is never given a pte above the addressable physical memory
> limit, so the masking is redundant. In case of a software bug, it
> is not obviously better to silently truncate the pfn than to corrupt
> the pte (either one will result in memory corruption or crashes),
> so there is no reason to add this to the fast path.
>
> Add VM_BUG_ON to catch cases where the pfn is invalid. These would
> catch the create_section_mapping bug fixed by a previous commit.
>
>   [16885.256466] ------------[ cut here ]------------
>   [16885.256492] kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
>   cpu 0x0: Vector: 700 (Program Check) at [c0000000ee0a36d0]
>       pc: c000000000080738: __map_kernel_page+0x248/0x6f0
>       lr: c000000000080ac0: __map_kernel_page+0x5d0/0x6f0
>       sp: c0000000ee0a3960
>      msr: 9000000000029033
>     current = 0xc0000000ec63b400
>     paca    = 0xc0000000017f0000   irqmask: 0x03   irq_happened: 0x01
>       pid   = 85, comm = sh
>   kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
>   Linux version 5.3.0-rc1-00001-g0fe93e5f3394
>   enter ? for help
>   [c0000000ee0a3a00] c000000000d37378 create_physical_mapping+0x260/0x360
>   [c0000000ee0a3b10] c000000000d370bc create_section_mapping+0x1c/0x3c
>   [c0000000ee0a3b30] c000000000071f54 arch_add_memory+0x74/0x130
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 8308f32e9782..8e47fb85dfa6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -608,8 +608,10 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
>   */
>  static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
>  {
> -	return __pte((((pte_basic_t)(pfn) << PAGE_SHIFT) & PTE_RPN_MASK) |
> -		     pgprot_val(pgprot));
> +	VM_BUG_ON(pfn >> (64 - PAGE_SHIFT));
> +	VM_BUG_ON((pfn << PAGE_SHIFT) & ~PTE_RPN_MASK);
> +
> +	return __pte(((pte_basic_t)pfn << PAGE_SHIFT) | pgprot_val(pgprot));
>  }
>  
>  static inline unsigned long pte_pfn(pte_t pte)
> -- 
> 2.22.0



* Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
  2019-07-24  9:59   ` Aneesh Kumar K.V
@ 2019-07-29 11:57   ` Christophe Leroy
  2021-12-24 13:24   ` Christophe Leroy
  2 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:57 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Aneesh Kumar K . V, Madhavan Srinivasan, Anju T Sudhakar, Reza Arbab



On 24/07/2019 at 10:46, Nicholas Piggin wrote:
> Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
> given a virtual address above PAGE_OFFSET.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/page.h | 14 ++++++++++++--
>   1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 0d52f57fca04..c8bb14ff4713 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
>   /*
>    * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>    * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
> + * This also results in better code generation.
>    */
> -#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
> -#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
> +#define __va(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\

Do we really want to add a BUG_ON here ?
Can't we just add a WARN_ON, like in 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/arch/powerpc/include/asm/io.h?id=6bf752daca07c85c181159f75dcf65b12056883b 
?

> +	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
> +})
> +
> +#define __pa(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\

Same

> +	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
> +})
>   
>   #else /* 32-bit, non book E */
>   #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
> 

Would it be possible to change those macros into static inlines ?

Christophe


* Re: [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation
  2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
                   ` (4 preceding siblings ...)
  2019-07-24  9:54 ` [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Aneesh Kumar K.V
@ 2019-08-22 13:08 ` Michael Ellerman
  5 siblings, 0 replies; 15+ messages in thread
From: Michael Ellerman @ 2019-08-22 13:08 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Aneesh Kumar K . V, Madhavan Srinivasan, Nicholas Piggin,
	Anju T Sudhakar, Reza Arbab

On Wed, 2019-07-24 at 08:46:34 UTC, Nicholas Piggin wrote:
> create_physical_mapping expects physical addresses, but creating and
> splitting these mappings after boot is supplying virtual (effective)
> addresses. This can be irritated by booting with mem= to limit memory
> then probing an unused physical memory range:
> 
>   echo <addr> > /sys/devices/system/memory/probe
> 
> This mostly works by accident, firstly because __va(__va(x)) == __va(x)
> so the virtual address does not get corrupted. Secondly because pfn_pte
> masks out the upper bits of the pfn beyond the physical address limit,
> so a pfn constructed with a 0xc000000000000000 virtual linear address
> will be masked back to the correct physical address in the pte.
> 
> Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
> Fixes: 6cc27341b21a8 ("powerpc/mm: add radix__create_section_mapping()")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/8f51e3929470942e6a8744061254fdeef646cd36

cheers


* Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
  2019-07-24  9:59   ` Aneesh Kumar K.V
  2019-07-29 11:57   ` Christophe Leroy
@ 2021-12-24 13:24   ` Christophe Leroy
  2021-12-25 10:10     ` Nicholas Piggin
  2 siblings, 1 reply; 15+ messages in thread
From: Christophe Leroy @ 2021-12-24 13:24 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Aneesh Kumar K . V, Madhavan Srinivasan, Anju T Sudhakar, Reza Arbab

Hi Nic,

On 24/07/2019 at 10:46, Nicholas Piggin wrote:
> Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
> given a virtual address above PAGE_OFFSET.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/page.h | 14 ++++++++++++--
>   1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 0d52f57fca04..c8bb14ff4713 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
>   /*
>    * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>    * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
> + * This also results in better code generation.
>    */
> -#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
> -#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
> +#define __va(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
> +	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
> +})
> +
> +#define __pa(x)								\
> +({									\
> +	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\

With this, virt_addr_valid() is likely to BUG on an invalid address.

I think the purpose of virt_addr_valid() is to check addresses 
seamlessly, see check_heap_object()


> +	(unsigned long)(x) & 0x0fffffffffffffffUL;			\
> +})
>   
>   #else /* 32-bit, non book E */
>   #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + PAGE_OFFSET - MEMORY_START))
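
The failure mode Christophe describes can be sketched in plain C. This is only an illustration: the `sketch_` names below are hypothetical stand-ins, not the real kernel macros, and the BUG is recorded in a flag here so the effect is observable.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the powerpc/64 values; the real ones live
 * in arch/powerpc/include/asm/page.h. */
#define SKETCH_PAGE_OFFSET 0xc000000000000000UL
#define SKETCH_PA_MASK     0x0fffffffffffffffUL
#define SKETCH_MAX_PFN     0x100000UL	/* pretend pfn_valid() bound */
#define SKETCH_PAGE_SHIFT  16

/* Records the hit instead of BUG()ing, so it can be observed here. */
static bool sketch_bug_hit;
#define SKETCH_VIRTUAL_BUG_ON(cond) \
	do { if (cond) sketch_bug_hit = true; } while (0)

/* __pa() with the patch's check: only linear-map virtual addresses
 * (>= PAGE_OFFSET) are legal input. */
static unsigned long sketch_pa(unsigned long vaddr)
{
	SKETCH_VIRTUAL_BUG_ON(vaddr < SKETCH_PAGE_OFFSET);
	return vaddr & SKETCH_PA_MASK;
}

/* A virt_addr_valid() in the pre-patch style: it pushes any address,
 * valid or not, through __pa() and relies on the pfn bound to say no. */
static bool sketch_old_virt_addr_valid(unsigned long vaddr)
{
	return (sketch_pa(vaddr) >> SKETCH_PAGE_SHIFT) < SKETCH_MAX_PFN;
}
```

Probing an address below PAGE_OFFSET through this virt_addr_valid() trips the check rather than returning false, which is exactly the check_heap_object() problem above.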


* Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2021-12-24 13:24   ` Christophe Leroy
@ 2021-12-25 10:10     ` Nicholas Piggin
  2021-12-26 17:20       ` Christophe Leroy
  0 siblings, 1 reply; 15+ messages in thread
From: Nicholas Piggin @ 2021-12-25 10:10 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev
  Cc: Aneesh Kumar K . V, Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab

Excerpts from Christophe Leroy's message of December 24, 2021 11:24 pm:
> Hi Nic,
> 
> On 24/07/2019 at 10:46, Nicholas Piggin wrote:
>> Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
>> given a virtual address above PAGE_OFFSET.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/page.h | 14 ++++++++++++--
>>   1 file changed, 12 insertions(+), 2 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index 0d52f57fca04..c8bb14ff4713 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
>>   /*
>>    * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>>    * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
>> + * This also results in better code generation.
>>    */
>> -#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
>> -#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
>> +#define __va(x)								\
>> +({									\
>> +	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
>> +	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
>> +})
>> +
>> +#define __pa(x)								\
>> +({									\
>> +	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
> 
> With this, it is likely that virt_addr_valid() BUGs on an invalid address.
> 
> I think the purpose of virt_addr_valid() is to check addresses 
> seamlessly; see check_heap_object().

Looks like you're right. How did you catch that?

We could change virt_addr_valid() to make that test first. x86 and arm64
both do an explicit check rather than relying on !pfn_valid() for
addresses below PAGE_OFFSET.

Thanks,
Nick
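
A rough userspace sketch of that reordering, with the range test done before __pa() is ever reached (the `sketch_` names are hypothetical, not the actual kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the powerpc/64 values. */
#define SKETCH_PAGE_OFFSET 0xc000000000000000UL
#define SKETCH_PA_MASK     0x0fffffffffffffffUL
#define SKETCH_MAX_PFN     0x100000UL	/* pretend pfn_valid() bound */
#define SKETCH_PAGE_SHIFT  16

static unsigned long sketch_pa(unsigned long vaddr)
{
	/* Safe here: the caller below has already filtered out-of-range
	 * input, so a VIRTUAL_BUG_ON in __pa() would never fire. */
	return vaddr & SKETCH_PA_MASK;
}

/* virt_addr_valid() with the bounds test up front: addresses below
 * PAGE_OFFSET are rejected before __pa() sees them, which is roughly
 * what x86 and arm64 do. */
static bool sketch_virt_addr_valid(unsigned long vaddr)
{
	if (vaddr < SKETCH_PAGE_OFFSET)
		return false;
	return (sketch_pa(vaddr) >> SKETCH_PAGE_SHIFT) < SKETCH_MAX_PFN;
}
```

With this ordering, virt_addr_valid() stays usable as a seamless probe while __pa() keeps its strict check for all other callers.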


* Re: [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses
  2021-12-25 10:10     ` Nicholas Piggin
@ 2021-12-26 17:20       ` Christophe Leroy
  0 siblings, 0 replies; 15+ messages in thread
From: Christophe Leroy @ 2021-12-26 17:20 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev
  Cc: Aneesh Kumar K . V, Anju T Sudhakar, Madhavan Srinivasan, Reza Arbab



On 25/12/2021 at 11:10, Nicholas Piggin wrote:
> Excerpts from Christophe Leroy's message of December 24, 2021 11:24 pm:
>> Hi Nic,
>>
>> On 24/07/2019 at 10:46, Nicholas Piggin wrote:
>>> Ensure __va is given a physical address below PAGE_OFFSET, and __pa is
>>> given a virtual address above PAGE_OFFSET.
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>    arch/powerpc/include/asm/page.h | 14 ++++++++++++--
>>>    1 file changed, 12 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>>> index 0d52f57fca04..c8bb14ff4713 100644
>>> --- a/arch/powerpc/include/asm/page.h
>>> +++ b/arch/powerpc/include/asm/page.h
>>> @@ -215,9 +215,19 @@ static inline bool pfn_valid(unsigned long pfn)
>>>    /*
>>>     * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>>>     * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
>>> + * This also results in better code generation.
>>>     */
>>> -#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
>>> -#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
>>> +#define __va(x)								\
>>> +({									\
>>> +	VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET);		\
>>> +	(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET);	\
>>> +})
>>> +
>>> +#define __pa(x)								\
>>> +({									\
>>> +	VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET);		\
>>
>> With this, it is likely that virt_addr_valid() BUGs on an invalid address.
>>
>> I think the purpose of virt_addr_valid() is to check addresses
>> seamlessly; see check_heap_object().
> 
> Looks like you're right. How did you catch that?

I caught that while looking at the problem reported by Kefeng, where he 
says that virt_addr_valid() reports true on vmalloc'ed memory on book3e/64.


> 
> We could change virt_addr_valid() to make that test first. x86 and arm64
> both do an explicit check rather than relying on !pfn_valid() for
> addresses below PAGE_OFFSET.

That should work.

Maybe we should also implement a __pa_nodebug() like x86 and arm64 do?

Christophe
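
A minimal sketch of that split, assuming it mirrors the x86/arm64 approach of pairing a checked __pa() with an unchecked __pa_nodebug() (all `sketch_` names are hypothetical, not real kernel code):

```c
#include <assert.h>

/* Hypothetical stand-ins for the powerpc/64 values. */
#define SKETCH_PAGE_OFFSET 0xc000000000000000UL

/* Unchecked conversion, usable on any input; this is the flavour
 * virt_addr_valid() and similar seamless probes would call. */
#define sketch_pa_nodebug(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)

/* Stand-in for VIRTUAL_BUG_ON; the real kernel would BUG(). */
#define SKETCH_VIRTUAL_BUG_ON(cond) assert(!(cond))

/* Checked conversion for ordinary callers, built on the nodebug one:
 * any address below PAGE_OFFSET is a caller bug. */
static unsigned long sketch_pa(unsigned long x)
{
	SKETCH_VIRTUAL_BUG_ON(x < SKETCH_PAGE_OFFSET);
	return sketch_pa_nodebug(x);
}
```

The point of the split is that strictness becomes opt-out per call site instead of being forced on callers that legitimately probe arbitrary addresses.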


end of thread

Thread overview: 15+ messages
2019-07-24  8:46 [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Nicholas Piggin
2019-07-24  8:46 ` [PATCH 2/5] powerpc/64s/radix: Fix memory hot-unplug page table split Nicholas Piggin
2019-07-24  9:55   ` Aneesh Kumar K.V
2019-07-24  8:46 ` [PATCH 3/5] powerpc/perf: fix imc allocation failure handling Nicholas Piggin
2019-07-24  9:58   ` Aneesh Kumar K.V
2019-07-24  8:46 ` [PATCH 4/5] powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses Nicholas Piggin
2019-07-24  9:59   ` Aneesh Kumar K.V
2019-07-29 11:57   ` Christophe Leroy
2021-12-24 13:24   ` Christophe Leroy
2021-12-25 10:10     ` Nicholas Piggin
2021-12-26 17:20       ` Christophe Leroy
2019-07-24  8:46 ` [PATCH 5/5] powerpc/64s/radix: Remove redundant pfn_pte bitop, add VM_BUG_ON Nicholas Piggin
2019-07-24  9:59   ` Aneesh Kumar K.V
2019-07-24  9:54 ` [PATCH 1/5] powerpc/64s/radix: Fix memory hotplug section page table creation Aneesh Kumar K.V
2019-08-22 13:08 ` Michael Ellerman
