* [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit
@ 2022-04-06 14:57 Michael Ellerman
2022-04-06 14:57 ` [PATCH 2/6] Revert "powerpc: Set max_mapnr correctly" Michael Ellerman
` (5 more replies)
0 siblings, 6 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:57 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
From: Kefeng Wang <wangkefeng.wang@huawei.com>
mpe: On 64-bit Book3E vmalloc space starts at 0x8000000000000000.
Because of the way __pa() works we have:
__pa(0x8000000000000000) == 0, and therefore
virt_to_pfn(0x8000000000000000) == 0, and therefore
virt_addr_valid(0x8000000000000000) == true
Which is wrong, virt_addr_valid() should be false for vmalloc space.
In fact all vmalloc addresses that alias with a valid PFN will return
true from virt_addr_valid(). That can cause bugs with hardened usercopy
as described below by Kefeng Wang:
When running ethtool eth0 on 64-bit Book3E, a BUG occurred:
usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
kernel BUG at mm/usercopy.c:99
...
usercopy_abort+0x64/0xa0 (unreliable)
__check_heap_object+0x168/0x190
__check_object_size+0x1a0/0x200
dev_ethtool+0x2494/0x2b20
dev_ioctl+0x5d0/0x770
sock_do_ioctl+0xf0/0x1d0
sock_ioctl+0x3ec/0x5a0
__se_sys_ioctl+0xf0/0x160
system_call_exception+0xfc/0x1f0
system_call_common+0xf8/0x200
The relevant code is shown below:
data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN))
The data is allocated by vmalloc(), but virt_addr_valid(ptr) returns true
on 64-bit Book3E, which leads to the panic.
As commit 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
and __pa addresses") does, make sure in virt_addr_valid() that the
virtual address is above PAGE_OFFSET on 64-bit, and also add an upper
limit check to make sure the address is below high_memory.
Meanwhile on 32-bit, PAGE_OFFSET is the virtual address of the start of
lowmem and high_memory is the upper bound of the lowmem virtual
addresses, so the same check is suitable there too, and also fixes the
issue addressed by commit 602946ec2f90 ("powerpc: Set max_mapnr
correctly").
On 32-bit there is a similar problem with high memory, that was fixed in
commit 602946ec2f90 ("powerpc: Set max_mapnr correctly"), but that
commit breaks highmem and needs to be reverted.
We can't easily fix __pa(), we have code that relies on its current
behaviour. So for now add extra checks to virt_addr_valid().
For 64-bit Book3S the extra checks are not necessary, the combination of
virt_to_pfn() and pfn_valid() should yield the correct result, but they
are harmless.
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[mpe: Add additional change log detail]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/include/asm/page.h | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..f2c5c26869f1 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -132,7 +132,11 @@ static inline bool pfn_valid(unsigned long pfn)
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
-#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
+#define virt_addr_valid(vaddr) ({ \
+ unsigned long _addr = (unsigned long)vaddr; \
+ _addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory && \
+ pfn_valid(virt_to_pfn(_addr)); \
+})
/*
* On Book-E parts we need __va to parse the device tree and we can't
--
2.34.1
^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 2/6] Revert "powerpc: Set max_mapnr correctly"
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
@ 2022-04-06 14:57 ` Michael Ellerman
2022-04-06 14:57 ` [PATCH 3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu() Michael Ellerman
` (4 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:57 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
From: Kefeng Wang <wangkefeng.wang@huawei.com>
This reverts commit 602946ec2f90d5bd965857753880db29d2d9a1e9.
If CONFIG_HIGHMEM is enabled, no highmem will be added to the page
allocator when max_mapnr is set to max_low_pfn, because the loop in
mem_init() runs zero times:
for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
...
free_highmem_page();
}
Now that virt_addr_valid() has been fixed in the previous commit, we can
revert the change to max_mapnr.
Fixes: 602946ec2f90 ("powerpc: Set max_mapnr correctly")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reported-by: Erhard F. <erhard_f@mailbox.org>
[mpe: Update change log to reflect series reordering]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/mm/mem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 8e301cd8925b..4d221d033804 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -255,7 +255,7 @@ void __init mem_init(void)
#endif
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
- set_max_mapnr(max_low_pfn);
+ set_max_mapnr(max_pfn);
kasan_late_init();
--
2.34.1
* [PATCH 3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu()
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
2022-04-06 14:57 ` [PATCH 2/6] Revert "powerpc: Set max_mapnr correctly" Michael Ellerman
@ 2022-04-06 14:57 ` Michael Ellerman
2022-05-15 10:21 ` Michael Ellerman
2022-04-06 14:58 ` [PATCH 4/6] powerpc/vas: Fix __pa() handling in init_winctx_regs() Michael Ellerman
` (3 subsequent siblings)
5 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:57 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
In smp_85xx_start_cpu() we are passed an address but we're unsure if
it's a real or virtual address, so there's a check to determine that.
The check has an off-by-one in that it tests if the address is greater
than high_memory, but high_memory is the first address of high memory,
so the check should be greater-or-equal.
It seems this has never been a problem in practice, but it also triggers
the DEBUG_VIRTUAL checks in __pa() which we would like to avoid. We can
fix both issues by converting high_memory - 1 to a physical address and
testing against that.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/platforms/85xx/smp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/85xx/smp.c b/arch/powerpc/platforms/85xx/smp.c
index a1c6a7827c8f..9c43cf32f4c9 100644
--- a/arch/powerpc/platforms/85xx/smp.c
+++ b/arch/powerpc/platforms/85xx/smp.c
@@ -208,7 +208,7 @@ static int smp_85xx_start_cpu(int cpu)
* The bootpage and highmem can be accessed via ioremap(), but
* we need to directly access the spinloop if its in lowmem.
*/
- ioremappable = *cpu_rel_addr > virt_to_phys(high_memory);
+ ioremappable = *cpu_rel_addr > virt_to_phys(high_memory - 1);
/* Map the spin table */
if (ioremappable)
--
2.34.1
* [PATCH 4/6] powerpc/vas: Fix __pa() handling in init_winctx_regs()
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
2022-04-06 14:57 ` [PATCH 2/6] Revert "powerpc: Set max_mapnr correctly" Michael Ellerman
2022-04-06 14:57 ` [PATCH 3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu() Michael Ellerman
@ 2022-04-06 14:58 ` Michael Ellerman
2022-04-06 14:58 ` [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses Michael Ellerman
` (2 subsequent siblings)
5 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:58 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
In init_winctx_regs() we call __pa() on winctx->rx_fifo, but some
callers pass a real address, which causes errors when DEBUG_VIRTUAL
is enabled.
So check first if we have a virtual address, and otherwise leave the
address unchanged.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/platforms/powernv/vas-window.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
index 0f8d39fbf2b2..25324390e292 100644
--- a/arch/powerpc/platforms/powernv/vas-window.c
+++ b/arch/powerpc/platforms/powernv/vas-window.c
@@ -404,7 +404,13 @@ static void init_winctx_regs(struct pnv_vas_window *window,
*
* See also: Design note in function header.
*/
- val = __pa(winctx->rx_fifo);
+
+ // Some callers pass virtual addresses, others pass real
+ if (virt_addr_valid(winctx->rx_fifo))
+ val = virt_to_phys(winctx->rx_fifo);
+ else
+ val = (u64)winctx->rx_fifo;
+
val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0);
write_hvwc_reg(window, VREG(LFIFO_BAR), val);
--
2.34.1
* [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
` (2 preceding siblings ...)
2022-04-06 14:58 ` [PATCH 4/6] powerpc/vas: Fix __pa() handling in init_winctx_regs() Michael Ellerman
@ 2022-04-06 14:58 ` Michael Ellerman
2022-04-06 15:18 ` Christophe Leroy
2022-04-06 14:58 ` [RFC PATCH 6/6] powerpc/mm: Add virt_addr_valid() checks Michael Ellerman
2022-04-10 12:27 ` [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
5 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:58 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
We added checks to __pa() / __va() to ensure they're only called with
appropriate addresses. But using BUG_ON() is too strong, it means
virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled.
Instead switch them to warnings; arm64 does the same.
Fixes: 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/include/asm/page.h | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f2c5c26869f1..40a27a56ee40 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -216,6 +216,12 @@ static inline bool pfn_valid(unsigned long pfn)
#define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
#else
#ifdef CONFIG_PPC64
+
+#ifdef CONFIG_DEBUG_VIRTUAL
+#define VIRTUAL_WARN_ON(x) WARN_ON(x)
+#else
+#define VIRTUAL_WARN_ON(x)
+#endif
/*
* gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
* with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
@@ -223,13 +229,13 @@ static inline bool pfn_valid(unsigned long pfn)
*/
#define __va(x) \
({ \
- VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \
+ VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \
(void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \
})
#define __pa(x) \
({ \
- VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \
+ VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \
(unsigned long)(x) & 0x0fffffffffffffffUL; \
})
--
2.34.1
* [RFC PATCH 6/6] powerpc/mm: Add virt_addr_valid() checks
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
` (3 preceding siblings ...)
2022-04-06 14:58 ` [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses Michael Ellerman
@ 2022-04-06 14:58 ` Michael Ellerman
2022-04-10 12:27 ` [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
5 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-06 14:58 UTC (permalink / raw)
To: linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
We've had several bugs now with virt_addr_valid() being wrong, so let's
add some always-enabled boot-time checks that it behaves as expected.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
arch/powerpc/mm/mem.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 4d221d033804..81e9d948a8e8 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -305,6 +305,13 @@ void __init mem_init(void)
MODULES_VADDR, MODULES_END);
#endif
#endif /* CONFIG_PPC32 */
+
+ // Check virt_addr_valid() works as expected
+ WARN_ON(!virt_addr_valid(PAGE_OFFSET));
+ WARN_ON(virt_addr_valid(PAGE_OFFSET - 1));
+ WARN_ON(virt_addr_valid(high_memory));
+ WARN_ON(virt_addr_valid(VMALLOC_START));
+ WARN_ON(virt_addr_valid(VMALLOC_END - 1));
}
void free_initmem(void)
--
2.34.1
* Re: [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses
2022-04-06 14:58 ` [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses Michael Ellerman
@ 2022-04-06 15:18 ` Christophe Leroy
2022-04-08 4:01 ` Michael Ellerman
0 siblings, 1 reply; 10+ messages in thread
From: Christophe Leroy @ 2022-04-06 15:18 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
Le 06/04/2022 à 16:58, Michael Ellerman a écrit :
> We added checks to __pa() / __va() to ensure they're only called with
> appropriate addresses. But using BUG_ON() is too strong, it means
> virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled.
>
> Instead switch them to warnings, arm64 does the same.
>
> Fixes: 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
> arch/powerpc/include/asm/page.h | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index f2c5c26869f1..40a27a56ee40 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -216,6 +216,12 @@ static inline bool pfn_valid(unsigned long pfn)
> #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
> #else
> #ifdef CONFIG_PPC64
> +
> +#ifdef CONFIG_DEBUG_VIRTUAL
> +#define VIRTUAL_WARN_ON(x) WARN_ON(x)
> +#else
> +#define VIRTUAL_WARN_ON(x)
> +#endif
Could be:
#define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
> /*
> * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
> * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
> @@ -223,13 +229,13 @@ static inline bool pfn_valid(unsigned long pfn)
> */
> #define __va(x) \
> ({ \
> - VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \
> + VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \
> (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \
> })
>
> #define __pa(x) \
> ({ \
> - VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \
> + VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \
> (unsigned long)(x) & 0x0fffffffffffffffUL; \
> })
>
Isn't it dangerous to WARN (or BUG) here? __pa() can be used very early
during boot, e.g. in prom_init.c.
Some other architectures have a __pa_nodebug(). The __pa() does the
WARN() then calls __pa_nodebug(). Early users call __pa_nodebug() directly.
Christophe
* Re: [PATCH 5/6] powerpc/64: Only WARN if __pa()/__va() called with bad addresses
2022-04-06 15:18 ` Christophe Leroy
@ 2022-04-08 4:01 ` Michael Ellerman
0 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-08 4:01 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> Le 06/04/2022 à 16:58, Michael Ellerman a écrit :
>> We added checks to __pa() / __va() to ensure they're only called with
>> appropriate addresses. But using BUG_ON() is too strong, it means
>> virt_addr_valid() will BUG when DEBUG_VIRTUAL is enabled.
>>
>> Instead switch them to warnings, arm64 does the same.
>>
>> Fixes: 4dd7554a6456 ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va and __pa addresses")
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> ---
>> arch/powerpc/include/asm/page.h | 10 ++++++++--
>> 1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index f2c5c26869f1..40a27a56ee40 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -216,6 +216,12 @@ static inline bool pfn_valid(unsigned long pfn)
>> #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
>> #else
>> #ifdef CONFIG_PPC64
>> +
>> +#ifdef CONFIG_DEBUG_VIRTUAL
>> +#define VIRTUAL_WARN_ON(x) WARN_ON(x)
>> +#else
>> +#define VIRTUAL_WARN_ON(x)
>> +#endif
>
> Could be:
>
> #define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x))
>
>> /*
>> * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET
>> * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit.
>> @@ -223,13 +229,13 @@ static inline bool pfn_valid(unsigned long pfn)
>> */
>> #define __va(x) \
>> ({ \
>> - VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \
>> + VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \
>> (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \
>> })
>>
>> #define __pa(x) \
>> ({ \
>> - VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \
>> + VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \
>> (unsigned long)(x) & 0x0fffffffffffffffUL; \
>> })
>>
>
> Isn't it dangerous to WARN (or BUG) here ? __pa() can be used very early
> during boot, like in prom_init.c
Yes. WARN is a bit less dangerous though :)
> Some other architectures have a __pa_nodebug(). The __pa() does the
> WARN() then calls __pa_nodebug(). Early users call __pa_nodebug() directly.
Yeah I saw that, we could go that way.
I think possibly the better option is for __pa() to have no checks,
instead the checks go in the higher level routines like virt_to_phys()
and phys_to_virt().
And then we can check uses of __pa() and any that are *not* early boot
or low level stuff can be converted to virt_to_phys().
cheers
* Re: [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit
2022-04-06 14:57 [PATCH 1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit Michael Ellerman
` (4 preceding siblings ...)
2022-04-06 14:58 ` [RFC PATCH 6/6] powerpc/mm: Add virt_addr_valid() checks Michael Ellerman
@ 2022-04-10 12:27 ` Michael Ellerman
5 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-04-10 12:27 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev; +Cc: erhard_f, wangkefeng.wang, npiggin
On Thu, 7 Apr 2022 00:57:57 +1000, Michael Ellerman wrote:
> From: Kefeng Wang <wangkefeng.wang@huawei.com>
>
> mpe: On 64-bit Book3E vmalloc space starts at 0x8000000000000000.
>
> Because of the way __pa() works we have:
> __pa(0x8000000000000000) == 0, and therefore
> virt_to_pfn(0x8000000000000000) == 0, and therefore
> virt_addr_valid(0x8000000000000000) == true
>
> [...]
Patches 1 & 2 applied to powerpc/fixes.
[1/6] powerpc: Fix virt_addr_valid() for 64-bit Book3E & 32-bit
https://git.kernel.org/powerpc/c/ffa0b64e3be58519ae472ea29a1a1ad681e32f48
[2/6] Revert "powerpc: Set max_mapnr correctly"
https://git.kernel.org/powerpc/c/1ff5c8e8c835e8a81c0868e3050c76563dd56a2c
cheers
* Re: [PATCH 3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu()
2022-04-06 14:57 ` [PATCH 3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu() Michael Ellerman
@ 2022-05-15 10:21 ` Michael Ellerman
0 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2022-05-15 10:21 UTC (permalink / raw)
To: Michael Ellerman, linuxppc-dev
On Thu, 7 Apr 2022 00:57:59 +1000, Michael Ellerman wrote:
> In smp_85xx_start_cpu() we are passed an address but we're unsure if
> it's a real or virtual address, so there's a check to determine that.
>
> [...]
Applied to powerpc/next.
[3/6] powerpc/85xx: Fix virt_to_phys() off-by-one in smp_85xx_start_cpu()
https://git.kernel.org/powerpc/c/0d897255e79e26f471d10bbf72db9eee6f9cb723
cheers