* [PATCH v5 0/6] Use C inlines for uaccess
@ 2020-01-02 21:13 Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards Pavel Tatashin
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
Changelog:
v5:
- Fixed build issue reported by kbuild
- Removed a comment fix from the first patch
- Added reviewed-by's from Julien Grall
- Now, these patches apply against mainline
v4:
- Split the first patch into two, as Julien Grall suggested
- Also as Mark Rutland suggested removed alternatives
from __asm_flush_cache_user_range.
v3:
- Added Acked-by from Stefano Stabellini
- Addressed comments from Mark Rutland
v2:
- Addressed Russell King's concern by not adding
uaccess_* to ARM.
- Removed the accidental change to xtensa
Convert the remaining uaccess_* calls from ASM macros to C inlines.
These patches apply against mainline. I boot tested the ARM64
changes and compile tested the ARM changes.
Previous discussions:
v4: https://lore.kernel.org/lkml/20191204232058.2500117-1-pasha.tatashin@soleen.com
v3: https://lore.kernel.org/lkml/20191127184453.229321-1-pasha.tatashin@soleen.com
v2: https://lore.kernel.org/lkml/20191122022406.590141-1-pasha.tatashin@soleen.com
v1: https://lore.kernel.org/lkml/20191121184805.414758-1-pasha.tatashin@soleen.com
Pavel Tatashin (6):
arm/arm64/xen: hypercall.h add includes guards
arm/arm64/xen: use C inlines for privcmd_call
arm64: remove uaccess_ttbr0 asm macros from cache functions
arm64: remove __asm_flush_icache_range
arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C
arm64: remove the rest of asm-uaccess.h
arch/arm/include/asm/xen/hypercall.h | 10 ++++
arch/arm/xen/enlighten.c | 2 +-
arch/arm/xen/hypercall.S | 4 +-
arch/arm64/include/asm/asm-uaccess.h | 61 -----------------------
arch/arm64/include/asm/cacheflush.h | 55 +++++++++++++++++++--
arch/arm64/include/asm/xen/hypercall.h | 28 +++++++++++
arch/arm64/kernel/entry.S | 27 +++++++++-
arch/arm64/lib/clear_user.S | 2 +-
arch/arm64/lib/copy_from_user.S | 2 +-
arch/arm64/lib/copy_in_user.S | 2 +-
arch/arm64/lib/copy_to_user.S | 2 +-
arch/arm64/mm/cache.S | 68 +++++---------------------
arch/arm64/mm/flush.c | 3 +-
arch/arm64/xen/hypercall.S | 19 +------
include/xen/arm/hypercall.h | 12 ++---
15 files changed, 146 insertions(+), 151 deletions(-)
delete mode 100644 arch/arm64/include/asm/asm-uaccess.h
--
2.17.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
2020-01-06 17:18 ` Stefano Stabellini
2020-01-02 21:13 ` [PATCH v5 2/6] arm/arm64/xen: use C inlines for privcmd_call Pavel Tatashin
` (4 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
The arm and arm64 versions of hypercall.h are missing include
guards. These are needed because C inlines for privcmd_call are going
to be added to these files.
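For readers less familiar with the idiom, the guard being added is the standard C preprocessor include-guard pattern. A minimal sketch, using a hypothetical DEMO_GUARD_H name in place of the real _ASM_ARM_XEN_HYPERCALL_H macro:

```c
/*
 * Minimal sketch of the include-guard idiom this patch adds.
 * DEMO_GUARD_H is a hypothetical stand-in for the real
 * _ASM_ARM_XEN_HYPERCALL_H guard macro.
 */
#ifndef DEMO_GUARD_H
#define DEMO_GUARD_H
/* Header body: compiled on the first inclusion only. */
static int demo_body_version(void) { return 1; }
#endif /* DEMO_GUARD_H */

/*
 * A second textual inclusion of the same header is now a no-op,
 * so this redefinition is never compiled:
 */
#ifndef DEMO_GUARD_H
static int demo_body_version(void) { return 2; }
#endif
```

Without the guard, the second inclusion would redefine the body and break the build once the headers grow inline functions, which is exactly what the next patch does.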
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Julien Grall <julien@xen.org>
---
arch/arm/include/asm/xen/hypercall.h | 4 ++++
arch/arm64/include/asm/xen/hypercall.h | 4 ++++
include/xen/arm/hypercall.h | 6 +++---
3 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index 3522cbaed316..c6882bba5284 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -1 +1,5 @@
+#ifndef _ASM_ARM_XEN_HYPERCALL_H
+#define _ASM_ARM_XEN_HYPERCALL_H
#include <xen/arm/hypercall.h>
+
+#endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/arch/arm64/include/asm/xen/hypercall.h b/arch/arm64/include/asm/xen/hypercall.h
index 3522cbaed316..c3198f9ccd2e 100644
--- a/arch/arm64/include/asm/xen/hypercall.h
+++ b/arch/arm64/include/asm/xen/hypercall.h
@@ -1 +1,5 @@
+#ifndef _ASM_ARM64_XEN_HYPERCALL_H
+#define _ASM_ARM64_XEN_HYPERCALL_H
#include <xen/arm/hypercall.h>
+
+#endif /* _ASM_ARM64_XEN_HYPERCALL_H */
diff --git a/include/xen/arm/hypercall.h b/include/xen/arm/hypercall.h
index b40485e54d80..babcc08af965 100644
--- a/include/xen/arm/hypercall.h
+++ b/include/xen/arm/hypercall.h
@@ -30,8 +30,8 @@
* IN THE SOFTWARE.
*/
-#ifndef _ASM_ARM_XEN_HYPERCALL_H
-#define _ASM_ARM_XEN_HYPERCALL_H
+#ifndef _ARM_XEN_HYPERCALL_H
+#define _ARM_XEN_HYPERCALL_H
#include <linux/bug.h>
@@ -88,4 +88,4 @@ MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req,
BUG();
}
-#endif /* _ASM_ARM_XEN_HYPERCALL_H */
+#endif /* _ARM_XEN_HYPERCALL_H */
--
2.17.1
* [PATCH v5 2/6] arm/arm64/xen: use C inlines for privcmd_call
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions Pavel Tatashin
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
privcmd_call requires access to userspace memory to be enabled for
the duration of the hypercall.
Currently, this is done via assembly macros. Change it to C
inlines instead.
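The shape of the change can be sketched in plain C. The demo_* names below are hypothetical stand-ins for uaccess_ttbr0_enable/disable and the renamed arch_privcmd_call assembly entry point; this is an illustration of the wrapper pattern, not the kernel's actual implementation:

```c
/* Tracks whether userspace access is currently "enabled". */
static int demo_uaccess_enabled;

static void demo_uaccess_ttbr0_enable(void)  { demo_uaccess_enabled = 1; }
static void demo_uaccess_ttbr0_disable(void) { demo_uaccess_enabled = 0; }

/*
 * Stand-in for the arch_privcmd_call asm entry point: fails unless
 * the caller opened the uaccess window first.
 */
static long demo_arch_privcmd_call(unsigned int call)
{
	return demo_uaccess_enabled ? 0 : -14 /* -EFAULT */;
}

/*
 * The C inline wrapper this patch introduces: userspace access is
 * enabled only for the duration of the hypercall, and the asm side
 * no longer needs to touch TTBR0 itself.
 */
static inline long demo_privcmd_call(unsigned int call)
{
	long rv;

	demo_uaccess_ttbr0_enable();
	rv = demo_arch_privcmd_call(call);
	demo_uaccess_ttbr0_disable();

	return rv;
}
```

Keeping the enable/disable pair in C means the logic lives in one place instead of being duplicated as assembly macros per call site.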
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Julien Grall <julien@xen.org>
---
arch/arm/include/asm/xen/hypercall.h | 6 ++++++
arch/arm/xen/enlighten.c | 2 +-
arch/arm/xen/hypercall.S | 4 ++--
arch/arm64/include/asm/xen/hypercall.h | 24 ++++++++++++++++++++++++
arch/arm64/xen/hypercall.S | 19 ++-----------------
include/xen/arm/hypercall.h | 6 +++---
6 files changed, 38 insertions(+), 23 deletions(-)
diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index c6882bba5284..cac5bd9ef519 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -2,4 +2,10 @@
#define _ASM_ARM_XEN_HYPERCALL_H
#include <xen/arm/hypercall.h>
+static inline long privcmd_call(unsigned int call, unsigned long a1,
+ unsigned long a2, unsigned long a3,
+ unsigned long a4, unsigned long a5)
+{
+ return arch_privcmd_call(call, a1, a2, a3, a4, a5);
+}
#endif /* _ASM_ARM_XEN_HYPERCALL_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index dd6804a64f1a..e87280c6d25d 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -440,4 +440,4 @@ EXPORT_SYMBOL_GPL(HYPERVISOR_platform_op_raw);
EXPORT_SYMBOL_GPL(HYPERVISOR_multicall);
EXPORT_SYMBOL_GPL(HYPERVISOR_vm_assist);
EXPORT_SYMBOL_GPL(HYPERVISOR_dm_op);
-EXPORT_SYMBOL_GPL(privcmd_call);
+EXPORT_SYMBOL_GPL(arch_privcmd_call);
diff --git a/arch/arm/xen/hypercall.S b/arch/arm/xen/hypercall.S
index b11bba542fac..277078c7da49 100644
--- a/arch/arm/xen/hypercall.S
+++ b/arch/arm/xen/hypercall.S
@@ -94,7 +94,7 @@ HYPERCALL2(multicall);
HYPERCALL2(vm_assist);
HYPERCALL3(dm_op);
-ENTRY(privcmd_call)
+ENTRY(arch_privcmd_call)
stmdb sp!, {r4}
mov r12, r0
mov r0, r1
@@ -119,4 +119,4 @@ ENTRY(privcmd_call)
ldm sp!, {r4}
ret lr
-ENDPROC(privcmd_call);
+ENDPROC(arch_privcmd_call);
diff --git a/arch/arm64/include/asm/xen/hypercall.h b/arch/arm64/include/asm/xen/hypercall.h
index c3198f9ccd2e..1a74fb28607f 100644
--- a/arch/arm64/include/asm/xen/hypercall.h
+++ b/arch/arm64/include/asm/xen/hypercall.h
@@ -1,5 +1,29 @@
#ifndef _ASM_ARM64_XEN_HYPERCALL_H
#define _ASM_ARM64_XEN_HYPERCALL_H
#include <xen/arm/hypercall.h>
+#include <linux/uaccess.h>
+static inline long privcmd_call(unsigned int call, unsigned long a1,
+ unsigned long a2, unsigned long a3,
+ unsigned long a4, unsigned long a5)
+{
+ long rv;
+
+ /*
+ * Privcmd calls are issued by the userspace. The kernel needs to
+ * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
+ * translations to user memory via AT instructions. Since AT
+ * instructions are not affected by the PAN bit (ARMv8.1), we only
+ * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
+ * is enabled (it implies that hardware UAO and PAN disabled).
+ */
+ uaccess_ttbr0_enable();
+ rv = arch_privcmd_call(call, a1, a2, a3, a4, a5);
+ /*
+ * Disable userspace access from kernel once the hyp call completed.
+ */
+ uaccess_ttbr0_disable();
+
+ return rv;
+}
#endif /* _ASM_ARM64_XEN_HYPERCALL_H */
diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
index c5f05c4a4d00..921611778d2a 100644
--- a/arch/arm64/xen/hypercall.S
+++ b/arch/arm64/xen/hypercall.S
@@ -49,7 +49,6 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
-#include <asm/asm-uaccess.h>
#include <xen/interface/xen.h>
@@ -86,27 +85,13 @@ HYPERCALL2(multicall);
HYPERCALL2(vm_assist);
HYPERCALL3(dm_op);
-ENTRY(privcmd_call)
+ENTRY(arch_privcmd_call)
mov x16, x0
mov x0, x1
mov x1, x2
mov x2, x3
mov x3, x4
mov x4, x5
- /*
- * Privcmd calls are issued by the userspace. The kernel needs to
- * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
- * translations to user memory via AT instructions. Since AT
- * instructions are not affected by the PAN bit (ARMv8.1), we only
- * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
- * is enabled (it implies that hardware UAO and PAN disabled).
- */
- uaccess_ttbr0_enable x6, x7, x8
hvc XEN_IMM
-
- /*
- * Disable userspace access from kernel once the hyp call completed.
- */
- uaccess_ttbr0_disable x6, x7
ret
-ENDPROC(privcmd_call);
+ENDPROC(arch_privcmd_call);
diff --git a/include/xen/arm/hypercall.h b/include/xen/arm/hypercall.h
index babcc08af965..624c8ad7e42a 100644
--- a/include/xen/arm/hypercall.h
+++ b/include/xen/arm/hypercall.h
@@ -41,9 +41,9 @@
struct xen_dm_op_buf;
-long privcmd_call(unsigned call, unsigned long a1,
- unsigned long a2, unsigned long a3,
- unsigned long a4, unsigned long a5);
+long arch_privcmd_call(unsigned int call, unsigned long a1,
+ unsigned long a2, unsigned long a3,
+ unsigned long a4, unsigned long a5);
int HYPERVISOR_xen_version(int cmd, void *arg);
int HYPERVISOR_console_io(int cmd, int count, char *str);
int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count);
--
2.17.1
* [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 2/6] arm/arm64/xen: use C inlines for privcmd_call Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
2020-01-14 18:14 ` Will Deacon
2020-01-02 21:13 ` [PATCH v5 4/6] arm64: remove __asm_flush_icache_range Pavel Tatashin
` (2 subsequent siblings)
5 siblings, 1 reply; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
We currently duplicate the logic to enable/disable uaccess via TTBR0,
with C functions and assembly macros. This is a maintenance burden
and is liable to lead to subtle bugs, so let's get rid of the assembly
macros, and always use the C functions. This requires refactoring
some assembly functions to have a C wrapper.
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
arch/arm64/include/asm/asm-uaccess.h | 22 ----------------
arch/arm64/include/asm/cacheflush.h | 39 +++++++++++++++++++++++++---
arch/arm64/mm/cache.S | 36 ++++++++++---------------
arch/arm64/mm/flush.c | 2 +-
4 files changed, 50 insertions(+), 49 deletions(-)
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index f68a0e64482a..fba2a69f7fef 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -34,28 +34,6 @@
msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
isb
.endm
-
- .macro uaccess_ttbr0_disable, tmp1, tmp2
-alternative_if_not ARM64_HAS_PAN
- save_and_disable_irq \tmp2 // avoid preemption
- __uaccess_ttbr0_disable \tmp1
- restore_irq \tmp2
-alternative_else_nop_endif
- .endm
-
- .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
-alternative_if_not ARM64_HAS_PAN
- save_and_disable_irq \tmp3 // avoid preemption
- __uaccess_ttbr0_enable \tmp1, \tmp2
- restore_irq \tmp3
-alternative_else_nop_endif
- .endm
-#else
- .macro uaccess_ttbr0_disable, tmp1, tmp2
- .endm
-
- .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
- .endm
#endif
#endif
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 665c78e0665a..cb00c61e0bde 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -61,16 +61,49 @@
* - kaddr - page address
* - size - region size
*/
-extern void __flush_icache_range(unsigned long start, unsigned long end);
-extern int invalidate_icache_range(unsigned long start, unsigned long end);
+extern void __asm_flush_icache_range(unsigned long start, unsigned long end);
+extern long __asm_flush_cache_user_range(unsigned long start,
+ unsigned long end);
+extern int __asm_invalidate_icache_range(unsigned long start,
+ unsigned long end);
extern void __flush_dcache_area(void *addr, size_t len);
extern void __inval_dcache_area(void *addr, size_t len);
extern void __clean_dcache_area_poc(void *addr, size_t len);
extern void __clean_dcache_area_pop(void *addr, size_t len);
extern void __clean_dcache_area_pou(void *addr, size_t len);
-extern long __flush_cache_user_range(unsigned long start, unsigned long end);
extern void sync_icache_aliases(void *kaddr, unsigned long len);
+static inline long __flush_cache_user_range(unsigned long start,
+ unsigned long end)
+{
+ int ret;
+
+ uaccess_ttbr0_enable();
+ ret = __asm_flush_cache_user_range(start, end);
+ uaccess_ttbr0_disable();
+
+ return ret;
+}
+
+static inline void __flush_icache_range(unsigned long start, unsigned long end)
+{
+ uaccess_ttbr0_enable();
+ __asm_flush_icache_range(start, end);
+ uaccess_ttbr0_disable();
+}
+
+static inline int invalidate_icache_range(unsigned long start,
+ unsigned long end)
+{
+ int ret;
+
+ uaccess_ttbr0_enable();
+ ret = __asm_invalidate_icache_range(start, end);
+ uaccess_ttbr0_disable();
+
+ return ret;
+}
+
static inline void flush_icache_range(unsigned long start, unsigned long end)
{
__flush_icache_range(start, end);
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index db767b072601..602b9aa8603a 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -15,7 +15,7 @@
#include <asm/asm-uaccess.h>
/*
- * flush_icache_range(start,end)
+ * __asm_flush_icache_range(start,end)
*
* Ensure that the I and D caches are coherent within specified region.
* This is typically used when code has been written to a memory region,
@@ -24,11 +24,11 @@
* - start - virtual start address of region
* - end - virtual end address of region
*/
-ENTRY(__flush_icache_range)
+ENTRY(__asm_flush_icache_range)
/* FALLTHROUGH */
/*
- * __flush_cache_user_range(start,end)
+ * __asm_flush_cache_user_range(start,end)
*
* Ensure that the I and D caches are coherent within specified region.
* This is typically used when code has been written to a memory region,
@@ -37,8 +37,7 @@ ENTRY(__flush_icache_range)
* - start - virtual start address of region
* - end - virtual end address of region
*/
-ENTRY(__flush_cache_user_range)
- uaccess_ttbr0_enable x2, x3, x4
+ENTRY(__asm_flush_cache_user_range)
alternative_if ARM64_HAS_CACHE_IDC
dsb ishst
b 7f
@@ -60,41 +59,32 @@ alternative_if ARM64_HAS_CACHE_DIC
alternative_else_nop_endif
invalidate_icache_by_line x0, x1, x2, x3, 9f
8: mov x0, #0
-1:
- uaccess_ttbr0_disable x1, x2
- ret
-9:
- mov x0, #-EFAULT
+1: ret
+9: mov x0, #-EFAULT
b 1b
-ENDPROC(__flush_icache_range)
-ENDPROC(__flush_cache_user_range)
+ENDPROC(__asm_flush_icache_range)
+ENDPROC(__asm_flush_cache_user_range)
/*
- * invalidate_icache_range(start,end)
+ * __asm_invalidate_icache_range(start,end)
*
* Ensure that the I cache is invalid within specified region.
*
* - start - virtual start address of region
* - end - virtual end address of region
*/
-ENTRY(invalidate_icache_range)
+ENTRY(__asm_invalidate_icache_range)
alternative_if ARM64_HAS_CACHE_DIC
mov x0, xzr
isb
ret
alternative_else_nop_endif
-
- uaccess_ttbr0_enable x2, x3, x4
-
invalidate_icache_by_line x0, x1, x2, x3, 2f
mov x0, xzr
-1:
- uaccess_ttbr0_disable x1, x2
- ret
-2:
- mov x0, #-EFAULT
+1: ret
+2: mov x0, #-EFAULT
b 1b
-ENDPROC(invalidate_icache_range)
+ENDPROC(__asm_invalidate_icache_range)
/*
* __flush_dcache_area(kaddr, size)
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index ac485163a4a7..b23f34d23f31 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -75,7 +75,7 @@ EXPORT_SYMBOL(flush_dcache_page);
/*
* Additional functions defined in assembly.
*/
-EXPORT_SYMBOL(__flush_icache_range);
+EXPORT_SYMBOL(__asm_flush_icache_range);
#ifdef CONFIG_ARCH_HAS_PMEM_API
void arch_wb_cache_pmem(void *addr, size_t size)
--
2.17.1
* [PATCH v5 4/6] arm64: remove __asm_flush_icache_range
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
` (2 preceding siblings ...)
2020-01-02 21:13 ` [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 6/6] arm64: remove the rest of asm-uaccess.h Pavel Tatashin
5 siblings, 0 replies; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
__asm_flush_icache_range is an alias of __asm_flush_cache_user_range,
but now that these functions are called from C wrappers, the
fall-through can instead be done at a higher level.
Remove the __asm_flush_icache_range alias in assembly, and instead call
__flush_cache_user_range() from __flush_icache_range().
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
arch/arm64/include/asm/cacheflush.h | 5 +----
arch/arm64/mm/cache.S | 14 --------------
arch/arm64/mm/flush.c | 2 +-
3 files changed, 2 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index cb00c61e0bde..047af338ba15 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -61,7 +61,6 @@
* - kaddr - page address
* - size - region size
*/
-extern void __asm_flush_icache_range(unsigned long start, unsigned long end);
extern long __asm_flush_cache_user_range(unsigned long start,
unsigned long end);
extern int __asm_invalidate_icache_range(unsigned long start,
@@ -87,9 +86,7 @@ static inline long __flush_cache_user_range(unsigned long start,
static inline void __flush_icache_range(unsigned long start, unsigned long end)
{
- uaccess_ttbr0_enable();
- __asm_flush_icache_range(start, end);
- uaccess_ttbr0_disable();
+ __flush_cache_user_range(start, end);
}
static inline int invalidate_icache_range(unsigned long start,
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 602b9aa8603a..1981cbaf5d92 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -14,19 +14,6 @@
#include <asm/alternative.h>
#include <asm/asm-uaccess.h>
-/*
- * __asm_flush_icache_range(start,end)
- *
- * Ensure that the I and D caches are coherent within specified region.
- * This is typically used when code has been written to a memory region,
- * and will be executed.
- *
- * - start - virtual start address of region
- * - end - virtual end address of region
- */
-ENTRY(__asm_flush_icache_range)
- /* FALLTHROUGH */
-
/*
* __asm_flush_cache_user_range(start,end)
*
@@ -62,7 +49,6 @@ alternative_else_nop_endif
1: ret
9: mov x0, #-EFAULT
b 1b
-ENDPROC(__asm_flush_icache_range)
ENDPROC(__asm_flush_cache_user_range)
/*
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index b23f34d23f31..61521285f27d 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -75,7 +75,7 @@ EXPORT_SYMBOL(flush_dcache_page);
/*
* Additional functions defined in assembly.
*/
-EXPORT_SYMBOL(__asm_flush_icache_range);
+EXPORT_SYMBOL(__asm_flush_cache_user_range);
#ifdef CONFIG_ARCH_HAS_PMEM_API
void arch_wb_cache_pmem(void *addr, size_t size)
--
2.17.1
* [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
` (3 preceding siblings ...)
2020-01-02 21:13 ` [PATCH v5 4/6] arm64: remove __asm_flush_icache_range Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
2020-01-14 18:26 ` Will Deacon
2020-01-02 21:13 ` [PATCH v5 6/6] arm64: remove the rest of asm-uaccess.h Pavel Tatashin
5 siblings, 1 reply; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
The assembly functions __asm_flush_cache_user_range and
__asm_invalidate_icache_range have alternatives:
alternative_if ARM64_HAS_CACHE_DIC
...
alternative_if ARM64_HAS_CACHE_IDC
...
But the implementation of those alternatives is trivial, and can
therefore be done in the C inline wrappers.
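The effect of lifting these alternatives into C can be sketched as below. demo_have_cap() is a hypothetical stand-in for cpus_have_const_cap(), the barriers are reduced to comments, and the counter stands in for the remaining asm helper; this illustrates only the control flow, under those assumptions:

```c
enum { DEMO_CAP_IDC, DEMO_CAP_DIC };

static int demo_caps[2];         /* set when the CPU has the capability */
static int demo_asm_flush_calls; /* times the asm helper would be entered */

static int demo_have_cap(int cap) { return demo_caps[cap]; }

/*
 * Mirror of the early-exit logic moved into __flush_cache_user_range():
 * with both IDC and DIC, no cache maintenance by line is needed, only
 * barriers, so the asm routine is skipped entirely.
 */
static long demo_flush_cache_user_range(void)
{
	if (demo_have_cap(DEMO_CAP_IDC)) {
		/* dsb(ishst) */
		if (demo_have_cap(DEMO_CAP_DIC)) {
			/* isb() */
			return 0;        /* asm helper skipped */
		}
	}
	demo_asm_flush_calls++;          /* fall back to the asm routine */
	return 0;
}
```

Because the checks now run in C, the asm routines shrink to the unconditional by-line maintenance loops, and the alternatives machinery is no longer needed there.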
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
arch/arm64/include/asm/cacheflush.h | 19 +++++++++++++++++++
arch/arm64/mm/cache.S | 27 +++++----------------------
arch/arm64/mm/flush.c | 1 +
3 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 047af338ba15..fc5217a18398 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -77,8 +77,22 @@ static inline long __flush_cache_user_range(unsigned long start,
{
int ret;
+ if (cpus_have_const_cap(ARM64_HAS_CACHE_IDC)) {
+ dsb(ishst);
+ if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC)) {
+ isb();
+ return 0;
+ }
+ }
+
uaccess_ttbr0_enable();
ret = __asm_flush_cache_user_range(start, end);
+
+ if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC))
+ isb();
+ else
+ __asm_invalidate_icache_range(start, end);
+
uaccess_ttbr0_disable();
return ret;
@@ -94,6 +108,11 @@ static inline int invalidate_icache_range(unsigned long start,
{
int ret;
+ if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC)) {
+ isb();
+ return 0;
+ }
+
uaccess_ttbr0_enable();
ret = __asm_invalidate_icache_range(start, end);
uaccess_ttbr0_disable();
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 1981cbaf5d92..0093bb9fcd12 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -25,30 +25,18 @@
* - end - virtual end address of region
*/
ENTRY(__asm_flush_cache_user_range)
-alternative_if ARM64_HAS_CACHE_IDC
- dsb ishst
- b 7f
-alternative_else_nop_endif
dcache_line_size x2, x3
sub x3, x2, #1
bic x4, x0, x3
-1:
-user_alt 9f, "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
+1: user_alt 3f, "dc cvau, x4", "dc civac, x4", ARM64_WORKAROUND_CLEAN_CACHE
add x4, x4, x2
cmp x4, x1
b.lo 1b
dsb ish
-
-7:
-alternative_if ARM64_HAS_CACHE_DIC
- isb
- b 8f
-alternative_else_nop_endif
- invalidate_icache_by_line x0, x1, x2, x3, 9f
-8: mov x0, #0
-1: ret
-9: mov x0, #-EFAULT
- b 1b
+ mov x0, #0
+2: ret
+3: mov x0, #-EFAULT
+ b 2b
ENDPROC(__asm_flush_cache_user_range)
/*
@@ -60,11 +48,6 @@ ENDPROC(__asm_flush_cache_user_range)
* - end - virtual end address of region
*/
ENTRY(__asm_invalidate_icache_range)
-alternative_if ARM64_HAS_CACHE_DIC
- mov x0, xzr
- isb
- ret
-alternative_else_nop_endif
invalidate_icache_by_line x0, x1, x2, x3, 2f
mov x0, xzr
1: ret
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 61521285f27d..adfdacb163ad 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -76,6 +76,7 @@ EXPORT_SYMBOL(flush_dcache_page);
* Additional functions defined in assembly.
*/
EXPORT_SYMBOL(__asm_flush_cache_user_range);
+EXPORT_SYMBOL(__asm_invalidate_icache_range);
#ifdef CONFIG_ARCH_HAS_PMEM_API
void arch_wb_cache_pmem(void *addr, size_t size)
--
2.17.1
* [PATCH v5 6/6] arm64: remove the rest of asm-uaccess.h
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
` (4 preceding siblings ...)
2020-01-02 21:13 ` [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C Pavel Tatashin
@ 2020-01-02 21:13 ` Pavel Tatashin
5 siblings, 0 replies; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-02 21:13 UTC (permalink / raw)
To: pasha.tatashin, jmorris, sashal, linux-kernel, catalin.marinas,
will, steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
__uaccess_ttbr0_disable and __uaccess_ttbr0_enable are the last
two macros defined in asm-uaccess.h.
For now move them to entry.S where they are used. Eventually,
these macros should be replaced with C wrappers to reduce the
maintenance burden.
Also, once these macros are unified with their C counterparts, it
is a good idea to check that PAN is in the correct state on every
enable/disable call.
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
---
arch/arm64/include/asm/asm-uaccess.h | 39 ----------------------------
arch/arm64/kernel/entry.S | 27 ++++++++++++++++++-
arch/arm64/lib/clear_user.S | 2 +-
arch/arm64/lib/copy_from_user.S | 2 +-
arch/arm64/lib/copy_in_user.S | 2 +-
arch/arm64/lib/copy_to_user.S | 2 +-
arch/arm64/mm/cache.S | 1 -
7 files changed, 30 insertions(+), 45 deletions(-)
delete mode 100644 arch/arm64/include/asm/asm-uaccess.h
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
deleted file mode 100644
index fba2a69f7fef..000000000000
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ /dev/null
@@ -1,39 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __ASM_ASM_UACCESS_H
-#define __ASM_ASM_UACCESS_H
-
-#include <asm/alternative.h>
-#include <asm/kernel-pgtable.h>
-#include <asm/mmu.h>
-#include <asm/sysreg.h>
-#include <asm/assembler.h>
-
-/*
- * User access enabling/disabling macros.
- */
-#ifdef CONFIG_ARM64_SW_TTBR0_PAN
- .macro __uaccess_ttbr0_disable, tmp1
- mrs \tmp1, ttbr1_el1 // swapper_pg_dir
- bic \tmp1, \tmp1, #TTBR_ASID_MASK
- sub \tmp1, \tmp1, #RESERVED_TTBR0_SIZE // reserved_ttbr0 just before swapper_pg_dir
- msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
- isb
- add \tmp1, \tmp1, #RESERVED_TTBR0_SIZE
- msr ttbr1_el1, \tmp1 // set reserved ASID
- isb
- .endm
-
- .macro __uaccess_ttbr0_enable, tmp1, tmp2
- get_current_task \tmp1
- ldr \tmp1, [\tmp1, #TSK_TI_TTBR0] // load saved TTBR0_EL1
- mrs \tmp2, ttbr1_el1
- extr \tmp2, \tmp2, \tmp1, #48
- ror \tmp2, \tmp2, #16
- msr ttbr1_el1, \tmp2 // set the active ASID
- isb
- msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
- isb
- .endm
-#endif
-
-#endif
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 7c6a0a41676f..cc6c0dbb7734 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -22,8 +22,8 @@
#include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/ptrace.h>
+#include <asm/kernel-pgtable.h>
#include <asm/thread_info.h>
-#include <asm/asm-uaccess.h>
#include <asm/unistd.h>
/*
@@ -144,6 +144,31 @@ alternative_cb_end
#endif
.endm
+#ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ .macro __uaccess_ttbr0_disable, tmp1
+ mrs \tmp1, ttbr1_el1 // swapper_pg_dir
+ bic \tmp1, \tmp1, #TTBR_ASID_MASK
+ sub \tmp1, \tmp1, #RESERVED_TTBR0_SIZE // reserved_ttbr0 just before swapper_pg_dir
+ msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
+ isb
+ add \tmp1, \tmp1, #RESERVED_TTBR0_SIZE
+ msr ttbr1_el1, \tmp1 // set reserved ASID
+ isb
+ .endm
+
+ .macro __uaccess_ttbr0_enable, tmp1, tmp2
+ get_current_task \tmp1
+ ldr \tmp1, [\tmp1, #TSK_TI_TTBR0] // load saved TTBR0_EL1
+ mrs \tmp2, ttbr1_el1
+ extr \tmp2, \tmp2, \tmp1, #48
+ ror \tmp2, \tmp2, #16
+ msr ttbr1_el1, \tmp2 // set the active ASID
+ isb
+ msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
+ isb
+ .endm
+#endif
+
.macro kernel_entry, el, regsize = 64
.if \regsize == 32
mov w0, w0 // zero upper 32 bits of x0
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index aeafc03e961a..b0b4a86a09e2 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -6,7 +6,7 @@
*/
#include <linux/linkage.h>
-#include <asm/asm-uaccess.h>
+#include <asm/alternative.h>
#include <asm/assembler.h>
.text
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index ebb3c06cbb5d..142bc7505518 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -5,7 +5,7 @@
#include <linux/linkage.h>
-#include <asm/asm-uaccess.h>
+#include <asm/alternative.h>
#include <asm/assembler.h>
#include <asm/cache.h>
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 3d8153a1ebce..04dc48ca26f7 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -7,7 +7,7 @@
#include <linux/linkage.h>
-#include <asm/asm-uaccess.h>
+#include <asm/alternative.h>
#include <asm/assembler.h>
#include <asm/cache.h>
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 357eae2c18eb..8f3218ae88ab 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -5,7 +5,7 @@
#include <linux/linkage.h>
-#include <asm/asm-uaccess.h>
+#include <asm/alternative.h>
#include <asm/assembler.h>
#include <asm/cache.h>
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 0093bb9fcd12..627be857b8d0 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -12,7 +12,6 @@
#include <asm/assembler.h>
#include <asm/cpufeature.h>
#include <asm/alternative.h>
-#include <asm/asm-uaccess.h>
/*
* __asm_flush_cache_user_range(start,end)
--
2.17.1
* Re: [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards
2020-01-02 21:13 ` [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards Pavel Tatashin
@ 2020-01-06 17:18 ` Stefano Stabellini
2020-01-08 17:59 ` Pavel Tatashin
0 siblings, 1 reply; 11+ messages in thread
From: Stefano Stabellini @ 2020-01-06 17:18 UTC (permalink / raw)
To: Pavel Tatashin
Cc: jmorris, sashal, linux-kernel, catalin.marinas, will,
steve.capper, linux-arm-kernel, maz, james.morse,
vladimir.murzin, mark.rutland, tglx, gregkh, allison, info,
alexios.zavras, sstabellini, boris.ostrovsky, jgross, stefan,
yamada.masahiro, xen-devel, linux, andrew.cooper3, julien
On Thu, 2 Jan 2020, Pavel Tatashin wrote:
> The arm and arm64 versions of hypercall.h are missing include
> guards. These are needed because C inlines for privcmd_call are going
> to be added to these files.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Reviewed-by: Julien Grall <julien@xen.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> arch/arm/include/asm/xen/hypercall.h | 4 ++++
> arch/arm64/include/asm/xen/hypercall.h | 4 ++++
> include/xen/arm/hypercall.h | 6 +++---
> 3 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
> index 3522cbaed316..c6882bba5284 100644
> --- a/arch/arm/include/asm/xen/hypercall.h
> +++ b/arch/arm/include/asm/xen/hypercall.h
> @@ -1 +1,5 @@
> +#ifndef _ASM_ARM_XEN_HYPERCALL_H
> +#define _ASM_ARM_XEN_HYPERCALL_H
> #include <xen/arm/hypercall.h>
> +
> +#endif /* _ASM_ARM_XEN_HYPERCALL_H */
> diff --git a/arch/arm64/include/asm/xen/hypercall.h b/arch/arm64/include/asm/xen/hypercall.h
> index 3522cbaed316..c3198f9ccd2e 100644
> --- a/arch/arm64/include/asm/xen/hypercall.h
> +++ b/arch/arm64/include/asm/xen/hypercall.h
> @@ -1 +1,5 @@
> +#ifndef _ASM_ARM64_XEN_HYPERCALL_H
> +#define _ASM_ARM64_XEN_HYPERCALL_H
> #include <xen/arm/hypercall.h>
> +
> +#endif /* _ASM_ARM64_XEN_HYPERCALL_H */
> diff --git a/include/xen/arm/hypercall.h b/include/xen/arm/hypercall.h
> index b40485e54d80..babcc08af965 100644
> --- a/include/xen/arm/hypercall.h
> +++ b/include/xen/arm/hypercall.h
> @@ -30,8 +30,8 @@
> * IN THE SOFTWARE.
> */
>
> -#ifndef _ASM_ARM_XEN_HYPERCALL_H
> -#define _ASM_ARM_XEN_HYPERCALL_H
> +#ifndef _ARM_XEN_HYPERCALL_H
> +#define _ARM_XEN_HYPERCALL_H
>
> #include <linux/bug.h>
>
> @@ -88,4 +88,4 @@ MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req,
> BUG();
> }
>
> -#endif /* _ASM_ARM_XEN_HYPERCALL_H */
> +#endif /* _ARM_XEN_HYPERCALL_H */
> --
> 2.17.1
>
* Re: [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards
2020-01-06 17:18 ` Stefano Stabellini
@ 2020-01-08 17:59 ` Pavel Tatashin
0 siblings, 0 replies; 11+ messages in thread
From: Pavel Tatashin @ 2020-01-08 17:59 UTC (permalink / raw)
To: Stefano Stabellini
Cc: James Morris, Sasha Levin, LKML, Catalin Marinas, Will Deacon,
steve.capper, Linux ARM, Marc Zyngier, James Morse,
Vladimir Murzin, Mark Rutland, Thomas Gleixner,
Greg Kroah-Hartman, allison, info, alexios.zavras,
boris.ostrovsky, jgross, Stefan Agner, Masahiro Yamada,
xen-devel, Russell King - ARM Linux admin, Andrew Cooper,
Julien Grall
On Mon, Jan 6, 2020 at 12:19 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Thu, 2 Jan 2020, Pavel Tatashin wrote:
> > The arm and arm64 versions of hypercall.h are missing include
> > guards. These are needed because C inlines for privcmd_call are going
> > to be added to these files.
> >
> > Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> > Reviewed-by: Julien Grall <julien@xen.org>
>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Thank you,
Pasha
* Re: [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions
2020-01-02 21:13 ` [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions Pavel Tatashin
@ 2020-01-14 18:14 ` Will Deacon
0 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2020-01-14 18:14 UTC (permalink / raw)
To: Pavel Tatashin
Cc: jmorris, sashal, linux-kernel, catalin.marinas, steve.capper,
linux-arm-kernel, maz, james.morse, vladimir.murzin,
mark.rutland, tglx, gregkh, allison, info, alexios.zavras,
sstabellini, boris.ostrovsky, jgross, stefan, yamada.masahiro,
xen-devel, linux, andrew.cooper3, julien
On Thu, Jan 02, 2020 at 04:13:54PM -0500, Pavel Tatashin wrote:
> We currently duplicate the logic to enable/disable uaccess via TTBR0,
> with C functions and assembly macros. This is a maintenance burden
> and is liable to lead to subtle bugs, so let's get rid of the assembly
> macros, and always use the C functions. This requires refactoring
> some assembly functions to have a C wrapper.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
> arch/arm64/include/asm/asm-uaccess.h | 22 ----------------
> arch/arm64/include/asm/cacheflush.h | 39 +++++++++++++++++++++++++---
> arch/arm64/mm/cache.S | 36 ++++++++++---------------
> arch/arm64/mm/flush.c | 2 +-
> 4 files changed, 50 insertions(+), 49 deletions(-)
>
> diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
> index f68a0e64482a..fba2a69f7fef 100644
> --- a/arch/arm64/include/asm/asm-uaccess.h
> +++ b/arch/arm64/include/asm/asm-uaccess.h
> @@ -34,28 +34,6 @@
> msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
> isb
> .endm
> -
> - .macro uaccess_ttbr0_disable, tmp1, tmp2
> -alternative_if_not ARM64_HAS_PAN
> - save_and_disable_irq \tmp2 // avoid preemption
> - __uaccess_ttbr0_disable \tmp1
> - restore_irq \tmp2
> -alternative_else_nop_endif
> - .endm
> -
> - .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
> -alternative_if_not ARM64_HAS_PAN
> - save_and_disable_irq \tmp3 // avoid preemption
> - __uaccess_ttbr0_enable \tmp1, \tmp2
> - restore_irq \tmp3
> -alternative_else_nop_endif
> - .endm
> -#else
> - .macro uaccess_ttbr0_disable, tmp1, tmp2
> - .endm
> -
> - .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
> - .endm
> #endif
>
> #endif
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 665c78e0665a..cb00c61e0bde 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -61,16 +61,49 @@
> * - kaddr - page address
> * - size - region size
> */
> -extern void __flush_icache_range(unsigned long start, unsigned long end);
> -extern int invalidate_icache_range(unsigned long start, unsigned long end);
> +extern void __asm_flush_icache_range(unsigned long start, unsigned long end);
> +extern long __asm_flush_cache_user_range(unsigned long start,
> + unsigned long end);
> +extern int __asm_invalidate_icache_range(unsigned long start,
> + unsigned long end);
> extern void __flush_dcache_area(void *addr, size_t len);
> extern void __inval_dcache_area(void *addr, size_t len);
> extern void __clean_dcache_area_poc(void *addr, size_t len);
> extern void __clean_dcache_area_pop(void *addr, size_t len);
> extern void __clean_dcache_area_pou(void *addr, size_t len);
> -extern long __flush_cache_user_range(unsigned long start, unsigned long end);
> extern void sync_icache_aliases(void *kaddr, unsigned long len);
>
> +static inline long __flush_cache_user_range(unsigned long start,
> + unsigned long end)
> +{
> + int ret;
> +
> + uaccess_ttbr0_enable();
> + ret = __asm_flush_cache_user_range(start, end);
> + uaccess_ttbr0_disable();
> +
> + return ret;
> +}
> +
> +static inline void __flush_icache_range(unsigned long start, unsigned long end)
> +{
> + uaccess_ttbr0_enable();
> + __asm_flush_icache_range(start, end);
> + uaccess_ttbr0_disable();
> +}
Interesting... I don't think we should be enabling uaccess here: the
function has a void return type so we can't communicate failure back to the
caller if we fault, so my feeling is that this should only ever be called on
kernel addresses.
> +
> +static inline int invalidate_icache_range(unsigned long start,
> + unsigned long end)
> +{
> + int ret;
> +
> + uaccess_ttbr0_enable();
> + ret = __asm_invalidate_icache_range(start, end);
> + uaccess_ttbr0_disable();
> +
> + return ret;
> +}
Same here -- I don't think this is ever called on user addresses.
Can we make the return type void and drop the uaccess toggle?
Will
* Re: [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C
2020-01-02 21:13 ` [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C Pavel Tatashin
@ 2020-01-14 18:26 ` Will Deacon
0 siblings, 0 replies; 11+ messages in thread
From: Will Deacon @ 2020-01-14 18:26 UTC (permalink / raw)
To: Pavel Tatashin
Cc: jmorris, sashal, linux-kernel, catalin.marinas, steve.capper,
linux-arm-kernel, maz, james.morse, vladimir.murzin,
mark.rutland, tglx, gregkh, allison, info, alexios.zavras,
sstabellini, boris.ostrovsky, jgross, stefan, yamada.masahiro,
xen-devel, linux, andrew.cooper3, julien
On Thu, Jan 02, 2020 at 04:13:56PM -0500, Pavel Tatashin wrote:
> The assembly functions __asm_flush_cache_user_range and
> __asm_invalidate_icache_range have alternatives:
>
> alternative_if ARM64_HAS_CACHE_DIC
> ...
>
> alternative_if ARM64_HAS_CACHE_IDC
> ...
>
> But, the implementation of those alternatives is trivial and therefore
> can be done in the C inline wrappers.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> ---
> arch/arm64/include/asm/cacheflush.h | 19 +++++++++++++++++++
> arch/arm64/mm/cache.S | 27 +++++----------------------
> arch/arm64/mm/flush.c | 1 +
> 3 files changed, 25 insertions(+), 22 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
> index 047af338ba15..fc5217a18398 100644
> --- a/arch/arm64/include/asm/cacheflush.h
> +++ b/arch/arm64/include/asm/cacheflush.h
> @@ -77,8 +77,22 @@ static inline long __flush_cache_user_range(unsigned long start,
> {
> int ret;
>
> + if (cpus_have_const_cap(ARM64_HAS_CACHE_IDC)) {
> + dsb(ishst);
> + if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC)) {
> + isb();
> + return 0;
> + }
> + }
> +
> uaccess_ttbr0_enable();
> ret = __asm_flush_cache_user_range(start, end);
I don't understand this. Doesn't it mean a CPU with IDC but not DIC will
end up doing the D-cache maintenance?
Will
Thread overview: 11+ messages
2020-01-02 21:13 [PATCH v5 0/6] Use C inlines for uaccess Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 1/6] arm/arm64/xen: hypercall.h add includes guards Pavel Tatashin
2020-01-06 17:18 ` Stefano Stabellini
2020-01-08 17:59 ` Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 2/6] arm/arm64/xen: use C inlines for privcmd_call Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 3/6] arm64: remove uaccess_ttbr0 asm macros from cache functions Pavel Tatashin
2020-01-14 18:14 ` Will Deacon
2020-01-02 21:13 ` [PATCH v5 4/6] arm64: remove __asm_flush_icache_range Pavel Tatashin
2020-01-02 21:13 ` [PATCH v5 5/6] arm64: move ARM64_HAS_CACHE_DIC/_IDC from asm to C Pavel Tatashin
2020-01-14 18:26 ` Will Deacon
2020-01-02 21:13 ` [PATCH v5 6/6] arm64: remove the rest of asm-uaccess.h Pavel Tatashin