* [PATCH 00/18] arm64: invoke syscalls with pt_regs
@ 2018-05-14  9:46 ` Mark Rutland
  0 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

This series reworks arm64's syscall handling to minimize the propagation
of user-controlled register values into speculated code paths. As with
x86 [1], a wrapper is generated for each syscall, which extracts the
arguments from a struct pt_regs. During kernel entry from userspace,
general-purpose registers are zeroed.

The arm64 kernel code directly invokes some syscalls which the x86 code
doesn't, so I've added ksys_* wrappers for these, following the x86
example. The rest of the series is arm64-specific.

I've pushed the series out to my arm64/syscall-regs branch [2] on
kernel.org.

Thanks,
Mark.

[1] https://lkml.kernel.org/r/20180330093720.6780-1-linux@dominikbrodowski.net
[2] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git

Mark Rutland (18):
  arm64: consistently use unsigned long for thread flags
  arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  arm64: introduce sysreg_clear_set()
  arm64: kill config_sctlr_el1()
  arm64: kill change_cpacr()
  arm64: move sve_user_{enable,disable} to <asm/fpsimd.h>
  arm64: remove sigreturn wrappers
  arm64: convert raw syscall invocation to C
  arm64: convert syscall trace logic to C
  arm64: convert native/compat syscall entry to C
  arm64: zero GPRs upon entry from EL0
  kernel: add ksys_personality()
  kernel: add kcompat_sys_{f,}statfs64()
  arm64: remove in-kernel call to sys_personality()
  arm64: use {COMPAT,}SYSCALL_DEFINE0 for sigreturn
  arm64: use SYSCALL_DEFINE6() for mmap
  arm64: convert compat wrappers to C
  arm64: implement syscall wrappers

 arch/arm64/Kconfig                       |   1 +
 arch/arm64/include/asm/fpsimd.h          |  17 ++++-
 arch/arm64/include/asm/syscall_wrapper.h |  80 ++++++++++++++++++++
 arch/arm64/include/asm/sysreg.h          |  33 ++++----
 arch/arm64/include/asm/unistd32.h        |  26 +++----
 arch/arm64/kernel/Makefile               |   5 +-
 arch/arm64/kernel/armv8_deprecated.c     |   8 +-
 arch/arm64/kernel/cpu_errata.c           |   3 +-
 arch/arm64/kernel/entry.S                | 126 +++----------------------------
 arch/arm64/kernel/entry32.S              | 121 -----------------------------
 arch/arm64/kernel/fpsimd.c               |  20 -----
 arch/arm64/kernel/signal.c               |   5 +-
 arch/arm64/kernel/signal32.c             |   6 +-
 arch/arm64/kernel/sys.c                  |  19 +++--
 arch/arm64/kernel/sys32.c                | 116 ++++++++++++++++++++++++----
 arch/arm64/kernel/syscall.c              | 113 +++++++++++++++++++++++++++
 arch/arm64/kernel/traps.c                |   4 +-
 arch/arm64/mm/fault.c                    |   2 +-
 fs/statfs.c                              |  14 +++-
 include/linux/syscalls.h                 |   9 +++
 kernel/exec_domain.c                     |   7 +-
 21 files changed, 411 insertions(+), 324 deletions(-)
 create mode 100644 arch/arm64/include/asm/syscall_wrapper.h
 delete mode 100644 arch/arm64/kernel/entry32.S
 create mode 100644 arch/arm64/kernel/syscall.c

-- 
2.11.0

^ permalink raw reply	[flat|nested] 126+ messages in thread

* [PATCH 01/18] arm64: consistently use unsigned long for thread flags
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

In do_notify_resume, we manipulate thread_flags as a 32-bit unsigned
int, whereas thread_info::flags is a 64-bit unsigned long, and elsewhere
(e.g. in the entry assembly) we manipulate the flags as a 64-bit
quantity.

For consistency, and to avoid problems if we end up with more than 32
flags, let's make do_notify_resume take the flags as a 64-bit unsigned
long.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/signal.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 154b7d30145d..8e624fec4707 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -896,7 +896,7 @@ static void do_signal(struct pt_regs *regs)
 }
 
 asmlinkage void do_notify_resume(struct pt_regs *regs,
-				 unsigned int thread_flags)
+				 unsigned long thread_flags)
 {
 	/*
 	 * The assembly code enters us with IRQs off, but it hasn't
-- 
2.11.0


* [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Currently we assert that the SCTLR_EL{1,2}_{SET,CLEAR} bits are
self-consistent with an assertion in config_sctlr_el1(). This is a bit
unusual, since config_sctlr_el1() doesn't make use of these definitions,
and is far away from the definitions themselves.

We can use the CPP #error directive to have equivalent assertions in
<asm/sysreg.h>, next to the definitions of the set/clear bits, which is
a bit clearer and simpler.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/sysreg.h | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 6171178075dc..bd1d1194a5e7 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -452,9 +452,9 @@
 			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
 			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
 
-/* Check all the bits are accounted for */
-#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
-
+#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
+#error "Inconsistent SCTLR_EL2 set/clear bits"
+#endif
 
 /* SCTLR_EL1 specific flags. */
 #define SCTLR_EL1_UCI		(1 << 26)
@@ -492,8 +492,9 @@
 			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
 			 SCTLR_EL1_RES0)
 
-/* Check all the bits are accounted for */
-#define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
+#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffff
+#error "Inconsistent SCTLR_EL1 set/clear bits"
+#endif
 
 /* id_aa64isar0 */
 #define ID_AA64ISAR0_TS_SHIFT		52
@@ -732,9 +733,6 @@ static inline void config_sctlr_el1(u32 clear, u32 set)
 {
 	u32 val;
 
-	SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS;
-	SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS;
-
 	val = read_sysreg(sctlr_el1);
 	val &= ~clear;
 	val |= set;
-- 
2.11.0


* [PATCH 03/18] arm64: introduce sysreg_clear_set()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Currently we have a couple of helpers to manipulate bits in particular
sysregs:

 * config_sctlr_el1(u32 clear, u32 set)

 * change_cpacr(u64 val, u64 mask)

The parameters of these differ in naming convention, order, and size,
which is unfortunate. They also differ slightly in behaviour, as
change_cpacr() skips the sysreg write if the bits are unchanged, which
is a useful optimization when sysreg writes are expensive.

Before we gain yet another sysreg manipulation function, let's
unify these with a common helper, providing a consistent order for
clear/set operands, and the write skipping behaviour from
change_cpacr(). Code will be migrated to the new helper in subsequent
patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/sysreg.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index bd1d1194a5e7..b52762769329 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -729,6 +729,17 @@ asm(
 	asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val));	\
 } while (0)
 
+/*
+ * Modify bits in a sysreg. Bits in the clear mask are zeroed, then bits in the
+ * set mask are set. Other bits are left as-is.
+ */
+#define sysreg_clear_set(sysreg, clear, set) do {			\
+	u64 __scs_val = read_sysreg(sysreg);				\
+	u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set);		\
+	if (__scs_new != __scs_val)					\
+		write_sysreg(__scs_new, sysreg);			\
+} while (0)
+
 static inline void config_sctlr_el1(u32 clear, u32 set)
 {
 	u32 val;
-- 
2.11.0


* [PATCH 04/18] arm64: kill config_sctlr_el1()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Now that we have sysreg_clear_set(), we can consistently use this
instead of config_sctlr_el1().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/sysreg.h      | 10 ----------
 arch/arm64/kernel/armv8_deprecated.c |  8 ++++----
 arch/arm64/kernel/cpu_errata.c       |  3 +--
 arch/arm64/kernel/traps.c            |  2 +-
 arch/arm64/mm/fault.c                |  2 +-
 5 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b52762769329..3493c7048994 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -740,16 +740,6 @@ asm(
 		write_sysreg(__scs_new, sysreg);			\
 } while (0)
 
-static inline void config_sctlr_el1(u32 clear, u32 set)
-{
-	u32 val;
-
-	val = read_sysreg(sctlr_el1);
-	val &= ~clear;
-	val |= set;
-	write_sysreg(val, sctlr_el1);
-}
-
 #endif
 
 #endif	/* __ASM_SYSREG_H */
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 6e47fc3ab549..778cf810f0d8 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -512,9 +512,9 @@ static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
 static int cp15_barrier_set_hw_mode(bool enable)
 {
 	if (enable)
-		config_sctlr_el1(0, SCTLR_EL1_CP15BEN);
+		sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_CP15BEN);
 	else
-		config_sctlr_el1(SCTLR_EL1_CP15BEN, 0);
+		sysreg_clear_set(sctlr_el1, SCTLR_EL1_CP15BEN, 0);
 	return 0;
 }
 
@@ -549,9 +549,9 @@ static int setend_set_hw_mode(bool enable)
 		return -EINVAL;
 
 	if (enable)
-		config_sctlr_el1(SCTLR_EL1_SED, 0);
+		sysreg_clear_set(sctlr_el1, SCTLR_EL1_SED, 0);
 	else
-		config_sctlr_el1(0, SCTLR_EL1_SED);
+		sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_SED);
 	return 0;
 }
 
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index a900befadfe8..879daf8ea0f8 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -74,8 +74,7 @@ has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
 static void
 cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
 {
-	/* Clear SCTLR_EL1.UCT */
-	config_sctlr_el1(SCTLR_EL1_UCT, 0);
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
 }
 
 atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 8bbdc17e49df..b8d3e0d8c616 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -411,7 +411,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
 
 void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
 {
-	config_sctlr_el1(SCTLR_EL1_UCI, 0);
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
 }
 
 #define __user_cache_maint(insn, address, res)			\
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 4165485e8b6e..e7977d932038 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -812,7 +812,7 @@ void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
 	 */
 	WARN_ON_ONCE(in_interrupt());
 
-	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
 	asm(SET_PSTATE_PAN(1));
 }
 #endif /* CONFIG_ARM64_PAN */
-- 
2.11.0


* [PATCH 05/18] arm64: kill change_cpacr()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Now that we have sysreg_clear_set(), we can use this instead of
change_cpacr().

Note that the order of the set and clear arguments differs between
change_cpacr() and sysreg_clear_set(), so these are flipped as part of
the conversion. Also, sve_user_enable() redundantly clears
CPACR_EL1_ZEN_EL0EN before setting it; this is removed for clarity.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/fpsimd.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 87a35364e750..088940387a4d 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -172,23 +172,14 @@ static void *sve_pffr(struct task_struct *task)
 		sve_ffr_offset(task->thread.sve_vl);
 }
 
-static void change_cpacr(u64 val, u64 mask)
-{
-	u64 cpacr = read_sysreg(CPACR_EL1);
-	u64 new = (cpacr & ~mask) | val;
-
-	if (new != cpacr)
-		write_sysreg(new, CPACR_EL1);
-}
-
 static void sve_user_disable(void)
 {
-	change_cpacr(0, CPACR_EL1_ZEN_EL0EN);
+	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
 }
 
 static void sve_user_enable(void)
 {
-	change_cpacr(CPACR_EL1_ZEN_EL0EN, CPACR_EL1_ZEN_EL0EN);
+	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
 }
 
 /*
-- 
2.11.0


* [PATCH 06/18] arm64: move sve_user_{enable,disable} to <asm/fpsimd.h>
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

In subsequent patches, we'll want to make use of sve_user_enable() and
sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
<asm/fpsimd.h> where we can make use of them.

To avoid ifdeffery in sequences like:

if (system_supports_sve() && some_condition)
	sve_user_disable();

... empty stubs are provided when support for SVE is not enabled.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/fpsimd.h | 17 ++++++++++++++++-
 arch/arm64/kernel/fpsimd.c      | 11 -----------
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index aa7162ae93e3..7377d7593c06 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -16,11 +16,13 @@
 #ifndef __ASM_FP_H
 #define __ASM_FP_H
 
-#include <asm/ptrace.h>
 #include <asm/errno.h>
+#include <asm/ptrace.h>
+#include <asm/sysreg.h>
 
 #ifndef __ASSEMBLY__
 
+#include <linux/build_bug.h>
 #include <linux/cache.h>
 #include <linux/init.h>
 #include <linux/stddef.h>
@@ -81,6 +83,16 @@ extern int sve_set_vector_length(struct task_struct *task,
 extern int sve_set_current_vl(unsigned long arg);
 extern int sve_get_current_vl(void);
 
+static inline void sve_user_disable(void)
+{
+	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
+}
+
+static inline void sve_user_enable(void)
+{
+	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
+}
+
 /*
  * Probing and setup functions.
  * Calls to these functions must be serialised with one another.
@@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
 	return -EINVAL;
 }
 
+static inline void sve_user_disable(void) { }
+static inline void sve_user_enable(void) { }
+
 static inline void sve_init_vq_map(void) { }
 static inline void sve_update_vq_map(void) { }
 static inline int sve_verify_vq_map(void) { return 0; }
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 088940387a4d..79a81c7d85c6 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
 	__sve_free(task);
 }
 
-
 /* Offset of FFR in the SVE register dump */
 static size_t sve_ffr_offset(int vl)
 {
@@ -172,16 +171,6 @@ static void *sve_pffr(struct task_struct *task)
 		sve_ffr_offset(task->thread.sve_vl);
 }
 
-static void sve_user_disable(void)
-{
-	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
-}
-
-static void sve_user_enable(void)
-{
-	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
-}
-
 /*
  * TIF_SVE controls whether a task can use SVE without trapping while
  * in userspace, and also the way a task's FPSIMD/SVE state is stored
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 07/18] arm64: remove sigreturn wrappers
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

The arm64 sigreturn* syscall handlers are non-standard. Rather than
taking a number of user parameters in registers as per the AAPCS,
they expect the pt_regs as their sole argument.

To make this work, we override the syscall definitions to invoke
wrappers written in assembly, which move the SP into x0 and branch to
their respective C functions.

On other architectures (such as x86), the sigreturn* functions take no
argument and instead use current_pt_regs() to acquire the user
registers. This requires less boilerplate code, and allows for other
features such as interposing C code in this path.

This patch takes the same approach for arm64.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/unistd32.h |  4 ++--
 arch/arm64/kernel/entry.S         |  8 --------
 arch/arm64/kernel/entry32.S       | 10 ----------
 arch/arm64/kernel/signal.c        |  3 ++-
 arch/arm64/kernel/signal32.c      |  6 ++++--
 arch/arm64/kernel/sys.c           |  3 +--
 arch/arm64/kernel/sys32.c         |  4 ++--
 7 files changed, 11 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index ef292160748c..ab95554b1734 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -260,7 +260,7 @@ __SYSCALL(117, sys_ni_syscall)
 #define __NR_fsync 118
 __SYSCALL(__NR_fsync, sys_fsync)
 #define __NR_sigreturn 119
-__SYSCALL(__NR_sigreturn, compat_sys_sigreturn_wrapper)
+__SYSCALL(__NR_sigreturn, compat_sys_sigreturn)
 #define __NR_clone 120
 __SYSCALL(__NR_clone, sys_clone)
 #define __NR_setdomainname 121
@@ -368,7 +368,7 @@ __SYSCALL(__NR_getresgid, sys_getresgid16)
 #define __NR_prctl 172
 __SYSCALL(__NR_prctl, sys_prctl)
 #define __NR_rt_sigreturn 173
-__SYSCALL(__NR_rt_sigreturn, compat_sys_rt_sigreturn_wrapper)
+__SYSCALL(__NR_rt_sigreturn, compat_sys_rt_sigreturn)
 #define __NR_rt_sigaction 174
 __SYSCALL(__NR_rt_sigaction, compat_sys_rt_sigaction)
 #define __NR_rt_sigprocmask 175
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index ec2ee720e33e..08ea3cbfb08f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1108,14 +1108,6 @@ __entry_tramp_data_start:
 #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
 
 /*
- * Special system call wrappers.
- */
-ENTRY(sys_rt_sigreturn_wrapper)
-	mov	x0, sp
-	b	sys_rt_sigreturn
-ENDPROC(sys_rt_sigreturn_wrapper)
-
-/*
  * Register switch for AArch64. The callee-saved registers need to be saved
  * and restored. On entry:
  *   x0 = previous task_struct (must be preserved across the switch)
diff --git a/arch/arm64/kernel/entry32.S b/arch/arm64/kernel/entry32.S
index f332d5d1f6b4..f9461696dde4 100644
--- a/arch/arm64/kernel/entry32.S
+++ b/arch/arm64/kernel/entry32.S
@@ -30,16 +30,6 @@
  * System call wrappers for the AArch32 compatibility layer.
  */
 
-ENTRY(compat_sys_sigreturn_wrapper)
-	mov	x0, sp
-	b	compat_sys_sigreturn
-ENDPROC(compat_sys_sigreturn_wrapper)
-
-ENTRY(compat_sys_rt_sigreturn_wrapper)
-	mov	x0, sp
-	b	compat_sys_rt_sigreturn
-ENDPROC(compat_sys_rt_sigreturn_wrapper)
-
 ENTRY(compat_sys_statfs64_wrapper)
 	mov	w3, #84
 	cmp	w1, #88
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 8e624fec4707..caa7a68cf2d2 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -538,8 +538,9 @@ static int restore_sigframe(struct pt_regs *regs,
 	return err;
 }
 
-asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
+asmlinkage long sys_rt_sigreturn(void)
 {
+	struct pt_regs *regs = current_pt_regs();
 	struct rt_sigframe __user *frame;
 
 	/* Always make any pending restarted system calls return -EINTR */
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 77b91f478995..cb10588a7cb2 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -282,8 +282,9 @@ static int compat_restore_sigframe(struct pt_regs *regs,
 	return err;
 }
 
-asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
+asmlinkage int compat_sys_sigreturn(void)
 {
+	struct pt_regs *regs = current_pt_regs();
 	struct compat_sigframe __user *frame;
 
 	/* Always make any pending restarted system calls return -EINTR */
@@ -312,8 +313,9 @@ asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
 	return 0;
 }
 
-asmlinkage int compat_sys_rt_sigreturn(struct pt_regs *regs)
+asmlinkage int compat_sys_rt_sigreturn(void)
 {
+	struct pt_regs *regs = current_pt_regs();
 	struct compat_rt_sigframe __user *frame;
 
 	/* Always make any pending restarted system calls return -EINTR */
diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
index 72981bae10eb..31045f3fed92 100644
--- a/arch/arm64/kernel/sys.c
+++ b/arch/arm64/kernel/sys.c
@@ -48,8 +48,7 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
 /*
  * Wrappers to pass the pt_regs argument.
  */
-asmlinkage long sys_rt_sigreturn_wrapper(void);
-#define sys_rt_sigreturn	sys_rt_sigreturn_wrapper
+asmlinkage long sys_rt_sigreturn(void);
 #define sys_personality		sys_arm64_personality
 
 #undef __SYSCALL
diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
index a40b1343b819..1ef103c95410 100644
--- a/arch/arm64/kernel/sys32.c
+++ b/arch/arm64/kernel/sys32.c
@@ -25,8 +25,8 @@
 #include <linux/compiler.h>
 #include <linux/syscalls.h>
 
-asmlinkage long compat_sys_sigreturn_wrapper(void);
-asmlinkage long compat_sys_rt_sigreturn_wrapper(void);
+asmlinkage long compat_sys_sigreturn(void);
+asmlinkage long compat_sys_rt_sigreturn(void);
 asmlinkage long compat_sys_statfs64_wrapper(void);
 asmlinkage long compat_sys_fstatfs64_wrapper(void);
 asmlinkage long compat_sys_pread64_wrapper(void);
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

As a first step towards invoking syscalls with a pt_regs argument,
convert the raw syscall invocation logic to C. We end up with a bit more
register shuffling, but the unified invocation logic means we can unify
the tracing paths, too.

This only converts the invocation of the syscall. The rest of the
syscall triage and tracing is left in assembly for now, and will be
converted in subsequent patches.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/Makefile  |  3 ++-
 arch/arm64/kernel/entry.S   | 36 ++++++++++--------------------------
 arch/arm64/kernel/syscall.c | 29 +++++++++++++++++++++++++++++
 3 files changed, 41 insertions(+), 27 deletions(-)
 create mode 100644 arch/arm64/kernel/syscall.c

diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index bf825f38d206..c22e8ace5ea3 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,7 +18,8 @@ arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o		\
 			   hyp-stub.o psci.o cpu_ops.o insn.o	\
 			   return_address.o cpuinfo.o cpu_errata.o		\
 			   cpufeature.o alternative.o cacheinfo.o		\
-			   smp.o smp_spin_table.o topology.o smccc-call.o
+			   smp.o smp_spin_table.o topology.o smccc-call.o	\
+			   syscall.o
 
 extra-$(CONFIG_EFI)			:= efi-entry.o
 
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 08ea3cbfb08f..d6e057500eaf 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -873,7 +873,6 @@ ENDPROC(el0_error)
  */
 ret_fast_syscall:
 	disable_daif
-	str	x0, [sp, #S_X0]			// returned x0
 	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
 	and	x2, x1, #_TIF_SYSCALL_WORK
 	cbnz	x2, ret_fast_syscall_trace
@@ -946,15 +945,11 @@ el0_svc_naked:					// compat entry point
 
 	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
 	b.ne	__sys_trace
-	cmp     wscno, wsc_nr			// check upper syscall limit
-	b.hs	ni_sys
-	mask_nospec64 xscno, xsc_nr, x19	// enforce bounds for syscall number
-	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
-	blr	x16				// call sys_* routine
-	b	ret_fast_syscall
-ni_sys:
 	mov	x0, sp
-	bl	do_ni_syscall
+	mov	w1, wscno
+	mov	w2, wsc_nr
+	mov	x3, stbl
+	bl	invoke_syscall
 	b	ret_fast_syscall
 ENDPROC(el0_svc)
 
@@ -971,29 +966,18 @@ __sys_trace:
 	bl	syscall_trace_enter
 	cmp	w0, #NO_SYSCALL			// skip the syscall?
 	b.eq	__sys_trace_return_skipped
-	mov	wscno, w0			// syscall number (possibly new)
-	mov	x1, sp				// pointer to regs
-	cmp	wscno, wsc_nr			// check upper syscall limit
-	b.hs	__ni_sys_trace
-	ldp	x0, x1, [sp]			// restore the syscall args
-	ldp	x2, x3, [sp, #S_X2]
-	ldp	x4, x5, [sp, #S_X4]
-	ldp	x6, x7, [sp, #S_X6]
-	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
-	blr	x16				// call sys_* routine
 
-__sys_trace_return:
-	str	x0, [sp, #S_X0]			// save returned x0
+	mov	x0, sp
+	mov	w1, wscno
+	mov w2, wsc_nr
+	mov	x3, stbl
+	bl	invoke_syscall
+
 __sys_trace_return_skipped:
 	mov	x0, sp
 	bl	syscall_trace_exit
 	b	ret_to_user
 
-__ni_sys_trace:
-	mov	x0, sp
-	bl	do_ni_syscall
-	b	__sys_trace_return
-
 	.popsection				// .entry.text
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
new file mode 100644
index 000000000000..58d7569f47df
--- /dev/null
+++ b/arch/arm64/kernel/syscall.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/nospec.h>
+#include <linux/ptrace.h>
+
+long do_ni_syscall(struct pt_regs *regs);
+
+typedef long (*syscall_fn_t)(unsigned long, unsigned long,
+			     unsigned long, unsigned long,
+			     unsigned long, unsigned long);
+
+static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
+{
+	regs->regs[0] = syscall_fn(regs->regs[0], regs->regs[1],
+				   regs->regs[2], regs->regs[3],
+				   regs->regs[4], regs->regs[5]);
+}
+
+asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
+			       syscall_fn_t syscall_table[])
+{
+	if (scno < sc_nr) {
+		syscall_fn_t syscall_fn;
+		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
+		__invoke_syscall(regs, syscall_fn);
+	} else {
+		regs->regs[0] = do_ni_syscall(regs);
+	}
+}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 09/18] arm64: convert syscall trace logic to C
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Currently syscall tracing is a tricky assembly state machine, which can
be rather difficult to follow, and even harder to modify. Before we
start fiddling with it for pt_regs syscalls, let's convert it to C.

This is not intended to have any functional change.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S   | 53 ++----------------------------------------
 arch/arm64/kernel/syscall.c | 56 +++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 56 insertions(+), 53 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index d6e057500eaf..5c60369b52fc 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -866,24 +866,6 @@ el0_error_naked:
 	b	ret_to_user
 ENDPROC(el0_error)
 
-
-/*
- * This is the fast syscall return path.  We do as little as possible here,
- * and this includes saving x0 back into the kernel stack.
- */
-ret_fast_syscall:
-	disable_daif
-	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
-	and	x2, x1, #_TIF_SYSCALL_WORK
-	cbnz	x2, ret_fast_syscall_trace
-	and	x2, x1, #_TIF_WORK_MASK
-	cbnz	x2, work_pending
-	enable_step_tsk x1, x2
-	kernel_exit 0
-ret_fast_syscall_trace:
-	enable_daif
-	b	__sys_trace_return_skipped	// we already saved x0
-
 /*
  * Ok, we need to do extra processing, enter the slow path.
  */
@@ -939,44 +921,13 @@ alternative_else_nop_endif
 #endif
 
 el0_svc_naked:					// compat entry point
-	stp	x0, xscno, [sp, #S_ORIG_X0]	// save the original x0 and syscall number
-	enable_daif
-	ct_user_exit 1
-
-	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
-	b.ne	__sys_trace
 	mov	x0, sp
 	mov	w1, wscno
 	mov	w2, wsc_nr
 	mov	x3, stbl
-	bl	invoke_syscall
-	b	ret_fast_syscall
-ENDPROC(el0_svc)
-
-	/*
-	 * This is the really slow path.  We're going to be doing context
-	 * switches, and waiting for our parent to respond.
-	 */
-__sys_trace:
-	cmp     wscno, #NO_SYSCALL		// user-issued syscall(-1)?
-	b.ne	1f
-	mov	x0, #-ENOSYS			// set default errno if so
-	str	x0, [sp, #S_X0]
-1:	mov	x0, sp
-	bl	syscall_trace_enter
-	cmp	w0, #NO_SYSCALL			// skip the syscall?
-	b.eq	__sys_trace_return_skipped
-
-	mov	x0, sp
-	mov	w1, wscno
-	mov w2, wsc_nr
-	mov	x3, stbl
-	bl	invoke_syscall
-
-__sys_trace_return_skipped:
-	mov	x0, sp
-	bl	syscall_trace_exit
+	bl	el0_svc_common
 	b	ret_to_user
+ENDPROC(el0_svc)
 
 	.popsection				// .entry.text
 
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 58d7569f47df..5df857e32b48 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -1,8 +1,13 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/compiler.h>
+#include <linux/context_tracking.h>
 #include <linux/nospec.h>
 #include <linux/ptrace.h>
 
+#include <asm/daifflags.h>
+#include <asm/thread_info.h>
+
 long do_ni_syscall(struct pt_regs *regs);
 
 typedef long (*syscall_fn_t)(unsigned long, unsigned long,
@@ -16,8 +21,8 @@ static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
 				   regs->regs[4], regs->regs[5]);
 }
 
-asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
-			       syscall_fn_t syscall_table[])
+static void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
+			   syscall_fn_t syscall_table[])
 {
 	if (scno < sc_nr) {
 		syscall_fn_t syscall_fn;
@@ -27,3 +32,50 @@ asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
 		regs->regs[0] = do_ni_syscall(regs);
 	}
 }
+
+static inline bool has_syscall_work(unsigned long flags)
+{
+	return unlikely(flags & _TIF_SYSCALL_WORK);
+}
+
+int syscall_trace_enter(struct pt_regs *regs);
+void syscall_trace_exit(struct pt_regs *regs);
+
+asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+			       syscall_fn_t syscall_table[])
+{
+	unsigned long flags = current_thread_info()->flags;
+
+	regs->orig_x0 = regs->regs[0];
+	regs->syscallno = scno;
+
+	local_daif_restore(DAIF_PROCCTX);
+	user_exit();
+
+	if (has_syscall_work(flags)) {
+		/* set default errno for user-issued syscall(-1) */
+		if (scno == NO_SYSCALL)
+			regs->regs[0] = -ENOSYS;
+		scno = syscall_trace_enter(regs);
+		if (scno == NO_SYSCALL)
+			goto trace_exit;
+	}
+
+	invoke_syscall(regs, scno, sc_nr, syscall_table);
+
+	/*
+	 * The tracing status may have changed under our feet, so we have to
+	 * check again. However, if we were tracing entry, then we always trace
+	 * exit regardless, as the old entry assembly did.
+	 */
+	if (!has_syscall_work(flags)) {
+		local_daif_mask();
+		flags = current_thread_info()->flags;
+		if (!has_syscall_work(flags))
+			return;
+		local_daif_restore(DAIF_PROCCTX);
+	}
+
+trace_exit:
+	syscall_trace_exit(regs);
+}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 10/18] arm64: convert native/compat syscall entry to C
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Now that the syscall invocation logic is in C, we can migrate the rest
of the syscall entry logic over, so that the entry assembly needn't look
at the register values at all.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S   | 42 ++++--------------------------------------
 arch/arm64/kernel/syscall.c | 40 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5c60369b52fc..13afefbf608f 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -690,14 +690,9 @@ el0_sync_compat:
 	b.ge	el0_dbg
 	b	el0_inv
 el0_svc_compat:
-	/*
-	 * AArch32 syscall handling
-	 */
-	ldr	x16, [tsk, #TSK_TI_FLAGS]	// load thread flags
-	adrp	stbl, compat_sys_call_table	// load compat syscall table pointer
-	mov	wscno, w7			// syscall number in w7 (r7)
-	mov     wsc_nr, #__NR_compat_syscalls
-	b	el0_svc_naked
+	mov	x0, sp
+	bl	el0_svc_compat_handler
+	b	ret_to_user
 
 	.align	6
 el0_irq_compat:
@@ -895,37 +890,8 @@ ENDPROC(ret_to_user)
  */
 	.align	6
 el0_svc:
-	ldr	x16, [tsk, #TSK_TI_FLAGS]	// load thread flags
-	adrp	stbl, sys_call_table		// load syscall table pointer
-	mov	wscno, w8			// syscall number in w8
-	mov	wsc_nr, #__NR_syscalls
-
-#ifdef CONFIG_ARM64_SVE
-alternative_if_not ARM64_SVE
-	b	el0_svc_naked
-alternative_else_nop_endif
-	tbz	x16, #TIF_SVE, el0_svc_naked	// Skip unless TIF_SVE set:
-	bic	x16, x16, #_TIF_SVE		// discard SVE state
-	str	x16, [tsk, #TSK_TI_FLAGS]
-
-	/*
-	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
-	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
-	 * happens if a context switch or kernel_neon_begin() or context
-	 * modification (sigreturn, ptrace) intervenes.
-	 * So, ensure that CPACR_EL1 is already correct for the fast-path case:
-	 */
-	mrs	x9, cpacr_el1
-	bic	x9, x9, #CPACR_EL1_ZEN_EL0EN	// disable SVE for el0
-	msr	cpacr_el1, x9			// synchronised by eret to el0
-#endif
-
-el0_svc_naked:					// compat entry point
 	mov	x0, sp
-	mov	w1, wscno
-	mov	w2, wsc_nr
-	mov	x3, stbl
-	bl	el0_svc_common
+	bl	el0_svc_handler
 	b	ret_to_user
 ENDPROC(el0_svc)
 
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 5df857e32b48..4706f841e758 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -6,7 +6,9 @@
 #include <linux/ptrace.h>
 
 #include <asm/daifflags.h>
+#include <asm/fpsimd.h>
 #include <asm/thread_info.h>
+#include <asm/unistd.h>
 
 long do_ni_syscall(struct pt_regs *regs);
 
@@ -41,8 +43,8 @@ static inline bool has_syscall_work(unsigned long flags)
 int syscall_trace_enter(struct pt_regs *regs);
 void syscall_trace_exit(struct pt_regs *regs);
 
-asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
-			       syscall_fn_t syscall_table[])
+static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
+			   syscall_fn_t syscall_table[])
 {
 	unsigned long flags = current_thread_info()->flags;
 
@@ -79,3 +81,37 @@ asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 trace_exit:
 	syscall_trace_exit(regs);
 }
+
+static inline void sve_user_reset(void)
+{
+	if (!system_supports_sve())
+		return;
+
+	/*
+	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
+	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
+	 * happens if a context switch or kernel_neon_begin() or context
+	 * modification (sigreturn, ptrace) intervenes.
+	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
+	 */
+	if (test_and_clear_thread_flag(TIF_SVE))
+		sve_user_disable();
+}
+
+extern syscall_fn_t sys_call_table[];
+
+asmlinkage void el0_svc_handler(struct pt_regs *regs)
+{
+	sve_user_reset();
+	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
+}
+
+#ifdef CONFIG_COMPAT
+extern syscall_fn_t compat_sys_call_table[];
+
+asmlinkage void el0_svc_compat_handler(struct pt_regs *regs)
+{
+	el0_svc_common(regs, regs->regs[7], __NR_compat_syscalls,
+		       compat_sys_call_table);
+}
+#endif
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 11/18] arm64: zero GPRs upon entry from EL0
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

We can zero GPRs x0 - x29 upon entry from EL0 to make it harder for
userspace to control values consumed by speculative gadgets.

We don't blat x30, since this is stashed much later, and we'll blat it
before invoking C code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 13afefbf608f..4dd529fd03fd 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -62,6 +62,12 @@
 #endif
 	.endm
 
+	.macro	clear_gp_regs
+	.irp	n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
+	mov	x\n, xzr
+	.endr
+	.endm
+
 /*
  * Bad Abort numbers
  *-----------------
@@ -158,12 +164,11 @@ alternative_else_nop_endif
 	stp	x28, x29, [sp, #16 * 14]
 
 	.if	\el == 0
+	clear_gp_regs
 	mrs	x21, sp_el0
 	ldr_this_cpu	tsk, __entry_task, x20	// Ensure MDSCR_EL1.SS is clear,
 	ldr	x19, [tsk, #TSK_TI_FLAGS]	// since we can unmask debug
 	disable_step_tsk x19, x20		// exceptions when scheduling.
-
-	mov	x29, xzr			// fp pointed to user-space
 	.else
 	add	x21, sp, #S_FRAME_SIZE
 	get_thread_info tsk
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 12/18] kernel: add ksys_personality()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Using this helper allows us to avoid the in-kernel call to the
sys_personality() syscall. The ksys_ prefix denotes that this function
is meant as a drop-in replacement for the syscall. In particular, it
uses the same calling convention as sys_personality().

This is necessary to enable conversion of arm64's syscall handling to
use pt_regs wrappers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
---
 include/linux/syscalls.h | 1 +
 kernel/exec_domain.c     | 7 ++++++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 70fcda1a9049..6723ea51ec99 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1148,6 +1148,7 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
 			      unsigned long prot, unsigned long flags,
 			      unsigned long fd, unsigned long pgoff);
 ssize_t ksys_readahead(int fd, loff_t offset, size_t count);
+unsigned int ksys_personality(unsigned int personality);
 
 /*
  * The following kernel syscall equivalents are just wrappers to fs-internal
diff --git a/kernel/exec_domain.c b/kernel/exec_domain.c
index a5697119290e..4ba2b404cba2 100644
--- a/kernel/exec_domain.c
+++ b/kernel/exec_domain.c
@@ -47,7 +47,7 @@ static int __init proc_execdomains_init(void)
 module_init(proc_execdomains_init);
 #endif
 
-SYSCALL_DEFINE1(personality, unsigned int, personality)
+unsigned int ksys_personality(unsigned int personality)
 {
 	unsigned int old = current->personality;
 
@@ -56,3 +56,8 @@ SYSCALL_DEFINE1(personality, unsigned int, personality)
 
 	return old;
 }
+
+SYSCALL_DEFINE1(personality, unsigned int, personality)
+{
+	return ksys_personality(personality);
+}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

Using this helper allows us to avoid the in-kernel calls to the
compat_sys_{f,}statfs64() syscalls, which are necessary for parameter
mangling in arm64's compat handling.

Following the example of ksys_* functions, kcompat_sys_* functions are
intended to be a drop-in replacement for their compat_sys_*
counterparts, with the same calling convention.

This is necessary to enable conversion of arm64's syscall handling to
use pt_regs wrappers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Cc: linux-fsdevel@vger.kernel.org
---
 fs/statfs.c              | 14 ++++++++++++--
 include/linux/syscalls.h |  8 ++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/fs/statfs.c b/fs/statfs.c
index 5b2a24f0f263..f0216629621d 100644
--- a/fs/statfs.c
+++ b/fs/statfs.c
@@ -335,7 +335,7 @@ static int put_compat_statfs64(struct compat_statfs64 __user *ubuf, struct kstat
 	return 0;
 }
 
-COMPAT_SYSCALL_DEFINE3(statfs64, const char __user *, pathname, compat_size_t, sz, struct compat_statfs64 __user *, buf)
+int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz, struct compat_statfs64 __user * buf)
 {
 	struct kstatfs tmp;
 	int error;
@@ -349,7 +349,12 @@ COMPAT_SYSCALL_DEFINE3(statfs64, const char __user *, pathname, compat_size_t, s
 	return error;
 }
 
-COMPAT_SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, compat_size_t, sz, struct compat_statfs64 __user *, buf)
+COMPAT_SYSCALL_DEFINE3(statfs64, const char __user *, pathname, compat_size_t, sz, struct compat_statfs64 __user *, buf)
+{
+	return kcompat_sys_statfs64(pathname, sz, buf);
+}
+
+int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz, struct compat_statfs64 __user * buf)
 {
 	struct kstatfs tmp;
 	int error;
@@ -363,6 +368,11 @@ COMPAT_SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, compat_size_t, sz, struct co
 	return error;
 }
 
+COMPAT_SYSCALL_DEFINE3(fstatfs64, unsigned int, fd, compat_size_t, sz, struct compat_statfs64 __user *, buf)
+{
+	return kcompat_sys_fstatfs64(fd, sz, buf);
+}
+
 /*
  * This is a copy of sys_ustat, just dealing with a structure layout.
  * Given how simple this syscall is that apporach is more maintainable
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 6723ea51ec99..e0bf3e4bb897 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -59,6 +59,7 @@ struct tms;
 struct utimbuf;
 struct mq_attr;
 struct compat_stat;
+struct compat_statfs64;
 struct compat_timeval;
 struct robust_list_head;
 struct getcpu_cache;
@@ -1150,6 +1151,13 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
 ssize_t ksys_readahead(int fd, loff_t offset, size_t count);
 unsigned int ksys_personality(unsigned int personality);
 
+#ifdef CONFIG_COMPAT
+int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz,
+		     struct compat_statfs64 __user * buf);
+int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
+			  struct compat_statfs64 __user * buf);
+#endif
+
 /*
  * The following kernel syscall equivalents are just wrappers to fs-internal
  * functions. Therefore, provide stubs to be inlined at the callsites.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 14/18] arm64: remove in-kernel call to sys_personality()
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

With pt_regs syscall wrappers, the calling convention for
sys_personality() will change. Use ksys_personality(), which is
functionally equivalent.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/sys.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
index 31045f3fed92..a82c3f7a9a90 100644
--- a/arch/arm64/kernel/sys.c
+++ b/arch/arm64/kernel/sys.c
@@ -42,7 +42,7 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
 	if (personality(personality) == PER_LINUX32 &&
 		!system_supports_32bit_el0())
 		return -EINVAL;
-	return sys_personality(personality);
+	return ksys_personality(personality);
 }
 
 /*
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* [PATCH 15/18] arm64: use {COMPAT,}SYSCALL_DEFINE0 for sigreturn
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

We don't currently annotate our various sigreturn functions as syscalls,
as we need to do to use pt_regs syscall wrappers.

Let's mark them as real syscalls.

For compat_sys_sigreturn and compat_sys_rt_sigreturn, this changes the
return type from int to long, matching the prototypes in sys32.c.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/signal.c   | 2 +-
 arch/arm64/kernel/signal32.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index caa7a68cf2d2..04a26dcbea32 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -538,7 +538,7 @@ static int restore_sigframe(struct pt_regs *regs,
 	return err;
 }
 
-asmlinkage long sys_rt_sigreturn(void)
+SYSCALL_DEFINE0(rt_sigreturn)
 {
 	struct pt_regs *regs = current_pt_regs();
 	struct rt_sigframe __user *frame;
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index cb10588a7cb2..1948566dcccf 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -282,7 +282,7 @@ static int compat_restore_sigframe(struct pt_regs *regs,
 	return err;
 }
 
-asmlinkage int compat_sys_sigreturn(void)
+COMPAT_SYSCALL_DEFINE0(sigreturn)
 {
 	struct pt_regs *regs = current_pt_regs();
 	struct compat_sigframe __user *frame;
@@ -313,7 +313,7 @@ asmlinkage int compat_sys_sigreturn(void)
 	return 0;
 }
 
-asmlinkage int compat_sys_rt_sigreturn(void)
+COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
 {
 	struct pt_regs *regs = current_pt_regs();
 	struct compat_rt_sigframe __user *frame;
-- 
2.11.0


* [PATCH 16/18] arm64: use SYSCALL_DEFINE6() for mmap
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

We don't currently annotate our mmap implementation as a syscall, which
we must do in order to use pt_regs syscall wrappers.

Let's mark it as a real syscall.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/sys.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
index a82c3f7a9a90..2ad1497a184e 100644
--- a/arch/arm64/kernel/sys.c
+++ b/arch/arm64/kernel/sys.c
@@ -27,9 +27,9 @@
 #include <linux/syscalls.h>
 #include <asm/cpufeature.h>
 
-asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
-			 unsigned long prot, unsigned long flags,
-			 unsigned long fd, off_t off)
+SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
+		unsigned long, prot, unsigned long, flags,
+		unsigned long, fd, off_t, off)
 {
 	if (offset_in_page(off) != 0)
 		return -EINVAL;
-- 
2.11.0


* [PATCH 17/18] arm64: convert compat wrappers to C
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

In preparation for converting to pt_regs syscall wrappers, convert our
existing compat wrappers to C. This will allow the pt_regs wrappers to
be automatically generated, and will allow for the compat register
manipulation to be folded in with the pt_regs accesses.

To avoid confusion with the upcoming pt_regs wrappers and existing
compat wrappers provided by core code, the C wrappers are renamed to
compat_sys_aarch32_<syscall>.

With the assembly wrappers gone, we can get rid of entry32.S and the
associated boilerplate.

Note that these must call the ksys_* syscall entry points, as the usual
sys_* entry points will be modified to take a single pt_regs pointer
argument.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/unistd32.h |  22 ++++----
 arch/arm64/kernel/Makefile        |   2 +-
 arch/arm64/kernel/entry32.S       | 111 --------------------------------------
 arch/arm64/kernel/sys32.c         | 103 +++++++++++++++++++++++++++++++----
 4 files changed, 104 insertions(+), 134 deletions(-)
 delete mode 100644 arch/arm64/kernel/entry32.S

diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index ab95554b1734..0e3dd3265993 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -382,9 +382,9 @@ __SYSCALL(__NR_rt_sigqueueinfo, compat_sys_rt_sigqueueinfo)
 #define __NR_rt_sigsuspend 179
 __SYSCALL(__NR_rt_sigsuspend, compat_sys_rt_sigsuspend)
 #define __NR_pread64 180
-__SYSCALL(__NR_pread64, compat_sys_pread64_wrapper)
+__SYSCALL(__NR_pread64, compat_sys_aarch32_pread64)
 #define __NR_pwrite64 181
-__SYSCALL(__NR_pwrite64, compat_sys_pwrite64_wrapper)
+__SYSCALL(__NR_pwrite64, compat_sys_aarch32_pwrite64)
 #define __NR_chown 182
 __SYSCALL(__NR_chown, sys_chown16)
 #define __NR_getcwd 183
@@ -406,11 +406,11 @@ __SYSCALL(__NR_vfork, sys_vfork)
 #define __NR_ugetrlimit 191	/* SuS compliant getrlimit */
 __SYSCALL(__NR_ugetrlimit, compat_sys_getrlimit)		/* SuS compliant getrlimit */
 #define __NR_mmap2 192
-__SYSCALL(__NR_mmap2, compat_sys_mmap2_wrapper)
+__SYSCALL(__NR_mmap2, compat_sys_aarch32_mmap2)
 #define __NR_truncate64 193
-__SYSCALL(__NR_truncate64, compat_sys_truncate64_wrapper)
+__SYSCALL(__NR_truncate64, compat_sys_aarch32_truncate64)
 #define __NR_ftruncate64 194
-__SYSCALL(__NR_ftruncate64, compat_sys_ftruncate64_wrapper)
+__SYSCALL(__NR_ftruncate64, compat_sys_aarch32_ftruncate64)
 #define __NR_stat64 195
 __SYSCALL(__NR_stat64, sys_stat64)
 #define __NR_lstat64 196
@@ -472,7 +472,7 @@ __SYSCALL(223, sys_ni_syscall)
 #define __NR_gettid 224
 __SYSCALL(__NR_gettid, sys_gettid)
 #define __NR_readahead 225
-__SYSCALL(__NR_readahead, compat_sys_readahead_wrapper)
+__SYSCALL(__NR_readahead, compat_sys_aarch32_readahead)
 #define __NR_setxattr 226
 __SYSCALL(__NR_setxattr, sys_setxattr)
 #define __NR_lsetxattr 227
@@ -554,15 +554,15 @@ __SYSCALL(__NR_clock_getres, compat_sys_clock_getres)
 #define __NR_clock_nanosleep 265
 __SYSCALL(__NR_clock_nanosleep, compat_sys_clock_nanosleep)
 #define __NR_statfs64 266
-__SYSCALL(__NR_statfs64, compat_sys_statfs64_wrapper)
+__SYSCALL(__NR_statfs64, compat_sys_aarch32_statfs64)
 #define __NR_fstatfs64 267
-__SYSCALL(__NR_fstatfs64, compat_sys_fstatfs64_wrapper)
+__SYSCALL(__NR_fstatfs64, compat_sys_aarch32_fstatfs64)
 #define __NR_tgkill 268
 __SYSCALL(__NR_tgkill, sys_tgkill)
 #define __NR_utimes 269
 __SYSCALL(__NR_utimes, compat_sys_utimes)
 #define __NR_arm_fadvise64_64 270
-__SYSCALL(__NR_arm_fadvise64_64, compat_sys_fadvise64_64_wrapper)
+__SYSCALL(__NR_arm_fadvise64_64, compat_sys_aarch32_fadvise64_64)
 #define __NR_pciconfig_iobase 271
 __SYSCALL(__NR_pciconfig_iobase, sys_pciconfig_iobase)
 #define __NR_pciconfig_read 272
@@ -704,7 +704,7 @@ __SYSCALL(__NR_get_robust_list, compat_sys_get_robust_list)
 #define __NR_splice 340
 __SYSCALL(__NR_splice, sys_splice)
 #define __NR_sync_file_range2 341
-__SYSCALL(__NR_sync_file_range2, compat_sys_sync_file_range2_wrapper)
+__SYSCALL(__NR_sync_file_range2, compat_sys_aarch32_sync_file_range2)
 #define __NR_tee 342
 __SYSCALL(__NR_tee, sys_tee)
 #define __NR_vmsplice 343
@@ -726,7 +726,7 @@ __SYSCALL(__NR_timerfd_create, sys_timerfd_create)
 #define __NR_eventfd 351
 __SYSCALL(__NR_eventfd, sys_eventfd)
 #define __NR_fallocate 352
-__SYSCALL(__NR_fallocate, compat_sys_fallocate_wrapper)
+__SYSCALL(__NR_fallocate, compat_sys_aarch32_fallocate)
 #define __NR_timerfd_settime 353
 __SYSCALL(__NR_timerfd_settime, compat_sys_timerfd_settime)
 #define __NR_timerfd_gettime 354
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index c22e8ace5ea3..5886bc53fc9f 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -28,7 +28,7 @@ $(obj)/%.stub.o: $(obj)/%.o FORCE
 	$(call if_changed,objcopy)
 
 arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
-					   sys_compat.o entry32.o
+					   sys_compat.o
 arm64-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o entry-ftrace.o
 arm64-obj-$(CONFIG_MODULES)		+= arm64ksyms.o module.o
 arm64-obj-$(CONFIG_ARM64_MODULE_PLTS)	+= module-plts.o
diff --git a/arch/arm64/kernel/entry32.S b/arch/arm64/kernel/entry32.S
deleted file mode 100644
index f9461696dde4..000000000000
--- a/arch/arm64/kernel/entry32.S
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
- * Compat system call wrappers
- *
- * Copyright (C) 2012 ARM Ltd.
- * Authors: Will Deacon <will.deacon@arm.com>
- *	    Catalin Marinas <catalin.marinas@arm.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <linux/linkage.h>
-#include <linux/const.h>
-
-#include <asm/assembler.h>
-#include <asm/asm-offsets.h>
-#include <asm/errno.h>
-#include <asm/page.h>
-
-/*
- * System call wrappers for the AArch32 compatibility layer.
- */
-
-ENTRY(compat_sys_statfs64_wrapper)
-	mov	w3, #84
-	cmp	w1, #88
-	csel	w1, w3, w1, eq
-	b	compat_sys_statfs64
-ENDPROC(compat_sys_statfs64_wrapper)
-
-ENTRY(compat_sys_fstatfs64_wrapper)
-	mov	w3, #84
-	cmp	w1, #88
-	csel	w1, w3, w1, eq
-	b	compat_sys_fstatfs64
-ENDPROC(compat_sys_fstatfs64_wrapper)
-
-/*
- * Note: off_4k (w5) is always in units of 4K. If we can't do the
- * requested offset because it is not page-aligned, we return -EINVAL.
- */
-ENTRY(compat_sys_mmap2_wrapper)
-#if PAGE_SHIFT > 12
-	tst	w5, #~PAGE_MASK >> 12
-	b.ne	1f
-	lsr	w5, w5, #PAGE_SHIFT - 12
-#endif
-	b	sys_mmap_pgoff
-1:	mov	x0, #-EINVAL
-	ret
-ENDPROC(compat_sys_mmap2_wrapper)
-
-/*
- * Wrappers for AArch32 syscalls that either take 64-bit parameters
- * in registers or that take 32-bit parameters which require sign
- * extension.
- */
-ENTRY(compat_sys_pread64_wrapper)
-	regs_to_64	x3, x4, x5
-	b	sys_pread64
-ENDPROC(compat_sys_pread64_wrapper)
-
-ENTRY(compat_sys_pwrite64_wrapper)
-	regs_to_64	x3, x4, x5
-	b	sys_pwrite64
-ENDPROC(compat_sys_pwrite64_wrapper)
-
-ENTRY(compat_sys_truncate64_wrapper)
-	regs_to_64	x1, x2, x3
-	b	sys_truncate
-ENDPROC(compat_sys_truncate64_wrapper)
-
-ENTRY(compat_sys_ftruncate64_wrapper)
-	regs_to_64	x1, x2, x3
-	b	sys_ftruncate
-ENDPROC(compat_sys_ftruncate64_wrapper)
-
-ENTRY(compat_sys_readahead_wrapper)
-	regs_to_64	x1, x2, x3
-	mov	w2, w4
-	b	sys_readahead
-ENDPROC(compat_sys_readahead_wrapper)
-
-ENTRY(compat_sys_fadvise64_64_wrapper)
-	mov	w6, w1
-	regs_to_64	x1, x2, x3
-	regs_to_64	x2, x4, x5
-	mov	w3, w6
-	b	sys_fadvise64_64
-ENDPROC(compat_sys_fadvise64_64_wrapper)
-
-ENTRY(compat_sys_sync_file_range2_wrapper)
-	regs_to_64	x2, x2, x3
-	regs_to_64	x3, x4, x5
-	b	sys_sync_file_range2
-ENDPROC(compat_sys_sync_file_range2_wrapper)
-
-ENTRY(compat_sys_fallocate_wrapper)
-	regs_to_64	x2, x2, x3
-	regs_to_64	x3, x4, x5
-	b	sys_fallocate
-ENDPROC(compat_sys_fallocate_wrapper)
diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
index 1ef103c95410..1a84e79c47f3 100644
--- a/arch/arm64/kernel/sys32.c
+++ b/arch/arm64/kernel/sys32.c
@@ -27,17 +27,98 @@
 
 asmlinkage long compat_sys_sigreturn(void);
 asmlinkage long compat_sys_rt_sigreturn(void);
-asmlinkage long compat_sys_statfs64_wrapper(void);
-asmlinkage long compat_sys_fstatfs64_wrapper(void);
-asmlinkage long compat_sys_pread64_wrapper(void);
-asmlinkage long compat_sys_pwrite64_wrapper(void);
-asmlinkage long compat_sys_truncate64_wrapper(void);
-asmlinkage long compat_sys_ftruncate64_wrapper(void);
-asmlinkage long compat_sys_readahead_wrapper(void);
-asmlinkage long compat_sys_fadvise64_64_wrapper(void);
-asmlinkage long compat_sys_sync_file_range2_wrapper(void);
-asmlinkage long compat_sys_fallocate_wrapper(void);
-asmlinkage long compat_sys_mmap2_wrapper(void);
+
+COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname,
+		       compat_size_t, sz, struct compat_statfs64 __user *, buf)
+{
+	if (sz == 88)
+		sz = 84;
+
+	return kcompat_sys_statfs64(pathname, sz, buf);
+}
+
+COMPAT_SYSCALL_DEFINE3(aarch32_fstatfs64, unsigned int, fd, compat_size_t, sz,
+		       struct compat_statfs64 __user *, buf)
+{
+	if (sz == 88)
+		sz = 84;
+
+	return kcompat_sys_fstatfs64(fd, sz, buf);
+}
+
+/*
+ * Note: off_4k is always in units of 4K. If we can't do the
+ * requested offset because it is not page-aligned, we return -EINVAL.
+ */
+COMPAT_SYSCALL_DEFINE6(aarch32_mmap2, unsigned long, addr, unsigned long, len,
+		       unsigned long, prot, unsigned long, flags,
+		       unsigned long, fd, unsigned long, off_4k)
+{
+	if (off_4k & (~PAGE_MASK >> 12))
+		return -EINVAL;
+
+	off_4k >>= (PAGE_SHIFT - 12);
+
+	return ksys_mmap_pgoff(addr, len, prot, flags, fd, off_4k);
+}
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define arg_u32p(name)	u32, name##_hi, u32, name##_lo
+#else
+#define arg_u32p(name)	u32, name##_lo, u32, name##_hi
+#endif
+
+#define arg_u64(name)	(((u64)name##_hi << 32) | name##_lo)
+
+COMPAT_SYSCALL_DEFINE6(aarch32_pread64, unsigned int, fd, char __user *, buf,
+		       size_t, count, u32, __pad, arg_u32p(pos))
+{
+	return ksys_pread64(fd, buf, count, arg_u64(pos));
+}
+
+COMPAT_SYSCALL_DEFINE6(aarch32_pwrite64, unsigned int, fd,
+		       const char __user *, buf, size_t, count, u32, __pad,
+		       arg_u32p(pos))
+{
+	return ksys_pwrite64(fd, buf, count, arg_u64(pos));
+}
+
+COMPAT_SYSCALL_DEFINE4(aarch32_truncate64, const char __user *, pathname,
+		       u32, __pad, arg_u32p(length))
+{
+	return ksys_truncate(pathname, arg_u64(length));
+}
+
+COMPAT_SYSCALL_DEFINE4(aarch32_ftruncate64, unsigned int, fd, u32, __pad,
+		       arg_u32p(length))
+{
+	return ksys_ftruncate(fd, arg_u64(length));
+}
+
+COMPAT_SYSCALL_DEFINE5(aarch32_readahead, int, fd, u32, __pad,
+		       arg_u32p(offset), size_t, count)
+{
+	return ksys_readahead(fd, arg_u64(offset), count);
+}
+
+COMPAT_SYSCALL_DEFINE6(aarch32_fadvise64_64, int, fd, int, advice,
+		       arg_u32p(offset), arg_u32p(len))
+{
+	return ksys_fadvise64_64(fd, arg_u64(offset), arg_u64(len), advice);
+}
+
+COMPAT_SYSCALL_DEFINE6(aarch32_sync_file_range2, int, fd, unsigned int, flags,
+		       arg_u32p(offset), arg_u32p(nbytes))
+{
+	return ksys_sync_file_range(fd, arg_u64(offset), arg_u64(nbytes),
+				    flags);
+}
+
+COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode,
+		       arg_u32p(offset), arg_u32p(len))
+{
+	return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len));
+}
 
 #undef __SYSCALL
 #define __SYSCALL(nr, sym)	[nr] = sym,
-- 
2.11.0


* [PATCH 18/18] arm64: implement syscall wrappers
  2018-05-14  9:46 ` Mark Rutland
@ 2018-05-14  9:46   ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14  9:46 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, mark.rutland, viro, will.deacon

To minimize the risk of userspace-controlled values being used under
speculation, this patch adds pt_regs based syscall wrappers for arm64,
which pass the minimum set of required userspace values to syscall
implementations. For each syscall, a wrapper which takes a pt_regs
argument is automatically generated, and this extracts the arguments
before calling the "real" syscall implementation.

Each syscall has three functions generated:

* __do_<compat_>sys_<name> is the "real" syscall implementation, with
  the expected prototype.

* __se_<compat_>sys_<name> is the sign-extension/narrowing wrapper,
  inherited from common code. This takes a series of long parameters,
  casting each to the requisite types required by the "real" syscall
  implementation in __do_<compat_>sys_<name>.

  This wrapper *may* not be necessary on arm64 given the AAPCS rules on
  unused register bits, but it seemed safer to keep the wrapper for now.

* __arm64_<compat_>sys_<name> takes a struct pt_regs pointer, and
  extracts *only* the relevant register values, passing these on to the
  __se_<compat_>sys_<name> wrapper.

The syscall invocation code is updated to handle the calling convention
required by __arm64_<compat_>sys_<name>, and passes a single struct
pt_regs pointer.

The compiler can fold the syscall implementation and its wrappers, such
that the overhead of this approach is minimized.

Note that we play games with sys_ni_syscall(). It can't be defined with
SYSCALL_DEFINE0() because we must avoid the possibility of error
injection. Additionally, there are a couple of locations where we need
to call it from C code, and we don't (currently) have a
ksys_ni_syscall().  While it has no wrapper, passing in a redundant
pt_regs pointer is benign per the AAPCS.

When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for
sys_ni_syscall(). Since we need to treat it differently for in-kernel
calls and the syscall tables, the prototype is defined as-required.

The wrappers are largely the same as their x86 counterparts, but
simplified as we don't have a variety of compat calling conventions that
require separate stubs. Unlike x86, we have some zero-argument compat
syscalls, and must define COMPAT_SYSCALL_DEFINE0().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                       |  1 +
 arch/arm64/include/asm/syscall_wrapper.h | 80 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/sys.c                  | 10 +++-
 arch/arm64/kernel/sys32.c                |  9 +++-
 arch/arm64/kernel/syscall.c              |  8 +---
 arch/arm64/kernel/traps.c                |  2 +
 6 files changed, 101 insertions(+), 9 deletions(-)
 create mode 100644 arch/arm64/include/asm/syscall_wrapper.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..8e559b1fc555 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -21,6 +21,7 @@ config ARM64
 	select ARCH_HAS_SG_CHAIN
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
+	select ARCH_HAS_SYSCALL_WRAPPER
 	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_INLINE_READ_LOCK if !PREEMPT
diff --git a/arch/arm64/include/asm/syscall_wrapper.h b/arch/arm64/include/asm/syscall_wrapper.h
new file mode 100644
index 000000000000..a4477e515b79
--- /dev/null
+++ b/arch/arm64/include/asm/syscall_wrapper.h
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * syscall_wrapper.h - arm64 specific wrappers to syscall definitions
+ *
+ * Based on arch/x86/include/asm/syscall_wrapper.h
+ */
+
+#ifndef __ASM_SYSCALL_WRAPPER_H
+#define __ASM_SYSCALL_WRAPPER_H
+
+#define SC_ARM64_REGS_TO_ARGS(x, ...)				\
+	__MAP(x,__SC_ARGS					\
+	      ,,regs->regs[0],,regs->regs[1],,regs->regs[2]	\
+	      ,,regs->regs[3],,regs->regs[4],,regs->regs[5])
+
+#ifdef CONFIG_COMPAT
+
+#define COMPAT_SYSCALL_DEFINEx(x, name, ...)						\
+	asmlinkage long __arm64_compat_sys##name(const struct pt_regs *regs);		\
+	ALLOW_ERROR_INJECTION(__arm64_compat_sys##name, ERRNO);				\
+	static long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__));		\
+	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__));	\
+	asmlinkage long __arm64_compat_sys##name(const struct pt_regs *regs)		\
+	{										\
+		return __se_compat_sys##name(SC_ARM64_REGS_TO_ARGS(x,__VA_ARGS__));	\
+	}										\
+	static long __se_compat_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__))		\
+	{										\
+		return __do_compat_sys##name(__MAP(x,__SC_DELOUSE,__VA_ARGS__));	\
+	}										\
+	static inline long __do_compat_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+
+#define COMPAT_SYSCALL_DEFINE0(sname)					\
+	asmlinkage long __arm64_compat_sys_##sname(void);		\
+	ALLOW_ERROR_INJECTION(__arm64_compat_sys_##sname, ERRNO);	\
+	asmlinkage long __arm64_compat_sys_##sname(void)
+
+#define COND_SYSCALL_COMPAT(name) \
+	cond_syscall(__arm64_compat_sys_##name);
+
+#define COMPAT_SYS_NI(name) \
+	SYSCALL_ALIAS(__arm64_compat_sys_##name, sys_ni_posix_timers);
+
+#endif /* CONFIG_COMPAT */
+
+#define __SYSCALL_DEFINEx(x, name, ...)						\
+	asmlinkage long __arm64_sys##name(const struct pt_regs *regs);		\
+	ALLOW_ERROR_INJECTION(__arm64_sys##name, ERRNO);			\
+	static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__));		\
+	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__));	\
+	asmlinkage long __arm64_sys##name(const struct pt_regs *regs)		\
+	{									\
+		return __se_sys##name(SC_ARM64_REGS_TO_ARGS(x,__VA_ARGS__));	\
+	}									\
+	static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__))		\
+	{									\
+		long ret = __do_sys##name(__MAP(x,__SC_CAST,__VA_ARGS__));	\
+		__MAP(x,__SC_TEST,__VA_ARGS__);					\
+		__PROTECT(x, ret,__MAP(x,__SC_ARGS,__VA_ARGS__));		\
+		return ret;							\
+	}									\
+	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__))
+
+#ifndef SYSCALL_DEFINE0
+#define SYSCALL_DEFINE0(sname)					\
+	SYSCALL_METADATA(_##sname, 0);				\
+	asmlinkage long __arm64_sys_##sname(void);		\
+	ALLOW_ERROR_INJECTION(__arm64_sys_##sname, ERRNO);	\
+	asmlinkage long __arm64_sys_##sname(void)
+#endif
+
+#ifndef COND_SYSCALL
+#define COND_SYSCALL(name) cond_syscall(__arm64_sys_##name)
+#endif
+
+#ifndef SYS_NI
+#define SYS_NI(name) SYSCALL_ALIAS(__arm64_sys_##name, sys_ni_posix_timers);
+#endif
+
+#endif /* __ASM_SYSCALL_WRAPPER_H */
diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
index 2ad1497a184e..ee93bf789f0a 100644
--- a/arch/arm64/kernel/sys.c
+++ b/arch/arm64/kernel/sys.c
@@ -48,11 +48,17 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
 /*
  * Wrappers to pass the pt_regs argument.
  */
-asmlinkage long sys_rt_sigreturn(void);
 #define sys_personality		sys_arm64_personality
 
+asmlinkage long sys_ni_syscall(const struct pt_regs *);
+#define __arm64_sys_ni_syscall	sys_ni_syscall
+
+#undef __SYSCALL
+#define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+#include <asm/unistd.h>
+
 #undef __SYSCALL
-#define __SYSCALL(nr, sym)	[nr] = sym,
+#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
 
 /*
  * The sys_call_table array must be 4K aligned to be accessible from
diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
index 1a84e79c47f3..8359b54a9c97 100644
--- a/arch/arm64/kernel/sys32.c
+++ b/arch/arm64/kernel/sys32.c
@@ -120,8 +120,15 @@ COMPAT_SYSCALL_DEFINE6(aarch32_fallocate, int, fd, int, mode,
 	return ksys_fallocate(fd, mode, arg_u64(offset), arg_u64(len));
 }
 
+asmlinkage long sys_ni_syscall(const struct pt_regs *);
+#define __arm64_sys_ni_syscall	sys_ni_syscall
+
+#undef __SYSCALL
+#define __SYSCALL(nr, sym)	asmlinkage long __arm64_##sym(const struct pt_regs *);
+#include <asm/unistd32.h>
+
 #undef __SYSCALL
-#define __SYSCALL(nr, sym)	[nr] = sym,
+#define __SYSCALL(nr, sym)	[nr] = __arm64_##sym,
 
 /*
  * The sys_call_table array must be 4K aligned to be accessible from
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 4706f841e758..15b5d18f6104 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -12,15 +12,11 @@
 
 long do_ni_syscall(struct pt_regs *regs);
 
-typedef long (*syscall_fn_t)(unsigned long, unsigned long,
-			     unsigned long, unsigned long,
-			     unsigned long, unsigned long);
+typedef long (*syscall_fn_t)(struct pt_regs *regs);
 
 static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
 {
-	regs->regs[0] = syscall_fn(regs->regs[0], regs->regs[1],
-				   regs->regs[2], regs->regs[3],
-				   regs->regs[4], regs->regs[5]);
+	regs->regs[0] = syscall_fn(regs);
 }
 
 static void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index b8d3e0d8c616..8bfcc37cbc65 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -549,6 +549,8 @@ asmlinkage void __exception do_sysinstr(unsigned int esr, struct pt_regs *regs)
 
 long compat_arm_syscall(struct pt_regs *regs);
 
+long sys_ni_syscall(void);
+
 asmlinkage long do_ni_syscall(struct pt_regs *regs)
 {
 #ifdef CONFIG_COMPAT
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 126+ messages in thread

* Re: [PATCH 01/18] arm64: consistently use unsigned long for thread flags
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14  9:57     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14  9:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:23AM +0100, Mark Rutland wrote:
> In do_notify_resume, we manipulate thread_flags as a 32-bit unsigned
> int, whereas thread_info::flags is a 64-bit unsigned long, and elsewhere
> (e.g. in the entry assembly) we manipulate the flags as a 64-bit
> quantity.
> 
> For consistency, and to avoid problems if we end up with more than 32
> flags, let's make do_notify_resume take the flags as a 64-bit unsigned
> long.

Looks sensible.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/signal.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 154b7d30145d..8e624fec4707 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -896,7 +896,7 @@ static void do_signal(struct pt_regs *regs)
>  }
>  
>  asmlinkage void do_notify_resume(struct pt_regs *regs,
> -				 unsigned int thread_flags)
> +				 unsigned long thread_flags)
>  {
>  	/*
>  	 * The assembly code enters us with IRQs off, but it hasn't
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 10:00     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 10:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
> Currently we assert that the SCTLR_EL{1,2}_{SET,CLEAR} bits are
> self-consistent with an assertion in config_sctlr_el1(). This is a bit
> unusual, since config_sctlr_el1() doesn't make use of these definitions,
> and is far away from the definitions themselves.
> 
> We can use the CPP #error directive to have equivalent assertions in
> <asm/sysreg.h>, next to the definitions of the set/clear bits, which is
> a bit clearer and simpler.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/sysreg.h | 14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 6171178075dc..bd1d1194a5e7 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -452,9 +452,9 @@
>  			 SCTLR_ELx_SA     | SCTLR_ELx_I    | SCTLR_ELx_WXN | \
>  			 ENDIAN_CLEAR_EL2 | SCTLR_EL2_RES0)
>  
> -/* Check all the bits are accounted for */
> -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
> -
> +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
> +#error "Inconsistent SCTLR_EL2 set/clear bits"
> +#endif

Can we have a comment on the != 0xffffffff versus != ~0 here?

The subtle differences in evaluation semantics between #if and
other contexts here may well trip people up during maintenance...
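Concretely, the difference is that #if arithmetic evaluates every operand
as (u)intmax_t, so ~0 there is 64-bit -1 rather than a 32-bit all-ones
value. A small standalone sketch (hypothetical mask names) makes the
divergence between the two contexts visible:

```c
#include <stdint.h>

/* Hypothetical 32-bit set/clear masks that together cover all 32 bits. */
#define SCTLR_SET	0xffff0000u
#define SCTLR_CLEAR	0x0000ffffu

/* Comparing against the explicit 0xffffffff literal works in #if: both
 * sides promote to the same (u)intmax_t value. */
#if (SCTLR_SET ^ SCTLR_CLEAR) != 0xffffffff
#error "Inconsistent set/clear bits"	/* not triggered */
#endif

/* But ~0 in #if arithmetic is (u)intmax_t -1, i.e. 0xffffffffffffffff,
 * which a 32-bit XOR can never equal - this branch IS taken. */
#if (SCTLR_SET ^ SCTLR_CLEAR) != ~0
#define PREPROCESSOR_MISMATCH 1
#else
#define PREPROCESSOR_MISMATCH 0
#endif

/* In an ordinary C expression on 32-bit unsigned types, by contrast,
 * (SCTLR_SET ^ SCTLR_CLEAR) == ~0u holds, which is why the old
 * BUILD_BUG_ON() form could get away with plain ~0. */
```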


With that, Reviewed-by: Dave Martin <Dave.Martin@arm.com>

Cheers
---Dave

>  
>  /* SCTLR_EL1 specific flags. */
>  #define SCTLR_EL1_UCI		(1 << 26)
> @@ -492,8 +492,9 @@
>  			 SCTLR_EL1_UMA | SCTLR_ELx_WXN     | ENDIAN_CLEAR_EL1 |\
>  			 SCTLR_EL1_RES0)
>  
> -/* Check all the bits are accounted for */
> -#define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
> +#if (SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != 0xffffffff
> +#error "Inconsistent SCTLR_EL1 set/clear bits"
> +#endif
>  
>  /* id_aa64isar0 */
>  #define ID_AA64ISAR0_TS_SHIFT		52
> @@ -732,9 +733,6 @@ static inline void config_sctlr_el1(u32 clear, u32 set)
>  {
>  	u32 val;
>  
> -	SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS;
> -	SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS;
> -
>  	val = read_sysreg(sctlr_el1);
>  	val &= ~clear;
>  	val |= set;
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 03/18] arm64: introduce sysreg_clear_set()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 10:04     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 10:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:25AM +0100, Mark Rutland wrote:
> Currently we have a couple of helpers to manipulate bits in particular
> sysregs:
> 
>  * config_sctlr_el1(u32 clear, u32 set)
> 
>  * change_cpacr(u64 val, u64 mask)
> 
> The parameters of these differ in naming convention, order, and size,
> which is unfortunate. They also differ slightly in behaviour, as
> change_cpacr() skips the sysreg write if the bits are unchanged, which
> is a useful optimization when sysreg writes are expensive.
> 
> Before we gain yet another sysreg manipulation function, let's
> unify these with a common helper, providing a consistent order for
> clear/set operands, and the write skipping behaviour from
> change_cpacr(). Code will be migrated to the new helper in subsequent
> patches.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Dave Martin <dave.martin@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>

Makes sense.  I felt I was (re)inventing an idiom with
change_cpacr() at the time, though it didn't quite seem worth
factoring out.  Now you plan to add a third similar manipulator,
this factoring makes sense.

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
>  arch/arm64/include/asm/sysreg.h | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index bd1d1194a5e7..b52762769329 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -729,6 +729,17 @@ asm(
>  	asm volatile("msr_s " __stringify(r) ", %x0" : : "rZ" (__val));	\
>  } while (0)
>  
> +/*
> + * Modify bits in a sysreg. Bits in the clear mask are zeroed, then bits in the
> + * set mask are set. Other bits are left as-is.
> + */
> +#define sysreg_clear_set(sysreg, clear, set) do {			\
> +	u64 __scs_val = read_sysreg(sysreg);				\
> +	u64 __scs_new = (__scs_val & ~(u64)(clear)) | (set);		\
> +	if (__scs_new != __scs_val)					\
> +		write_sysreg(__scs_new, sysreg);			\
> +} while (0)
> +
>  static inline void config_sctlr_el1(u32 clear, u32 set)
>  {
>  	u32 val;
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread
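
For reference, the read-modify-write-with-skip semantics of the
sysreg_clear_set() helper quoted above can be modelled in plain C,
mocking the sysreg accessors (on real arm64 hardware these are
privileged MRS/MSR instructions, so this is only a behavioural sketch):

```c
#include <stdint.h>

/* Mock backing store standing in for a system register. */
static uint64_t mock_sysreg;
static int write_count;

static uint64_t read_sysreg(void) { return mock_sysreg; }
static void write_sysreg(uint64_t v) { mock_sysreg = v; write_count++; }

/* Same semantics as the kernel macro: bits in the clear mask are
 * zeroed, then bits in the set mask are set, and the (potentially
 * expensive) write is skipped when nothing would change. */
static void sysreg_clear_set(uint64_t clear, uint64_t set)
{
	uint64_t val = read_sysreg();
	uint64_t new = (val & ~clear) | set;

	if (new != val)
		write_sysreg(new);
}
```

A repeated call with the same masks performs no second write, which is
the write-skipping behaviour inherited from change_cpacr().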

* Re: [PATCH 04/18] arm64: kill config_sctlr_el1()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 10:05     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 10:05 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:26AM +0100, Mark Rutland wrote:
> Now that we have sysreg_clear_set(), we can consistently use this
> instead of config_sctlr_el1().
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
>  arch/arm64/include/asm/sysreg.h      | 10 ----------
>  arch/arm64/kernel/armv8_deprecated.c |  8 ++++----
>  arch/arm64/kernel/cpu_errata.c       |  3 +--
>  arch/arm64/kernel/traps.c            |  2 +-
>  arch/arm64/mm/fault.c                |  2 +-
>  5 files changed, 7 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index b52762769329..3493c7048994 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -740,16 +740,6 @@ asm(
>  		write_sysreg(__scs_new, sysreg);			\
>  } while (0)
>  
> -static inline void config_sctlr_el1(u32 clear, u32 set)
> -{
> -	u32 val;
> -
> -	val = read_sysreg(sctlr_el1);
> -	val &= ~clear;
> -	val |= set;
> -	write_sysreg(val, sctlr_el1);
> -}
> -
>  #endif
>  
>  #endif	/* __ASM_SYSREG_H */
> diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
> index 6e47fc3ab549..778cf810f0d8 100644
> --- a/arch/arm64/kernel/armv8_deprecated.c
> +++ b/arch/arm64/kernel/armv8_deprecated.c
> @@ -512,9 +512,9 @@ static int cp15barrier_handler(struct pt_regs *regs, u32 instr)
>  static int cp15_barrier_set_hw_mode(bool enable)
>  {
>  	if (enable)
> -		config_sctlr_el1(0, SCTLR_EL1_CP15BEN);
> +		sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_CP15BEN);
>  	else
> -		config_sctlr_el1(SCTLR_EL1_CP15BEN, 0);
> +		sysreg_clear_set(sctlr_el1, SCTLR_EL1_CP15BEN, 0);
>  	return 0;
>  }
>  
> @@ -549,9 +549,9 @@ static int setend_set_hw_mode(bool enable)
>  		return -EINVAL;
>  
>  	if (enable)
> -		config_sctlr_el1(SCTLR_EL1_SED, 0);
> +		sysreg_clear_set(sctlr_el1, SCTLR_EL1_SED, 0);
>  	else
> -		config_sctlr_el1(0, SCTLR_EL1_SED);
> +		sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_SED);
>  	return 0;
>  }
>  
> diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
> index a900befadfe8..879daf8ea0f8 100644
> --- a/arch/arm64/kernel/cpu_errata.c
> +++ b/arch/arm64/kernel/cpu_errata.c
> @@ -74,8 +74,7 @@ has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
>  static void
>  cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)
>  {
> -	/* Clear SCTLR_EL1.UCT */
> -	config_sctlr_el1(SCTLR_EL1_UCT, 0);
> +	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
>  }
>  
>  atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 8bbdc17e49df..b8d3e0d8c616 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -411,7 +411,7 @@ asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
>  
>  void cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
>  {
> -	config_sctlr_el1(SCTLR_EL1_UCI, 0);
> +	sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCI, 0);
>  }
>  
>  #define __user_cache_maint(insn, address, res)			\
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 4165485e8b6e..e7977d932038 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -812,7 +812,7 @@ void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
>  	 */
>  	WARN_ON_ONCE(in_interrupt());
>  
> -	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
> +	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
>  	asm(SET_PSTATE_PAN(1));
>  }
>  #endif /* CONFIG_ARM64_PAN */
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 05/18] arm64: kill change_cpacr()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 10:06     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 10:06 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:27AM +0100, Mark Rutland wrote:
> Now that we have sysreg_clear_set(), we can use this instead of
> change_cpacr().
> 
> Note that the order of the set and clear arguments differs between
> change_cpacr() and sysreg_clear_set(), so these are flipped as part of
> the conversion. Also, sve_user_enable() redundantly clears
> CPACR_EL1_ZEN_EL0EN before setting it; this is removed for clarity.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Dave Martin <dave.martin@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Looks right to me.
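The redundant-clear removal is safe because clearing a bit and then setting it is equivalent to just setting it; a quick host-side check of that identity (the bit position here is an illustrative stand-in for CPACR_EL1_ZEN_EL0EN):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

#define ZEN_EL0EN_BIT (1ull << 16)	/* illustrative stand-in */

/* Old style: change_cpacr(bit, bit) cleared the bit, then set it. */
static u64 enable_old(u64 cpacr)
{
	return (cpacr & ~ZEN_EL0EN_BIT) | ZEN_EL0EN_BIT;
}

/* New style: sysreg_clear_set(cpacr_el1, 0, bit) just ORs it in. */
static u64 enable_new(u64 cpacr)
{
	return (cpacr & ~(u64)0) | ZEN_EL0EN_BIT;
}
```

(v & ~b) | b and v | b agree for every v, so the conversion cannot change the written value.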

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
>  arch/arm64/kernel/fpsimd.c | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 87a35364e750..088940387a4d 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -172,23 +172,14 @@ static void *sve_pffr(struct task_struct *task)
>  		sve_ffr_offset(task->thread.sve_vl);
>  }
>  
> -static void change_cpacr(u64 val, u64 mask)
> -{
> -	u64 cpacr = read_sysreg(CPACR_EL1);
> -	u64 new = (cpacr & ~mask) | val;
> -
> -	if (new != cpacr)
> -		write_sysreg(new, CPACR_EL1);
> -}
> -
>  static void sve_user_disable(void)
>  {
> -	change_cpacr(0, CPACR_EL1_ZEN_EL0EN);
> +	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
>  }
>  
>  static void sve_user_enable(void)
>  {
> -	change_cpacr(CPACR_EL1_ZEN_EL0EN, CPACR_EL1_ZEN_EL0EN);
> +	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
>  }
>  
>  /*
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 10:00     ` Dave Martin
@ 2018-05-14 10:08       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 10:08 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 11:00:53AM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
> > -/* Check all the bits are accounted for */
> > -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
> > -
> > +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
> > +#error "Inconsistent SCTLR_EL2 set/clear bits"
> > +#endif
> 
> Can we have a comment on the != 0xffffffff versus != ~0 here?
> 
> The subtle differences in evaluation semantics between #if and
> other contexts here may well trip people up during maintenance...

Do you have any suggestion as to the wording?

I'm happy to add a comment, but I don't really know what to say.
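For what it's worth, the difference is easy to demonstrate on the host: in #if arithmetic every constant is evaluated at (u)intmax_t width, so ~0 is 64 bits of ones on the usual targets, while an ordinary expression comparison with 32-bit operands succeeds (the mask constant below is an illustrative stand-in for SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Preprocessor view: ~0 evaluates as intmax_t -1, which converts to
 * uintmax_t all-ones for the comparison, so it is not equal to a
 * 32-bit all-ones mask.
 */
#if 0xffffffff == ~0
#define PP_SEES_EQUAL 1
#else
#define PP_SEES_EQUAL 0
#endif

static const int pp_sees_equal = PP_SEES_EQUAL;

/* Expression view: with 32-bit operands the usual arithmetic
 * conversions make the same comparison succeed. */
static int runtime_sees_equal(void)
{
	uint32_t mask = 0xffffffffu;

	return mask == (uint32_t)~0;
}
```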

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-14  9:46   ` [PATCH 06/18] arm64: move sve_user_{enable, disable} " Mark Rutland
@ 2018-05-14 11:06     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:06 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:28AM +0100, Mark Rutland wrote:
> In subsequent patches, we'll want to make use of sve_user_enable() and
> sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
> <asm/fpsimd.h> where we can make use of them.
> 
> To avoid ifdeffery in sequences like:
> 
> if (system_supports_sve() && some_condition)
> 	sve_user_disable();
> 
> ... empty stubs are provided when support for SVE is not enabled.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Dave Martin <dave.martin@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/fpsimd.h | 17 ++++++++++++++++-
>  arch/arm64/kernel/fpsimd.c      | 11 -----------
>  2 files changed, 16 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index aa7162ae93e3..7377d7593c06 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -16,11 +16,13 @@
>  #ifndef __ASM_FP_H
>  #define __ASM_FP_H
>  
> -#include <asm/ptrace.h>
>  #include <asm/errno.h>
> +#include <asm/ptrace.h>
> +#include <asm/sysreg.h>
>  
>  #ifndef __ASSEMBLY__
>  
> +#include <linux/build_bug.h>
>  #include <linux/cache.h>
>  #include <linux/init.h>
>  #include <linux/stddef.h>
> @@ -81,6 +83,16 @@ extern int sve_set_vector_length(struct task_struct *task,
>  extern int sve_set_current_vl(unsigned long arg);
>  extern int sve_get_current_vl(void);
>  
> +static inline void sve_user_disable(void)
> +{
> +	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
> +}
> +
> +static inline void sve_user_enable(void)
> +{
> +	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
> +}
> +
>  /*
>   * Probing and setup functions.
>   * Calls to these functions must be serialised with one another.
> @@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
>  	return -EINVAL;
>  }
>  
> +static inline void sve_user_disable(void) { }
> +static inline void sve_user_enable(void) { }
> +

Alternatively, just move the full definitions outside the #ifdef
CONFIG_ARM64_SVE.

All calls to these should be shadowed by an if
(system_supports_sve()) in any case, and setting/clearing ZEN_EL0EN
in the CPACR_EL1 ought to be harmless now that the meaning of these
bits is architecturally committed.
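The stub pattern under discussion can be sketched generically (all names here are illustrative; HAVE_FEATURE stands in for CONFIG_ARM64_SVE):

```c
#include <assert.h>

#define HAVE_FEATURE 0	/* flip to 1 to model a CONFIG_ARM64_SVE=y build */

#if HAVE_FEATURE
static inline void feature_user_enable(void) { /* poke hardware state */ }
#else
/* Empty stub: call sites stay free of #ifdef. */
static inline void feature_user_enable(void) { }
#endif

static inline int feature_supported(void)
{
	return HAVE_FEATURE;
}

/* Caller pattern from the commit message: guard with the runtime
 * check, no ifdeffery needed at the call site. */
static int maybe_enable(void)
{
	if (feature_supported()) {
		feature_user_enable();
		return 1;
	}
	return 0;
}
```

With the stubs in place (or with the full definitions moved outside the #ifdef, as suggested above), maybe_enable() compiles identically in both configurations.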

Ideally we would have a BUG_ON(!system_supports_sve()) in those
functions, but we won't want to pay the cost in a production kernel.

>  static inline void sve_init_vq_map(void) { }
>  static inline void sve_update_vq_map(void) { }
>  static inline int sve_verify_vq_map(void) { return 0; }
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 088940387a4d..79a81c7d85c6 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
>  	__sve_free(task);
>  }
>  
> -

Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
trivial).

[...]

Cheers
---Dave


[1]

[PATCH v7 10/16] arm64/sve: Switch sve_pffr() argument from task to thread
http://lists.infradead.org/pipermail/linux-arm-kernel/2018-May/576601.html

[PATCH v7 11/16] arm64/sve: Move sve_pffr() to fpsimd.h and make inline
http://lists.infradead.org/pipermail/linux-arm-kernel/2018-May/576606.html

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 07/18] arm64: remove sigreturn wrappers
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 11:07     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:29AM +0100, Mark Rutland wrote:
> The arm64 sigreturn* syscall handlers are non-standard. Rather than
> taking a number of user parameters in registers as per the AAPCS,
> they expect the pt_regs as their sole argument.
> 
> To make this work, we override the syscall definitions to invoke
> wrappers written in assembly, which mov the SP into x0, and branch to
> their respective C functions.
> 
> On other architectures (such as x86), the sigreturn* functions take no
> argument and instead use current_pt_regs() to acquire the user
> registers. This requires less boilerplate code, and allows for other
> features such as interposing C code in this path.

Hmmm, I always wondered why rt_sigreturn() et al. were handled
specially.

If there's effectively no underlying reason for that, it makes sense
to regularise the interface to these syscalls.
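The two conventions can be contrasted with a minimal host-side model (the struct layout and names are illustrative, not the kernel's):

```c
#include <assert.h>

struct pt_regs {
	unsigned long regs[31];	/* x0..x30, illustrative */
};

/* Stand-in for the per-task saved register area at the top of the
 * kernel stack. */
static struct pt_regs task_regs;

static struct pt_regs *current_pt_regs(void)
{
	return &task_regs;
}

/* Old convention: an assembly wrapper moved SP into x0 so the handler
 * received the pt_regs pointer explicitly. */
static long rt_sigreturn_old(struct pt_regs *regs)
{
	return (long)regs->regs[0];	/* restore state, return x0 */
}

/* New convention: no arguments; the handler fetches the regs itself,
 * so it fits the ordinary SYSCALL_DEFINE0 machinery. */
static long rt_sigreturn_new(void)
{
	struct pt_regs *regs = current_pt_regs();

	return (long)regs->regs[0];
}
```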

Tentatively-reviewed-by: Dave Martin <Dave.Martin@arm.com>

> 
> This patch takes the same approach for arm64.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/unistd32.h |  4 ++--
>  arch/arm64/kernel/entry.S         |  8 --------
>  arch/arm64/kernel/entry32.S       | 10 ----------
>  arch/arm64/kernel/signal.c        |  3 ++-
>  arch/arm64/kernel/signal32.c      |  6 ++++--
>  arch/arm64/kernel/sys.c           |  3 +--
>  arch/arm64/kernel/sys32.c         |  4 ++--
>  7 files changed, 11 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
> index ef292160748c..ab95554b1734 100644
> --- a/arch/arm64/include/asm/unistd32.h
> +++ b/arch/arm64/include/asm/unistd32.h
> @@ -260,7 +260,7 @@ __SYSCALL(117, sys_ni_syscall)
>  #define __NR_fsync 118
>  __SYSCALL(__NR_fsync, sys_fsync)
>  #define __NR_sigreturn 119
> -__SYSCALL(__NR_sigreturn, compat_sys_sigreturn_wrapper)
> +__SYSCALL(__NR_sigreturn, compat_sys_sigreturn)
>  #define __NR_clone 120
>  __SYSCALL(__NR_clone, sys_clone)
>  #define __NR_setdomainname 121
> @@ -368,7 +368,7 @@ __SYSCALL(__NR_getresgid, sys_getresgid16)
>  #define __NR_prctl 172
>  __SYSCALL(__NR_prctl, sys_prctl)
>  #define __NR_rt_sigreturn 173
> -__SYSCALL(__NR_rt_sigreturn, compat_sys_rt_sigreturn_wrapper)
> +__SYSCALL(__NR_rt_sigreturn, compat_sys_rt_sigreturn)
>  #define __NR_rt_sigaction 174
>  __SYSCALL(__NR_rt_sigaction, compat_sys_rt_sigaction)
>  #define __NR_rt_sigprocmask 175
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index ec2ee720e33e..08ea3cbfb08f 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -1108,14 +1108,6 @@ __entry_tramp_data_start:
>  #endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
>  
>  /*
> - * Special system call wrappers.
> - */
> -ENTRY(sys_rt_sigreturn_wrapper)
> -	mov	x0, sp
> -	b	sys_rt_sigreturn
> -ENDPROC(sys_rt_sigreturn_wrapper)
> -
> -/*
>   * Register switch for AArch64. The callee-saved registers need to be saved
>   * and restored. On entry:
>   *   x0 = previous task_struct (must be preserved across the switch)
> diff --git a/arch/arm64/kernel/entry32.S b/arch/arm64/kernel/entry32.S
> index f332d5d1f6b4..f9461696dde4 100644
> --- a/arch/arm64/kernel/entry32.S
> +++ b/arch/arm64/kernel/entry32.S
> @@ -30,16 +30,6 @@
>   * System call wrappers for the AArch32 compatibility layer.
>   */
>  
> -ENTRY(compat_sys_sigreturn_wrapper)
> -	mov	x0, sp
> -	b	compat_sys_sigreturn
> -ENDPROC(compat_sys_sigreturn_wrapper)
> -
> -ENTRY(compat_sys_rt_sigreturn_wrapper)
> -	mov	x0, sp
> -	b	compat_sys_rt_sigreturn
> -ENDPROC(compat_sys_rt_sigreturn_wrapper)
> -
>  ENTRY(compat_sys_statfs64_wrapper)
>  	mov	w3, #84
>  	cmp	w1, #88
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 8e624fec4707..caa7a68cf2d2 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -538,8 +538,9 @@ static int restore_sigframe(struct pt_regs *regs,
>  	return err;
>  }
>  
> -asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
> +asmlinkage long sys_rt_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct rt_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index 77b91f478995..cb10588a7cb2 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -282,8 +282,9 @@ static int compat_restore_sigframe(struct pt_regs *regs,
>  	return err;
>  }
>  
> -asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
> +asmlinkage int compat_sys_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct compat_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> @@ -312,8 +313,9 @@ asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
>  	return 0;
>  }
>  
> -asmlinkage int compat_sys_rt_sigreturn(struct pt_regs *regs)
> +asmlinkage int compat_sys_rt_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct compat_rt_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
> index 72981bae10eb..31045f3fed92 100644
> --- a/arch/arm64/kernel/sys.c
> +++ b/arch/arm64/kernel/sys.c
> @@ -48,8 +48,7 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
>  /*
>   * Wrappers to pass the pt_regs argument.
>   */
> -asmlinkage long sys_rt_sigreturn_wrapper(void);
> -#define sys_rt_sigreturn	sys_rt_sigreturn_wrapper
> +asmlinkage long sys_rt_sigreturn(void);
>  #define sys_personality		sys_arm64_personality
>  
>  #undef __SYSCALL
> diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
> index a40b1343b819..1ef103c95410 100644
> --- a/arch/arm64/kernel/sys32.c
> +++ b/arch/arm64/kernel/sys32.c
> @@ -25,8 +25,8 @@
>  #include <linux/compiler.h>
>  #include <linux/syscalls.h>
>  
> -asmlinkage long compat_sys_sigreturn_wrapper(void);
> -asmlinkage long compat_sys_rt_sigreturn_wrapper(void);
> +asmlinkage long compat_sys_sigreturn(void);
> +asmlinkage long compat_sys_rt_sigreturn(void);
>  asmlinkage long compat_sys_statfs64_wrapper(void);
>  asmlinkage long compat_sys_fstatfs64_wrapper(void);
>  asmlinkage long compat_sys_pread64_wrapper(void);
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

> -
>  ENTRY(compat_sys_statfs64_wrapper)
>  	mov	w3, #84
>  	cmp	w1, #88
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 8e624fec4707..caa7a68cf2d2 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -538,8 +538,9 @@ static int restore_sigframe(struct pt_regs *regs,
>  	return err;
>  }
>  
> -asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
> +asmlinkage long sys_rt_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct rt_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index 77b91f478995..cb10588a7cb2 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -282,8 +282,9 @@ static int compat_restore_sigframe(struct pt_regs *regs,
>  	return err;
>  }
>  
> -asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
> +asmlinkage int compat_sys_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct compat_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> @@ -312,8 +313,9 @@ asmlinkage int compat_sys_sigreturn(struct pt_regs *regs)
>  	return 0;
>  }
>  
> -asmlinkage int compat_sys_rt_sigreturn(struct pt_regs *regs)
> +asmlinkage int compat_sys_rt_sigreturn(void)
>  {
> +	struct pt_regs *regs = current_pt_regs();
>  	struct compat_rt_sigframe __user *frame;
>  
>  	/* Always make any pending restarted system calls return -EINTR */
> diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
> index 72981bae10eb..31045f3fed92 100644
> --- a/arch/arm64/kernel/sys.c
> +++ b/arch/arm64/kernel/sys.c
> @@ -48,8 +48,7 @@ SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
>  /*
>   * Wrappers to pass the pt_regs argument.
>   */
> -asmlinkage long sys_rt_sigreturn_wrapper(void);
> -#define sys_rt_sigreturn	sys_rt_sigreturn_wrapper
> +asmlinkage long sys_rt_sigreturn(void);
>  #define sys_personality		sys_arm64_personality
>  
>  #undef __SYSCALL
> diff --git a/arch/arm64/kernel/sys32.c b/arch/arm64/kernel/sys32.c
> index a40b1343b819..1ef103c95410 100644
> --- a/arch/arm64/kernel/sys32.c
> +++ b/arch/arm64/kernel/sys32.c
> @@ -25,8 +25,8 @@
>  #include <linux/compiler.h>
>  #include <linux/syscalls.h>
>  
> -asmlinkage long compat_sys_sigreturn_wrapper(void);
> -asmlinkage long compat_sys_rt_sigreturn_wrapper(void);
> +asmlinkage long compat_sys_sigreturn(void);
> +asmlinkage long compat_sys_rt_sigreturn(void);
>  asmlinkage long compat_sys_statfs64_wrapper(void);
>  asmlinkage long compat_sys_fstatfs64_wrapper(void);
>  asmlinkage long compat_sys_pread64_wrapper(void);
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 11:07     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:30AM +0100, Mark Rutland wrote:
> As a first step towards invoking syscalls with a pt_regs argument,
> convert the raw syscall invocation logic to C. We end up with a bit more
> register shuffling, but the unified invocation logic means we can unify
> the tracing paths, too.
> 
> This only converts the invocation of the syscall. The rest of the
> syscall triage and tracing is left in assembly for now, and will be
> converted in subsequent patches.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/Makefile  |  3 ++-
>  arch/arm64/kernel/entry.S   | 36 ++++++++++--------------------------
>  arch/arm64/kernel/syscall.c | 29 +++++++++++++++++++++++++++++
>  3 files changed, 41 insertions(+), 27 deletions(-)
>  create mode 100644 arch/arm64/kernel/syscall.c
> 
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index bf825f38d206..c22e8ace5ea3 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -18,7 +18,8 @@ arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o		\
>  			   hyp-stub.o psci.o cpu_ops.o insn.o	\
>  			   return_address.o cpuinfo.o cpu_errata.o		\
>  			   cpufeature.o alternative.o cacheinfo.o		\
> -			   smp.o smp_spin_table.o topology.o smccc-call.o
> +			   smp.o smp_spin_table.o topology.o smccc-call.o	\
> +			   syscall.o
>  
>  extra-$(CONFIG_EFI)			:= efi-entry.o
>  
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 08ea3cbfb08f..d6e057500eaf 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -873,7 +873,6 @@ ENDPROC(el0_error)
>   */
>  ret_fast_syscall:
>  	disable_daif
> -	str	x0, [sp, #S_X0]			// returned x0
>  	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
>  	and	x2, x1, #_TIF_SYSCALL_WORK
>  	cbnz	x2, ret_fast_syscall_trace
> @@ -946,15 +945,11 @@ el0_svc_naked:					// compat entry point
>  
>  	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
>  	b.ne	__sys_trace
> -	cmp     wscno, wsc_nr			// check upper syscall limit
> -	b.hs	ni_sys
> -	mask_nospec64 xscno, xsc_nr, x19	// enforce bounds for syscall number
> -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> -	blr	x16				// call sys_* routine
> -	b	ret_fast_syscall
> -ni_sys:
>  	mov	x0, sp
> -	bl	do_ni_syscall
> +	mov	w1, wscno
> +	mov	w2, wsc_nr
> +	mov	x3, stbl
> +	bl	invoke_syscall
>  	b	ret_fast_syscall
>  ENDPROC(el0_svc)
>  
> @@ -971,29 +966,18 @@ __sys_trace:
>  	bl	syscall_trace_enter
>  	cmp	w0, #NO_SYSCALL			// skip the syscall?
>  	b.eq	__sys_trace_return_skipped
> -	mov	wscno, w0			// syscall number (possibly new)
> -	mov	x1, sp				// pointer to regs
> -	cmp	wscno, wsc_nr			// check upper syscall limit
> -	b.hs	__ni_sys_trace
> -	ldp	x0, x1, [sp]			// restore the syscall args
> -	ldp	x2, x3, [sp, #S_X2]
> -	ldp	x4, x5, [sp, #S_X4]
> -	ldp	x6, x7, [sp, #S_X6]
> -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> -	blr	x16				// call sys_* routine
>  
> -__sys_trace_return:
> -	str	x0, [sp, #S_X0]			// save returned x0
> +	mov	x0, sp
> +	mov	w1, wscno
> +	mov	w2, wsc_nr
> +	mov	x3, stbl
> +	bl	invoke_syscall
> +
>  __sys_trace_return_skipped:
>  	mov	x0, sp
>  	bl	syscall_trace_exit
>  	b	ret_to_user
>  
> -__ni_sys_trace:
> -	mov	x0, sp
> -	bl	do_ni_syscall
> -	b	__sys_trace_return
> -

Can you explain why ni_syscall is special here, why __sys_trace_return
existed, and why its disappearance doesn't break anything?

Not saying there's a bug, just that I'm a little confused -- I see no
real reason for ni_syscall being special, and this may be a good
opportunity to decruft it.  (See also comments below.)

>  	.popsection				// .entry.text
>  
>  #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> new file mode 100644
> index 000000000000..58d7569f47df
> --- /dev/null
> +++ b/arch/arm64/kernel/syscall.c
> @@ -0,0 +1,29 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/nospec.h>
> +#include <linux/ptrace.h>
> +
> +long do_ni_syscall(struct pt_regs *regs);
> +
> +typedef long (*syscall_fn_t)(unsigned long, unsigned long,
> +			     unsigned long, unsigned long,
> +			     unsigned long, unsigned long);
> +
> +static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
> +{
> +	regs->regs[0] = syscall_fn(regs->regs[0], regs->regs[1],
> +				   regs->regs[2], regs->regs[3],
> +				   regs->regs[4], regs->regs[5]);
> +}
> +
> +asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
> +			       syscall_fn_t syscall_table[])
> +{
> +	if (scno < sc_nr) {

What if (int)scno < 0?  Should those args both be unsigned ints?

"sc_nr" sounds too much like "syscall number" to me.  Might
"syscall_table_size" be clearer?  Similarly, we could have
"stbl_size" or similar in the asm.  This is purely cosmetic,
though.

> +		syscall_fn_t syscall_fn;
> +		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
> +		__invoke_syscall(regs, syscall_fn);
> +	} else {
> +		regs->regs[0] = do_ni_syscall(regs);

Can we make __invoke_syscall() the universal syscall wrapper, and give
do_ni_syscall() the same interface as any other syscall body?

Then you could factor this as

static syscall_fn_t syscall_fn(syscall_fn_t const syscall_table[],
				unsigned int scno, unsigned int sc_nr)
{
	if (scno >= sc_nr)
		return sys_ni_syscall;

	return syscall_table[array_index_nospec(scno, sc_nr)];
}

...
	__invoke_syscall(regs, syscall_fn(syscall_table, scno, sc_nr));



This is cosmetic too, of course.

do_ni_syscall() should be given a pt_regs-based wrapper like all the
rest.

This is still cosmetic, but reduces unnecessary special-case-ness
for ni_syscall.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 10/18] arm64: convert native/compat syscall entry to C
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 11:07     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:32AM +0100, Mark Rutland wrote:
> Now that the syscall invocation logic is in C, we can migrate the rest
> of the syscall entry logic over, so that the entry assembly needn't look
> at the register values at all.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/entry.S   | 42 ++++--------------------------------------
>  arch/arm64/kernel/syscall.c | 40 ++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 42 insertions(+), 40 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 5c60369b52fc..13afefbf608f 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -690,14 +690,9 @@ el0_sync_compat:
>  	b.ge	el0_dbg
>  	b	el0_inv
>  el0_svc_compat:
> -	/*
> -	 * AArch32 syscall handling
> -	 */
> -	ldr	x16, [tsk, #TSK_TI_FLAGS]	// load thread flags
> -	adrp	stbl, compat_sys_call_table	// load compat syscall table pointer
> -	mov	wscno, w7			// syscall number in w7 (r7)
> -	mov     wsc_nr, #__NR_compat_syscalls
> -	b	el0_svc_naked
> +	mov	x0, sp
> +	bl	el0_svc_compat_handler
> +	b	ret_to_user
>  
>  	.align	6
>  el0_irq_compat:
> @@ -895,37 +890,8 @@ ENDPROC(ret_to_user)
>   */
>  	.align	6
>  el0_svc:
> -	ldr	x16, [tsk, #TSK_TI_FLAGS]	// load thread flags
> -	adrp	stbl, sys_call_table		// load syscall table pointer
> -	mov	wscno, w8			// syscall number in w8
> -	mov	wsc_nr, #__NR_syscalls
> -
> -#ifdef CONFIG_ARM64_SVE
> -alternative_if_not ARM64_SVE
> -	b	el0_svc_naked
> -alternative_else_nop_endif
> -	tbz	x16, #TIF_SVE, el0_svc_naked	// Skip unless TIF_SVE set:
> -	bic	x16, x16, #_TIF_SVE		// discard SVE state
> -	str	x16, [tsk, #TSK_TI_FLAGS]
> -
> -	/*
> -	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
> -	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
> -	 * happens if a context switch or kernel_neon_begin() or context
> -	 * modification (sigreturn, ptrace) intervenes.
> -	 * So, ensure that CPACR_EL1 is already correct for the fast-path case:
> -	 */
> -	mrs	x9, cpacr_el1
> -	bic	x9, x9, #CPACR_EL1_ZEN_EL0EN	// disable SVE for el0
> -	msr	cpacr_el1, x9			// synchronised by eret to el0
> -#endif
> -
> -el0_svc_naked:					// compat entry point
>  	mov	x0, sp
> -	mov	w1, wscno
> -	mov	w2, wsc_nr
> -	mov	x3, stbl
> -	bl	el0_svc_common
> +	bl	el0_svc_handler
>  	b	ret_to_user
>  ENDPROC(el0_svc)
>  
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index 5df857e32b48..4706f841e758 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -6,7 +6,9 @@
>  #include <linux/ptrace.h>
>  
>  #include <asm/daifflags.h>
> +#include <asm/fpsimd.h>
>  #include <asm/thread_info.h>
> +#include <asm/unistd.h>
>  
>  long do_ni_syscall(struct pt_regs *regs);
>  
> @@ -41,8 +43,8 @@ static inline bool has_syscall_work(unsigned long flags)
>  int syscall_trace_enter(struct pt_regs *regs);
>  void syscall_trace_exit(struct pt_regs *regs);
>  
> -asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> -			       syscall_fn_t syscall_table[])
> +static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> +			   syscall_fn_t syscall_table[])
>  {
>  	unsigned long flags = current_thread_info()->flags;
>  
> @@ -79,3 +81,37 @@ asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
>  trace_exit:
>  	syscall_trace_exit(regs);
>  }
> +
> +static inline void sve_user_reset(void)

Static function with no caller...

> +{
> +	if (!system_supports_sve())
> +		return;
> +
> +	/*
> +	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
> +	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
> +	 * happens if a context switch or kernel_neon_begin() or context
> +	 * modification (sigreturn, ptrace) intervenes.
> +	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
> +	 */
> +	if (test_and_clear_thread_flag(TIF_SVE))
> +		sve_user_disable();

sve_user_disable() is already inline, and incorporates the if()
internally via sysreg_clear_set().

So, should this just be

	clear_thread_flag(TIF_SVE);
	sve_user_disable();

> +}
> +
> +extern syscall_fn_t sys_call_table[];
> +
> +asmlinkage void el0_svc_handler(struct pt_regs *regs)
> +{

if (system_supports_sve()) ?

> +	sve_user_disable();

Or should this be replaced by a call to sve_user_reset()?

I suspect the latter, since we do want to be clearing TIF_SVE here too.


[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 11/18] arm64: zero GPRs upon entry from EL0
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 11:07     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:33AM +0100, Mark Rutland wrote:
> We can zero GPRs x0 - x29 upon entry from EL0 to make it harder for
> userspace to control values consumed by speculative gadgets.
> 
> We don't blat x30, since this is stashed much later, and we'll blat it
> before invoking C code.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/entry.S | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 13afefbf608f..4dd529fd03fd 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -62,6 +62,12 @@
>  #endif
>  	.endm
>  
> +	.macro	clear_gp_regs
> +	.irp	n,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29
> +	mov	x\n, xzr
> +	.endr
> +	.endm
> +

Looks OK, but consider moving _for from fpsimdmacros.h to assembler.h
and just writing

 _for n, 0, 29,	mov	x\n, xzr

(could even omit the wrapper macro, since this is a one-liner).

The implementation of _for is a bit gross, but since we already have it,
we might as well use it.

[...]

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 12/18] kernel: add ksys_personality()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 11:08     ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:08 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:46:34AM +0100, Mark Rutland wrote:
> Using this helper allows us to avoid the in-kernel call to the
> sys_personality() syscall. The ksys_ prefix denotes that this function
> is meant as a drop-in replacement for the syscall. In particular, it
> uses the same calling convention as sys_personality().
> 
> This is necessary to enable conversion of arm64's syscall handling to
> use pt_regs wrappers.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Al Viro <viro@zeniv.linux.org.uk>
> Cc: Dominik Brodowski <linux@dominikbrodowski.net>

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
>  include/linux/syscalls.h | 1 +
>  kernel/exec_domain.c     | 7 ++++++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index 70fcda1a9049..6723ea51ec99 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -1148,6 +1148,7 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
>  			      unsigned long prot, unsigned long flags,
>  			      unsigned long fd, unsigned long pgoff);
>  ssize_t ksys_readahead(int fd, loff_t offset, size_t count);
> +unsigned int ksys_personality(unsigned int personality);
>  
>  /*
>   * The following kernel syscall equivalents are just wrappers to fs-internal
> diff --git a/kernel/exec_domain.c b/kernel/exec_domain.c
> index a5697119290e..4ba2b404cba2 100644
> --- a/kernel/exec_domain.c
> +++ b/kernel/exec_domain.c
> @@ -47,7 +47,7 @@ static int __init proc_execdomains_init(void)
>  module_init(proc_execdomains_init);
>  #endif
>  
> -SYSCALL_DEFINE1(personality, unsigned int, personality)
> +unsigned int ksys_personality(unsigned int personality)
>  {
>  	unsigned int old = current->personality;
>  
> @@ -56,3 +56,8 @@ SYSCALL_DEFINE1(personality, unsigned int, personality)
>  
>  	return old;
>  }
> +
> +SYSCALL_DEFINE1(personality, unsigned int, personality)
> +{
> +	return ksys_personality(personality);
> +}
> -- 
> 2.11.0
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 10:08       ` Mark Rutland
@ 2018-05-14 11:20         ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 11:20 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Mon, May 14, 2018 at 11:08:59AM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 11:00:53AM +0100, Dave Martin wrote:
> > On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
> > > -/* Check all the bits are accounted for */
> > > -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
> > > -
> > > +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
> > > +#error "Inconsistent SCTLR_EL2 set/clear bits"
> > > +#endif
> > 
> > Can we have a comment on the != 0xffffffff versus != ~0 here?
> > 
> > The subtle differences in evaluation semantics between #if and
> > other contexts here may well trip people up during maintenance...
> 
> Do you have any suggestion as to the wording?
> 
> I'm happy to add a comment, but I don't really know what to say.


How about the following?

/* Watch out for #if evaluation rules: ~0 is not ~(int)0! */

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14 11:07     ` Dave Martin
@ 2018-05-14 11:41       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 11:41 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 12:07:18PM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 10:46:30AM +0100, Mark Rutland wrote:
> > As a first step towards invoking syscalls with a pt_regs argument,
> > convert the raw syscall invocation logic to C. We end up with a bit more
> > register shuffling, but the unified invocation logic means we can unify
> > the tracing paths, too.
> > 
> > This only converts the invocation of the syscall. The rest of the
> > syscall triage and tracing is left in assembly for now, and will be
> > converted in subsequent patches.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/kernel/Makefile  |  3 ++-
> >  arch/arm64/kernel/entry.S   | 36 ++++++++++--------------------------
> >  arch/arm64/kernel/syscall.c | 29 +++++++++++++++++++++++++++++
> >  3 files changed, 41 insertions(+), 27 deletions(-)
> >  create mode 100644 arch/arm64/kernel/syscall.c
> > 
> > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > index bf825f38d206..c22e8ace5ea3 100644
> > --- a/arch/arm64/kernel/Makefile
> > +++ b/arch/arm64/kernel/Makefile
> > @@ -18,7 +18,8 @@ arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o		\
> >  			   hyp-stub.o psci.o cpu_ops.o insn.o	\
> >  			   return_address.o cpuinfo.o cpu_errata.o		\
> >  			   cpufeature.o alternative.o cacheinfo.o		\
> > -			   smp.o smp_spin_table.o topology.o smccc-call.o
> > +			   smp.o smp_spin_table.o topology.o smccc-call.o	\
> > +			   syscall.o
> >  
> >  extra-$(CONFIG_EFI)			:= efi-entry.o
> >  
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index 08ea3cbfb08f..d6e057500eaf 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -873,7 +873,6 @@ ENDPROC(el0_error)
> >   */
> >  ret_fast_syscall:
> >  	disable_daif
> > -	str	x0, [sp, #S_X0]			// returned x0
> >  	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
> >  	and	x2, x1, #_TIF_SYSCALL_WORK
> >  	cbnz	x2, ret_fast_syscall_trace
> > @@ -946,15 +945,11 @@ el0_svc_naked:					// compat entry point
> >  
> >  	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
> >  	b.ne	__sys_trace
> > -	cmp     wscno, wsc_nr			// check upper syscall limit
> > -	b.hs	ni_sys
> > -	mask_nospec64 xscno, xsc_nr, x19	// enforce bounds for syscall number
> > -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> > -	blr	x16				// call sys_* routine
> > -	b	ret_fast_syscall
> > -ni_sys:
> >  	mov	x0, sp
> > -	bl	do_ni_syscall
> > +	mov	w1, wscno
> > +	mov	w2, wsc_nr
> > +	mov	x3, stbl
> > +	bl	invoke_syscall
> >  	b	ret_fast_syscall
> >  ENDPROC(el0_svc)
> >  
> > @@ -971,29 +966,18 @@ __sys_trace:
> >  	bl	syscall_trace_enter
> >  	cmp	w0, #NO_SYSCALL			// skip the syscall?
> >  	b.eq	__sys_trace_return_skipped
> > -	mov	wscno, w0			// syscall number (possibly new)
> > -	mov	x1, sp				// pointer to regs
> > -	cmp	wscno, wsc_nr			// check upper syscall limit
> > -	b.hs	__ni_sys_trace
> > -	ldp	x0, x1, [sp]			// restore the syscall args
> > -	ldp	x2, x3, [sp, #S_X2]
> > -	ldp	x4, x5, [sp, #S_X4]
> > -	ldp	x6, x7, [sp, #S_X6]
> > -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> > -	blr	x16				// call sys_* routine
> >  
> > -__sys_trace_return:
> > -	str	x0, [sp, #S_X0]			// save returned x0
> > +	mov	x0, sp
> > +	mov	w1, wscno
> > +	mov	w2, wsc_nr
> > +	mov	x3, stbl
> > +	bl	invoke_syscall
> > +
> >  __sys_trace_return_skipped:
> >  	mov	x0, sp
> >  	bl	syscall_trace_exit
> >  	b	ret_to_user
> >  
> > -__ni_sys_trace:
> > -	mov	x0, sp
> > -	bl	do_ni_syscall
> > -	b	__sys_trace_return
> > -
> 
> Can you explain why ni_syscall is special here, 

This is for out-of-range syscall numbers; instances of ni_syscall in the
syscall table are handled by the regular path. When the syscall number
is out of range, we can't index the syscall table, and have to call
do_ni_syscall() directly.

The C invoke_syscall() wrapper handles that case internally so that we
don't have to open-code it everywhere.

> why __sys_trace_return existed, 

The __sys_trace_return label existed so that the special __ni_sys_trace
path could return into a common tracing return path.

> and why its disappearance doesn't break anything?

Now that invoke_syscall() handles out-of-range syscall numbers, and we
can remove the __ni_sys_trace path, nothing branches to
__sys_trace_return.

Only the label has been removed, not the usual return path.

> Not saying there's a bug, just that I'm a little confused -- I see no
> real reason for ni_syscall being special, and this may be a good
> opportunity to decruft it.  (See also comments below.)

Hopefully the above clarifies things?

I've updated the commit message with a description.

[...]

> > +asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
> > +			       syscall_fn_t syscall_table[])
> > +{
> > +	if (scno < sc_nr) {
> 
> What if (int)scno < 0?  Should those args both be unsigned ints?

Yes, they should -- I've fixed that up locally.

That is a *very* good point, thanks!

> "sc_nr" sounds too much like "syscall number" to me.  Might
> "syscall_table_size" be clearer?  Similarly, we could have
> "stbl_size" or similar in the asm.  This is purely cosmetic,
> though.

I'd tried to stick to the naming used in assembly to keep the conversion
clearer for those familiar with the asm.

I agree the names aren't great.

> > +		syscall_fn_t syscall_fn;
> > +		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
> > +		__invoke_syscall(regs, syscall_fn);
> > +	} else {
> > +		regs->regs[0] = do_ni_syscall(regs);
> 
> Can we make __invoke_syscall() the universal syscall wrapper, and give
> do_ni_syscall() the same interface as any other syscall body?

Not at this point in time, since the prototype (in core code) differs.

I agree that would be nicer, but there are a number of complications;
more details below.

> Then you could factor this as
> 
> static syscall_fn_t syscall_fn(syscall_fn_t const syscall_table[],
> 				(unsigned) int scno, (unsigned) int sc_nr)
> {
> 	if (scno >= sc_nr)
> 		return sys_ni_syscall;
> 
> 	return syscall_table[array_index_nospec(scno, sc_nr)];
> }
> 
> ...
> 	__invoke_syscall(regs, syscall_fn(syscall_table, scno, sc_nr));
> 
> 
> 
> This is cosmetic too, of course.
> 
> do_ni_syscall() should be given a pt_regs-based wrapper like all the
> rest.

I agree it would be nicer if it had a wrapper that took a pt_regs, even
if it does nothing with it.

We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
need a ksys_ni_syscall() for our traps.c logic, and adding this
uniformly would involve some arch-specific rework for x86, too, so I
decided it was not worth the effort.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 11:20         ` Dave Martin
@ 2018-05-14 11:56           ` Robin Murphy
  -1 siblings, 0 replies; 126+ messages in thread
From: Robin Murphy @ 2018-05-14 11:56 UTC (permalink / raw)
  To: Dave Martin, Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On 14/05/18 12:20, Dave Martin wrote:
> On Mon, May 14, 2018 at 11:08:59AM +0100, Mark Rutland wrote:
>> On Mon, May 14, 2018 at 11:00:53AM +0100, Dave Martin wrote:
>>> On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
>>>> -/* Check all the bits are accounted for */
>>>> -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
>>>> -
>>>> +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
>>>> +#error "Inconsistent SCTLR_EL2 set/clear bits"
>>>> +#endif
>>>
>>> Can we have a comment on the != 0xffffffff versus != ~0 here?
>>>
>>> The subtle differences in evaluation semantics between #if and
>>> other contexts here may well trip people up during maintenance...
>>
>> Do you have any suggestion as to the wording?
>>
>> I'm happy to add a comment, but I don't really know what to say.
> 
> 
> How about the following?
> 
> /* Watch out for #if evaluation rules: ~0 is not ~(int)0! */

Or, more formally, perhaps something even less vague like "Note that in 
preprocessor arithmetic these constants are effectively of type 
intmax_t, which is 64-bit, thus ~0 is not what we want."

Robin.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 10/18] arm64: convert native/compat syscall entry to C
  2018-05-14 11:07     ` Dave Martin
@ 2018-05-14 11:58       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 11:58 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 12:07:30PM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 10:46:32AM +0100, Mark Rutland wrote:

> > diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> > index 5df857e32b48..4706f841e758 100644
> > --- a/arch/arm64/kernel/syscall.c
> > +++ b/arch/arm64/kernel/syscall.c
> > @@ -6,7 +6,9 @@
> >  #include <linux/ptrace.h>
> >  
> >  #include <asm/daifflags.h>
> > +#include <asm/fpsimd.h>
> >  #include <asm/thread_info.h>
> > +#include <asm/unistd.h>
> >  
> >  long do_ni_syscall(struct pt_regs *regs);
> >  
> > @@ -41,8 +43,8 @@ static inline bool has_syscall_work(unsigned long flags)
> >  int syscall_trace_enter(struct pt_regs *regs);
> >  void syscall_trace_exit(struct pt_regs *regs);
> >  
> > -asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> > -			       syscall_fn_t syscall_table[])
> > +static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> > +			   syscall_fn_t syscall_table[])
> >  {
> >  	unsigned long flags = current_thread_info()->flags;
> >  
> > @@ -79,3 +81,37 @@ asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> >  trace_exit:
> >  	syscall_trace_exit(regs);
> >  }
> > +
> > +static inline void sve_user_reset(void)
> 
> Static function with no caller...

Ugh, this was intended to be called below in el0_svc_handler().

> > +{
> > +	if (!system_supports_sve())
> > +		return;
> > +
> > +	/*
> > +	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
> > +	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
> > +	 * happens if a context switch or kernel_neon_begin() or context
> > +	 * modification (sigreturn, ptrace) intervenes.
> > +	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
> > +	 */
> > +	if (test_and_clear_thread_flag(TIF_SVE))
> > +		sve_user_disable();
> 
> sve_user_disable() is already inline, and incorporates the if()
> internally via sysreg_clear_set().
> 
> So, should this just be
> 
> 	clear_thread_flag(TIF_SVE);
> 	sve_user_disable();

Sure. That does mean we'll unconditionally read cpacr_el1, but I assume
you're happy with that. I'll note the difference in the commit message.

> > +}
> > +
> > +extern syscall_fn_t sys_call_table[];
> > +
> > +asmlinkage void el0_svc_handler(struct pt_regs *regs)
> > +{
> 
> if (system_supports_sve()) ?
> 
> > +	sve_user_disable();
> 
> Or should this be replaced by a call to sve_user_reset()?
> 
> I suspect the latter, since we do want to be clearing TIF_SVE here too.

Yes, this was meant to be sve_user_reset().

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 11:56           ` Robin Murphy
@ 2018-05-14 12:06             ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 12:06 UTC (permalink / raw)
  To: Robin Murphy
  Cc: Dave Martin, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel,
	linux-arm-kernel

On Mon, May 14, 2018 at 12:56:09PM +0100, Robin Murphy wrote:
> On 14/05/18 12:20, Dave Martin wrote:
> > On Mon, May 14, 2018 at 11:08:59AM +0100, Mark Rutland wrote:
> > > On Mon, May 14, 2018 at 11:00:53AM +0100, Dave Martin wrote:
> > > > On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
> > > > > -/* Check all the bits are accounted for */
> > > > > -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
> > > > > -
> > > > > +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
> > > > > +#error "Inconsistent SCTLR_EL2 set/clear bits"
> > > > > +#endif
> > > > 
> > > > Can we have a comment on the != 0xffffffff versus != ~0 here?
> > > > 
> > > > The subtle differences in evaluation semantics between #if and
> > > > other contexts here may well trip people up during maintenance...
> > > 
> > > Do you have any suggestion as to the wording?
> > > 
> > > I'm happy to add a comment, but I don't really know what to say.
> > 
> > 
> > How about the following?
> > 
> > /* Watch out for #if evaluation rules: ~0 is not ~(int)0! */
> 
> Or, more formally, perhaps something even less vague like "Note that in
> preprocessor arithmetic these constants are effectively of type intmax_t,
> which is 64-bit, thus ~0 is not what we want."

I'll drop something in the commit message to that effect, rather than a
comment.

A comment will end up terse and vague or large and bloatsome (and
redundant as we have this pattern twice).

Anyone attempting to "clean" this up will find things break, and they can
look at the git log to find out why it is the way it is...

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 12/18] kernel: add ksys_personality()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 12:07     ` Christoph Hellwig
  -1 siblings, 0 replies; 126+ messages in thread
From: Christoph Hellwig @ 2018-05-14 12:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux, linux-fsdevel, marc.zyngier, viro,
	will.deacon

On Mon, May 14, 2018 at 10:46:34AM +0100, Mark Rutland wrote:
> Using this helper allows us to avoid the in-kernel call to the
> sys_personality() syscall. The ksys_ prefix denotes that this function
> is meant as a drop-in replacement for the syscall. In particular, it
> uses the same calling convention as sys_personality().
> 
> This is necessary to enable conversion of arm64's syscall handling to
> use pt_regs wrappers.

Please just open-code the trivial sys_personality logic instead.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 17/18] arm64: convert compat wrappers to C
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 12:10     ` Christoph Hellwig
  -1 siblings, 0 replies; 126+ messages in thread
From: Christoph Hellwig @ 2018-05-14 12:10 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux, linux-fsdevel, marc.zyngier, viro,
	will.deacon

> +COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname,
> +		       compat_size_t, sz, struct compat_statfs64 __user *, buf)
> +{
> +	if (sz == 88)
> +		sz = 84;
> +
> +	return kcompat_sys_statfs64(pathname, sz, buf);

This really needs a comment, and it looks very obviously bogus.
In case it isn't it needs a very good explanation.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 12:06             ` Mark Rutland
@ 2018-05-14 12:41               ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 12:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Robin Murphy, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel,
	linux-arm-kernel

On Mon, May 14, 2018 at 01:06:10PM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 12:56:09PM +0100, Robin Murphy wrote:
> > On 14/05/18 12:20, Dave Martin wrote:
> > > On Mon, May 14, 2018 at 11:08:59AM +0100, Mark Rutland wrote:
> > > > On Mon, May 14, 2018 at 11:00:53AM +0100, Dave Martin wrote:
> > > > > On Mon, May 14, 2018 at 10:46:24AM +0100, Mark Rutland wrote:
> > > > > > -/* Check all the bits are accounted for */
> > > > > > -#define SCTLR_EL2_BUILD_BUG_ON_MISSING_BITS	BUILD_BUG_ON((SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != ~0)
> > > > > > -
> > > > > > +#if (SCTLR_EL2_SET ^ SCTLR_EL2_CLEAR) != 0xffffffff
> > > > > > +#error "Inconsistent SCTLR_EL2 set/clear bits"
> > > > > > +#endif
> > > > > 
> > > > > Can we have a comment on the != 0xffffffff versus != ~0 here?
> > > > > 
> > > > > The subtle differences in evaluation semantics between #if and
> > > > > other contexts here may well trip people up during maintenance...
> > > > 
> > > > Do you have any suggestion as to the wording?
> > > > 
> > > > I'm happy to add a comment, but I don't really know what to say.
> > > 
> > > 
> > > How about the following?
> > > 
> > > /* Watch out for #if evaluation rules: ~0 is not ~(int)0! */
> > 
> > Or, more formally, perhaps something even less vague like "Note that in
> > preprocessor arithmetic these constants are effectively of type intmax_t,
> > which is 64-bit, thus ~0 is not what we want."
> 
> I'll drop something in the commit message to that effect, rather than a
> comment.
> 
> A comment will end up terse and vague or large and bloatsome (and
> redundant as we have this pattern twice).
> 
> Anyone attempting to "clean" this up will find things break, and they can
> look at the git log to find out why it is the way it is...

Fair enough.  So long as we say something somewhere, that's
sufficient for me.

With that,

Reviewed-by: Dave Martin <Dave.Martin@arm.com>

(as previously stated).

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 17/18] arm64: convert compat wrappers to C
  2018-05-14 12:10     ` Christoph Hellwig
@ 2018-05-14 12:43       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 12:43 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux, linux-fsdevel, marc.zyngier, viro,
	will.deacon

On Mon, May 14, 2018 at 05:10:22AM -0700, Christoph Hellwig wrote:
> > +COMPAT_SYSCALL_DEFINE3(aarch32_statfs64, const char __user *, pathname,
> > +		       compat_size_t, sz, struct compat_statfs64 __user *, buf)
> > +{
> > +	if (sz == 88)
> > +		sz = 84;
> > +
> > +	return kcompat_sys_statfs64(pathname, sz, buf);
> 
> This really needs a comment, and it looks very obviously bogus.
> In case it isn't it needs a very good explanation.

Per arch/arm/kernel/sys_oabi-compat.c:

    struct statfs64 has extra padding with EABI growing its size from
    84 to 88.  This struct is now __attribute__((packed,aligned(4)))
    with a small assembly wrapper to force the sz argument to 84 if it is 88
    to avoid copying the extra padding over user space unexpecting it.

This is the behaviour for both EABI and OABI on 32-bit arm, and thus we
must do the same for compat (and have done since day one of arm64).

I'll add a comment.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14 11:41       ` Mark Rutland
@ 2018-05-14 12:53         ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 12:53 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Mon, May 14, 2018 at 12:41:10PM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 12:07:18PM +0100, Dave Martin wrote:
> > On Mon, May 14, 2018 at 10:46:30AM +0100, Mark Rutland wrote:
> > > As a first step towards invoking syscalls with a pt_regs argument,
> > > convert the raw syscall invocation logic to C. We end up with a bit more
> > > register shuffling, but the unified invocation logic means we can unify
> > > the tracing paths, too.
> > > 
> > > This only converts the invocation of the syscall. The rest of the
> > > syscall triage and tracing is left in assembly for now, and will be
> > > converted in subsequent patches.
> > > 
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > > ---
> > >  arch/arm64/kernel/Makefile  |  3 ++-
> > >  arch/arm64/kernel/entry.S   | 36 ++++++++++--------------------------
> > >  arch/arm64/kernel/syscall.c | 29 +++++++++++++++++++++++++++++
> > >  3 files changed, 41 insertions(+), 27 deletions(-)
> > >  create mode 100644 arch/arm64/kernel/syscall.c
> > > 
> > > diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> > > index bf825f38d206..c22e8ace5ea3 100644
> > > --- a/arch/arm64/kernel/Makefile
> > > +++ b/arch/arm64/kernel/Makefile
> > > @@ -18,7 +18,8 @@ arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o		\
> > >  			   hyp-stub.o psci.o cpu_ops.o insn.o	\
> > >  			   return_address.o cpuinfo.o cpu_errata.o		\
> > >  			   cpufeature.o alternative.o cacheinfo.o		\
> > > -			   smp.o smp_spin_table.o topology.o smccc-call.o
> > > +			   smp.o smp_spin_table.o topology.o smccc-call.o	\
> > > +			   syscall.o
> > >  
> > >  extra-$(CONFIG_EFI)			:= efi-entry.o
> > >  
> > > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > > index 08ea3cbfb08f..d6e057500eaf 100644
> > > --- a/arch/arm64/kernel/entry.S
> > > +++ b/arch/arm64/kernel/entry.S
> > > @@ -873,7 +873,6 @@ ENDPROC(el0_error)
> > >   */
> > >  ret_fast_syscall:
> > >  	disable_daif
> > > -	str	x0, [sp, #S_X0]			// returned x0
> > >  	ldr	x1, [tsk, #TSK_TI_FLAGS]	// re-check for syscall tracing
> > >  	and	x2, x1, #_TIF_SYSCALL_WORK
> > >  	cbnz	x2, ret_fast_syscall_trace
> > > @@ -946,15 +945,11 @@ el0_svc_naked:					// compat entry point
> > >  
> > >  	tst	x16, #_TIF_SYSCALL_WORK		// check for syscall hooks
> > >  	b.ne	__sys_trace
> > > -	cmp     wscno, wsc_nr			// check upper syscall limit
> > > -	b.hs	ni_sys
> > > -	mask_nospec64 xscno, xsc_nr, x19	// enforce bounds for syscall number
> > > -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> > > -	blr	x16				// call sys_* routine
> > > -	b	ret_fast_syscall
> > > -ni_sys:
> > >  	mov	x0, sp
> > > -	bl	do_ni_syscall
> > > +	mov	w1, wscno
> > > +	mov	w2, wsc_nr
> > > +	mov	x3, stbl
> > > +	bl	invoke_syscall
> > >  	b	ret_fast_syscall
> > >  ENDPROC(el0_svc)
> > >  
> > > @@ -971,29 +966,18 @@ __sys_trace:
> > >  	bl	syscall_trace_enter
> > >  	cmp	w0, #NO_SYSCALL			// skip the syscall?
> > >  	b.eq	__sys_trace_return_skipped
> > > -	mov	wscno, w0			// syscall number (possibly new)
> > > -	mov	x1, sp				// pointer to regs
> > > -	cmp	wscno, wsc_nr			// check upper syscall limit
> > > -	b.hs	__ni_sys_trace
> > > -	ldp	x0, x1, [sp]			// restore the syscall args
> > > -	ldp	x2, x3, [sp, #S_X2]
> > > -	ldp	x4, x5, [sp, #S_X4]
> > > -	ldp	x6, x7, [sp, #S_X6]
> > > -	ldr	x16, [stbl, xscno, lsl #3]	// address in the syscall table
> > > -	blr	x16				// call sys_* routine
> > >  
> > > -__sys_trace_return:
> > > -	str	x0, [sp, #S_X0]			// save returned x0
> > > +	mov	x0, sp
> > > +	mov	w1, wscno
> > > +	mov w2, wsc_nr
> > > +	mov	x3, stbl
> > > +	bl	invoke_syscall
> > > +
> > >  __sys_trace_return_skipped:
> > >  	mov	x0, sp
> > >  	bl	syscall_trace_exit
> > >  	b	ret_to_user
> > >  
> > > -__ni_sys_trace:
> > > -	mov	x0, sp
> > > -	bl	do_ni_syscall
> > > -	b	__sys_trace_return
> > > -
> > 
> > Can you explain why ni_syscall is special here, 
> 
> This is for out-of-range syscall numbers, instances of ni_syscall in the
> syscall table are handled by the regular path. When the syscall number
> is out-of-range, we can't index the syscall table, and have to call
> ni_sys directly.
> 
> The C invoke_syscall() wrapper handles that case internally so that we
> don't have to open-code it everywhere.
> 
> > why __sys_trace_return existed, 
> 
> The __sys_trace_return label existed so that the special __ni_sys_trace
> path could return into a common tracing return path.
> 
> > and why its disappearance doesn't break anything?
> 
> Now that invoke_syscall() handles out-of-range syscall numbers, and we
> can remove the __ni_sys_trace path, nothing branches to
> __sys_trace_return.
> 
> Only the label has been removed, not the usual return path.

OK, I think I understand.  I think the name "__sys_trace_return" was
confusing me, as if this was something special that only relates to the
ni_syscall case.  If it was only ever intended as the merge point for
those two paths, then I can see that merging the paths for real enables
us to get rid of it.

> > Not saying there's a bug, just that I'm a little confused -- I see no
> > real reason for ni_syscall being special, and this may be a good
> > opportunity to decruft it.  (See also comments below.)
> 
> Hopefully the above clarifies things?

Yes, I think so.

> I've updated the commit message with a description.
> 
> [...]
> 
> > > +asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
> > > +			       syscall_fn_t syscall_table[])
> > > +{
> > > +	if (scno < sc_nr) {
> > 
> > What if (int)scno < 0?  Should those args both be unsigned ints?
> 
> Yes, they should -- I've fixed that up locally.
> 
> That is a *very* good point, thanks!
> 
> > "sc_nr" sounds too much like "syscall number" to me.  Might
> > "syscall_table_size" be clearer?  Similarly, we could have
> > "stbl_size" or similar in the asm.  This is purely cosmetic,
> > though.
> 
> I'd tried to stick to the naming used in assembly to keep the conversion
> clearer for those familiar with the asm.
> 
> I agree the names aren't great.

Not a big deal.  If you feel you would like to rename them though, I
won't argue with it ;)

> > > +		syscall_fn_t syscall_fn;
> > > +		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
> > > +		__invoke_syscall(regs, syscall_fn);
> > > +	} else {
> > > +		regs->regs[0] = do_ni_syscall(regs);
> > 
> > Can we make __invoke_syscall() the universal syscall wrapper, and give
> > do_ni_syscall() the same interface as any other syscall body?
> 
> Not at this point in time, since the prototype (in core code) differs.
> 
> I agree that would be nicer, but there are a number of complications;
> more details below.
> 
> > Then you could factor this as
> > 
> > static syscall_fn_t syscall_fn(syscall_fn_t const syscall_table[],
> > 				(unsigned) int scno, (unsigned) int sc_nr)
> > {
> > 	if (scno >= sc_nr)
> > 		return sys_ni_syscall;
> > 
> > 	return syscall_table[array_index_nospec(scno, sc_nr)];
> > }
> > 
> > ...
> > 	__invoke_syscall(regs, syscall_fn(syscall_table, scno, sc_nr));
> > 
> > 
> > 
> > This is cosmetic too, of course.
> > 
> > do_ni_syscall() should be given a pt_regs-based wrapper like all the
> > rest.
> 
> I agree it would be nicer if it had a wrapper that took a pt_regs, even
> if it does nothing with it.
> 
> We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
> need a ksys_ni_syscall() for our traps.c logic, and adding this
> uniformly would involve some arch-specific rework for x86, too, so I
> decided it was not worth the effort.

Does allowing error injection for ni_syscall actually matter?  Error
injection is an ABI break by itself.  It would arguably be a bit
strange though.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h>
  2018-05-14 12:41               ` Dave Martin
@ 2018-05-14 13:10                 ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 13:10 UTC (permalink / raw)
  To: Dave Martin
  Cc: Robin Murphy, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel,
	linux-arm-kernel

On Mon, May 14, 2018 at 01:41:23PM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 01:06:10PM +0100, Mark Rutland wrote:
> > On Mon, May 14, 2018 at 12:56:09PM +0100, Robin Murphy wrote:
> > > On 14/05/18 12:20, Dave Martin wrote:
> > > > How about the following?
> > > > 
> > > > /* Watch out for #if evaluation rules: ~0 is not ~(int)0! */
> > > 
> > > Or, more formally, perhaps something even less vague like "Note that in
> > > preprocessor arithmetic these constants are effectively of type intmax_t,
> > > which is 64-bit, thus ~0 is not what we want."
> > 
> > I'll drop something in the commit message to that effect, rather than a
> > comment.
> > 
> > A comment will end up terse and vague or large and bloatsome (and
> > redundant as we have this pattern twice).
> > 
> > Anyone attempting to "clean" this up will find things break, and they can
> > look at the git log to find out why it is the way it is...
> 
> Fair enough.  So long as we say something somewhere, that's
> sufficient for me.
> 
> With that,
> 
> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
> 
> (as previously stated).

Cheers!

Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 10/18] arm64: convert native/compat syscall entry to C
  2018-05-14 11:58       ` Mark Rutland
@ 2018-05-14 14:43         ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-14 14:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Mon, May 14, 2018 at 12:58:05PM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 12:07:30PM +0100, Dave Martin wrote:
> > On Mon, May 14, 2018 at 10:46:32AM +0100, Mark Rutland wrote:
> 
> > > diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> > > index 5df857e32b48..4706f841e758 100644
> > > --- a/arch/arm64/kernel/syscall.c
> > > +++ b/arch/arm64/kernel/syscall.c
> > > @@ -6,7 +6,9 @@
> > >  #include <linux/ptrace.h>
> > >  
> > >  #include <asm/daifflags.h>
> > > +#include <asm/fpsimd.h>
> > >  #include <asm/thread_info.h>
> > > +#include <asm/unistd.h>
> > >  
> > >  long do_ni_syscall(struct pt_regs *regs);
> > >  
> > > @@ -41,8 +43,8 @@ static inline bool has_syscall_work(unsigned long flags)
> > >  int syscall_trace_enter(struct pt_regs *regs);
> > >  void syscall_trace_exit(struct pt_regs *regs);
> > >  
> > > -asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> > > -			       syscall_fn_t syscall_table[])
> > > +static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> > > +			   syscall_fn_t syscall_table[])
> > >  {
> > >  	unsigned long flags = current_thread_info()->flags;
> > >  
> > > @@ -79,3 +81,37 @@ asmlinkage void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
> > >  trace_exit:
> > >  	syscall_trace_exit(regs);
> > >  }
> > > +
> > > +static inline void sve_user_reset(void)
> > 
> > Static function with no caller...
> 
> Ugh, this was intended to be called below in el0_svc_handler().
> 
> > > +{
> > > +	if (!system_supports_sve())
> > > +		return;
> > > +
> > > +	/*
> > > +	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
> > > +	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
> > > +	 * happens if a context switch or kernel_neon_begin() or context
> > > +	 * modification (sigreturn, ptrace) intervenes.
> > > +	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
> > > +	 */
> > > +	if (test_and_clear_thread_flag(TIF_SVE))
> > > +		sve_user_disable();
> > 
> > sve_user_disable() is already inline, and incorporates the if()
> > internally via sysreg_clear_set().
> > 
> > So, should this just be
> > 
> > 	clear_thread_flag(TIF_SVE);
> > 	sve_user_disable();
> 
> Sure. That does mean we'll unconditionally read cpacr_el1, but I assume
> you're happy with that. I'll note the difference in the commit message.

This is what the code does today, conditioned on system_supports_sve().

I'm assuming that reading CPACR_EL1 is cheap ... or have you come across
counterexamples to that?

> > > +}
> > > +
> > > +extern syscall_fn_t sys_call_table[];
> > > +
> > > +asmlinkage void el0_svc_handler(struct pt_regs *regs)
> > > +{
> > 
> > if (system_supports_sve()) ?
> > 
> > > +	sve_user_disable();
> > 
> > Or should this be replaced by a call to sve_user_reset()?
> > 
> > I suspect the latter, since we do want to be clearing TIF_SVE here too.
> 
> Yes, this was meant to be sve_user_reset().

OK.  Just to be clear, I think there should be a system_supports_sve()
check here (in case that wasn't obvious from my previous reply).

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 10/18] arm64: convert native/compat syscall entry to C
  2018-05-14 14:43         ` Dave Martin
@ 2018-05-14 15:01           ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 15:01 UTC (permalink / raw)
  To: Dave Martin
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Mon, May 14, 2018 at 03:43:36PM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 12:58:05PM +0100, Mark Rutland wrote:
> > On Mon, May 14, 2018 at 12:07:30PM +0100, Dave Martin wrote:
> > > On Mon, May 14, 2018 at 10:46:32AM +0100, Mark Rutland wrote:
> > > > +{
> > > > +	if (!system_supports_sve())
> > > > +		return;
> > > > +
> > > > +	/*
> > > > +	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
> > > > +	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
> > > > +	 * happens if a context switch or kernel_neon_begin() or context
> > > > +	 * modification (sigreturn, ptrace) intervenes.
> > > > +	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
> > > > +	 */
> > > > +	if (test_and_clear_thread_flag(TIF_SVE))
> > > > +		sve_user_disable();
> > > 
> > > sve_user_disable() is already inline, and incorporates the if()
> > > internally via sysreg_clear_set().
> > > 
> > > So, should this just be
> > > 
> > > 	clear_thread_flag(TIF_SVE);
> > > 	sve_user_disable();
> > 
> > Sure. That does mean we'll unconditionally read cpacr_el1, but I assume
> > you're happy with that. I'll note the difference in the commit message.
> 
> This is what the code does today, conditioned on system_supports_sve().
> 
> I'm assuming that reading CPACR_EL1 is cheap ... or have you come across
> counterexamples to that?

I have no data either way. :)

> > > > +}
> > > > +
> > > > +extern syscall_fn_t sys_call_table[];
> > > > +
> > > > +asmlinkage void el0_svc_handler(struct pt_regs *regs)
> > > > +{
> > > 
> > > if (system_supports_sve()) ?
> > > 
> > > > +	sve_user_disable();
> > > 
> > > Or should this be replaced by a call to sve_user_reset()?
> > > 
> > > I suspect the latter, since we do want to be clearing TIF_SVE here too.
> > 
> > Yes, this was meant to be sve_user_reset().
> 
> OK.  Just to be clear, I think there should be a system_supports_sve()
> check here (in case that wasn't obvious from my previous reply).

I understood that; the check is inside sve_user_reset(), which I had
meant to call here.

With your above comments, I now have the following:

static inline void sve_user_reset(void)
{
	if (!system_supports_sve())
		return;

	/*
	 * task_fpsimd_load() won't be called to update CPACR_EL1 in
	 * ret_to_user unless TIF_FOREIGN_FPSTATE is still set, which only
	 * happens if a context switch or kernel_neon_begin() or context
	 * modification (sigreturn, ptrace) intervenes.
	 * So, ensure that CPACR_EL1 is already correct for the fast-path case.
	 */
	clear_thread_flag(TIF_SVE);
	sve_user_disable();
}

asmlinkage void el0_svc_handler(struct pt_regs *regs)
{
	sve_user_reset();
	el0_svc_common(regs, regs->regs[8], __NR_syscalls, sys_call_table);
}

... which I think alleviates that concern?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64()
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 17:14     ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-14 17:14 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: linux-kernel, catalin.marinas, dave.martin, james.morse, linux,
	linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 10:46:35AM +0100, Mark Rutland wrote:
> Using this helper allows us to avoid the in-kernel calls to the
> compat_sys_{f,}statfs64() sycalls, as are necessary for parameter
> mangling in arm64's compat handling.
> 
> Following the example of ksys_* functions, kcompat_sys_* functions are
> intended to be a drop-in replacement for their compat_sys_*
> counterparts, with the same calling convention.
> 
> This is necessary to enable conversion of arm64's syscall handling to
> use pt_regs wrappers.

> diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> index 6723ea51ec99..e0bf3e4bb897 100644
> --- a/include/linux/syscalls.h
> +++ b/include/linux/syscalls.h
> @@ -59,6 +59,7 @@ struct tms;
>  struct utimbuf;
>  struct mq_attr;
>  struct compat_stat;
> +struct compat_statfs64;
>  struct compat_timeval;
>  struct robust_list_head;
>  struct getcpu_cache;
> @@ -1150,6 +1151,13 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
>  ssize_t ksys_readahead(int fd, loff_t offset, size_t count);
>  unsigned int ksys_personality(unsigned int personality);
>  
> +#ifdef CONFIG_COMPAT
> +int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz,
> +		     struct compat_statfs64 __user * buf);
> +int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
> +			  struct compat_statfs64 __user * buf);
> +#endif

I've moved these to <linux/compat.h>, so that they live with the rest of
the compat syscall stuff. That should avoid build failures the kbuild
test robot picked up where compat_size_t wasn't defined.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 18:00     ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-14 18:00 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

> +static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
> +{
> +	regs->regs[0] = syscall_fn(regs->regs[0], regs->regs[1],
> +				   regs->regs[2], regs->regs[3],
> +				   regs->regs[4], regs->regs[5]);
> +}

Any specific reason to have this in a separate function? This seems to be
called only from one instance, namely

> +asmlinkage void invoke_syscall(struct pt_regs *regs, int scno, int sc_nr,
> +			       syscall_fn_t syscall_table[])
> +{
> +	if (scno < sc_nr) {
> +		syscall_fn_t syscall_fn;
> +		syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
> +		__invoke_syscall(regs, syscall_fn);
> +	} else {
> +		regs->regs[0] = do_ni_syscall(regs);
> +	}
> +}

and result in a one-liner in the last patch of the series.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14 11:41       ` Mark Rutland
@ 2018-05-14 20:24         ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-14 20:24 UTC (permalink / raw)
  To: Mark Rutland, Dave Martin
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 12:41:10PM +0100, Mark Rutland wrote:
> I agree it would be nicer if it had a wrapper that took a pt_regs, even
> if it does nothing with it.
> 
> We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
> need a ksys_ni_syscall() for our traps.c logic, and adding this
> uniformly would involve some arch-specific rework for x86, too, so I
> decided it was not worth the effort.

Couldn't you just open-code the "return -ENOSYS;" in traps.c? Error
injection has no reasonable stable ABI/API expectations, so that's not a
show-stopper either.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64()
  2018-05-14 17:14     ` Mark Rutland
@ 2018-05-14 20:34       ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-14 20:34 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 06:14:28PM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 10:46:35AM +0100, Mark Rutland wrote:
> > Using this helper allows us to avoid the in-kernel calls to the
> > compat_sys_{f,}statfs64() syscalls, as are necessary for parameter
> > mangling in arm64's compat handling.
> > 
> > Following the example of ksys_* functions, kcompat_sys_* functions are
> > intended to be a drop-in replacement for their compat_sys_*
> > counterparts, with the same calling convention.
> > 
> > This is necessary to enable conversion of arm64's syscall handling to
> > use pt_regs wrappers.
> 
> > diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
> > index 6723ea51ec99..e0bf3e4bb897 100644
> > --- a/include/linux/syscalls.h
> > +++ b/include/linux/syscalls.h
> > @@ -59,6 +59,7 @@ struct tms;
> >  struct utimbuf;
> >  struct mq_attr;
> >  struct compat_stat;
> > +struct compat_statfs64;
> >  struct compat_timeval;
> >  struct robust_list_head;
> >  struct getcpu_cache;
> > @@ -1150,6 +1151,13 @@ unsigned long ksys_mmap_pgoff(unsigned long addr, unsigned long len,
> >  ssize_t ksys_readahead(int fd, loff_t offset, size_t count);
> >  unsigned int ksys_personality(unsigned int personality);
> >  
> > +#ifdef CONFIG_COMPAT
> > +int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz,
> > +		     struct compat_statfs64 __user * buf);
> > +int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
> > +			  struct compat_statfs64 __user * buf);
> > +#endif
> 
> I've moved these to <linux/compat.h>, so that they live with the rest of
> the compat syscall stuff. That should avoid build failures the kbuild
> test robot picked up where compat_size_t wasn't defined.

Please add a comment there, similar to what is in syscalls.h:

	/*
	 * Kernel code should not call syscalls (i.e., sys_xyzyyz()) directly.
	 * Instead, use one of the functions which work equivalently, such as
	 * the ksys_xyzyyz() functions prototyped below.
	 */

Once you have done so, feel free to add my

	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>

tag to this patch.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 18/18] arm64: implement syscall wrappers
  2018-05-14  9:46   ` Mark Rutland
@ 2018-05-14 20:57     ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-14 20:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 10:46:40AM +0100, Mark Rutland wrote:
> Note that we play games with sys_ni_syscall(). It can't be defined with
> SYSCALL_DEFINE0() because we must avoid the possibility of error
> injection. Additionally, there are a couple of locations where we need
> to call it from C code, and we don't (currently) have a
> ksys_ni_syscall().  While it has no wrapper, passing in a redundant
> pt_regs pointer is benign per the AAPCS.
> 
> When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for
> sys_ni_syscall(). Since we need to treat it differently for in-kernel
> calls and the syscall tables, the prototype is defined as-required.

> Largely the wrappers are largely the same as their x86 counterparts, but

That's one "Largely" too much.

> simplified as we don't have a variety of compat calling conventions that
> require separate stubs. Unlike x86, we have some zero-argument compat
> syscalls, and must define COMPAT_SYSCALL_DEFINE0().

... for consistent naming, or is there another reason for that?

This patch looks good in any case.

Thanks,
	Dominik

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14 18:00     ` Dominik Brodowski
@ 2018-05-15  8:18       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15  8:18 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 08:00:29PM +0200, Dominik Brodowski wrote:
> > +static void __invoke_syscall(struct pt_regs *regs, syscall_fn_t syscall_fn)
> > +{
> > +	regs->regs[0] = syscall_fn(regs->regs[0], regs->regs[1],
> > +				   regs->regs[2], regs->regs[3],
> > +				   regs->regs[4], regs->regs[5]);
> > +}
> 
> Any specific reason to have this in a separate function? This seems to be
> called only from one instance, namely

I wanted to keep the raw syscall invocation logically separate from the syscall
table lookup and ni_syscall fallback, so that it was easier to verify in isolation.

I don't think it's a big deal either way, though.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-14 20:24         ` Dominik Brodowski
@ 2018-05-15  8:22           ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15  8:22 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: Dave Martin, linux-arm-kernel, marc.zyngier, catalin.marinas,
	will.deacon, linux-kernel, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 10:24:45PM +0200, Dominik Brodowski wrote:
> On Mon, May 14, 2018 at 12:41:10PM +0100, Mark Rutland wrote:
> > I agree it would be nicer if it had a wrapper that took a pt_regs, even
> > if it does nothing with it.
> > 
> > We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
> > need a ksys_ni_syscall() for our traps.c logic, and adding this
> > uniformly would involve some arch-specific rework for x86, too, so I
> > decided it was not worth the effort.
> 
> Couldn't you just open-code the "return -ENOSYS;" in traps.c?

I guess so. I was just worried that debug logic might be added to the generic
ni_syscall() in future, and wanted to avoid potential divergence.

> Error injection has no reasonable stable ABI/API expectations, so that's not
> a show-stopper either.

If people are happy with using SYSCALL_DEFINE0() for ni_syscall, I'm happy to
do that -- it's just that we'll need a fixup for x86 as that will change the
symbol name.
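The symbol-name change can be illustrated with a toy DEFINE0-style macro. The macro and the `__arch_` prefix below are hypothetical, not the kernel's actual `SYSCALL_DEFINE0()`; the point is only that wrapper generation renames the function the syscall tables reference:

```c
/*
 * A DEFINE0-style wrapper macro emits an arch-prefixed symbol, so converting
 * sys_ni_syscall() to use it renames the symbol. Names here are illustrative.
 */
struct pt_regs { unsigned long regs[31]; };

#define MY_SYSCALL_DEFINE0(name)				\
	static long __do_sys_##name(void);			\
	long __arch_sys_##name(const struct pt_regs *regs)	\
	{							\
		(void)regs;	/* zero-arg: regs unused */	\
		return __do_sys_##name();			\
	}							\
	static long __do_sys_##name(void)

/* The visible symbol is now __arch_sys_ni_syscall, not sys_ni_syscall. */
MY_SYSCALL_DEFINE0(ni_syscall)
{
	return -38;	/* -ENOSYS */
}
```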

Thanks,
Mark.


* Re: [PATCH 18/18] arm64: implement syscall wrappers
  2018-05-14 20:57     ` Dominik Brodowski
@ 2018-05-15  8:37       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15  8:37 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 10:57:44PM +0200, Dominik Brodowski wrote:
> On Mon, May 14, 2018 at 10:46:40AM +0100, Mark Rutland wrote:
> > Note that we play games with sys_ni_syscall(). It can't be defined with
> > SYSCALL_DEFINE0() because we must avoid the possibility of error
> > injection. Additionally, there are a couple of locations where we need
> > to call it from C code, and we don't (currently) have a
> > ksys_ni_syscall().  While it has no wrapper, passing in a redundant
> > pt_regs pointer is benign per the AAPCS.
> > 
> > When ARCH_HAS_SYSCALL_WRAPPER is selected, no prototype is defined for
> > sys_ni_syscall(). Since we need to treat it differently for in-kernel
> > calls and the syscall tables, the prototype is defined as-required.
> 
> > Largely the wrappers are largely the same as their x86 counterparts, but
> 
> That's one "Largely" too much.

True. I've dropped the first instance.

> > simplified as we don't have a variety of compat calling conventions that
> > require separate stubs. Unlike x86, we have some zero-argument compat
> > syscalls, and must define COMPAT_SYSCALL_DEFINE0().
> 
> ... for consistent naming, or is there another reason for that?

For consistent naming.

I've amended that to say:

  Unlike x86, we have some zero-argument compat syscalls, and must define
  COMPAT_SYSCALL_DEFINE0() to ensure these are also given an
  __arm64_compat_sys_ prefix.

... though I am tempted to refactor this so that our *SYSCALL_DEFINE0() and
SYSCALL_DEFINEx() share the same wrapper generation logic. The differing
prototypes don't matter per our calling convention, but IIRC some CFI features
aren't happy otherwise, and it would be nice to be consistent.
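The refactoring mentioned above could look roughly like the following sketch, where one wrapper-generation macro is shared between native and compat zero-argument syscalls so both get a consistent arch prefix. The macro names are illustrative, not the kernel's:

```c
/*
 * One shared wrapper generator for native and compat zero-argument syscalls,
 * so both get a consistent __arm64_* prefix.
 */
struct pt_regs { unsigned long regs[31]; };

#define __WRAP0(prefix, name)					\
	static long __do_##prefix##name(void);			\
	long __arm64_##prefix##name(const struct pt_regs *regs)	\
	{							\
		(void)regs;					\
		return __do_##prefix##name();			\
	}							\
	static long __do_##prefix##name(void)

#define MY_SYSCALL_DEFINE0(name)	__WRAP0(sys_, name)
#define MY_COMPAT_SYSCALL_DEFINE0(name)	__WRAP0(compat_sys_, name)

/* Generates __arm64_sys_demo() ... */
MY_SYSCALL_DEFINE0(demo)
{
	return 42;
}

/* ... and __arm64_compat_sys_demo(), via the same generation logic. */
MY_COMPAT_SYSCALL_DEFINE0(demo)
{
	return 42;
}
```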

Thanks,
Mark.


* Re: [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64()
  2018-05-14 20:34       ` Dominik Brodowski
@ 2018-05-15  9:53         ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15  9:53 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Mon, May 14, 2018 at 10:34:14PM +0200, Dominik Brodowski wrote:
> On Mon, May 14, 2018 at 06:14:28PM +0100, Mark Rutland wrote:
> > On Mon, May 14, 2018 at 10:46:35AM +0100, Mark Rutland wrote:
> > > +#ifdef CONFIG_COMPAT
> > > +int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz,
> > > +		     struct compat_statfs64 __user * buf);
> > > +int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
> > > +			  struct compat_statfs64 __user * buf);
> > > +#endif
> > 
> > I've moved these to <linux/compat.h>, so that they live with the rest of
> > the compat syscall stuff. That should avoid build failures the kbuild
> > test robot picked up where compat_size_t wasn't defined.
> 
> Please add a comment there, similar to what is in syscalls.h:
> 
> 	/*
> 	 * Kernel code should not call syscalls (i.e., sys_xyzyyz()) directly.
> 	 * Instead, use one of the functions which work equivalently, such as
> 	 * the ksys_xyzyyz() functions prototyped below.
> 	 */

To make the kcompat_sys_* naming scheme clearer, I've added compat references to
the above, i.e.

/*
 * Kernel code should not call compat syscalls (i.e., compat_sys_xyzyyz())
 * directly. Instead, use one of the functions which work equivalently, such
 * as the kcompat_sys_xyzyyz() functions prototyped below.
 */

> Once you have done so, feel free to add my
> 
> 	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>

I hope that this still stands with the changes above?

Thanks,
Mark.


* Re: [PATCH 12/18] kernel: add ksys_personality()
  2018-05-14 12:07     ` Christoph Hellwig
@ 2018-05-15  9:56       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15  9:56 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux, linux-fsdevel, marc.zyngier, viro,
	will.deacon

On Mon, May 14, 2018 at 05:07:56AM -0700, Christoph Hellwig wrote:
> On Mon, May 14, 2018 at 10:46:34AM +0100, Mark Rutland wrote:
> > Using this helper allows us to avoid the in-kernel call to the
> > sys_personality() syscall. The ksys_ prefix denotes that this function
> > is meant as a drop-in replacement for the syscall. In particular, it
> > uses the same calling convention as sys_personality().
> > 
> > This is necessary to enable conversion of arm64's syscall handling to
> > use pt_regs wrappers.
> 
> Plese just opencode the trivial sys_personality logic instead.

Sure, I'll make that a static inline in <linux/syscalls.h> as with
ksys_close() and friends.
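For illustration, the open-coded helper would mirror the existing sys_personality() logic. The model below uses a global in place of current->personality (in the kernel, the helper would use current->personality and set_personality()):

```c
/*
 * User-space model of open-coding sys_personality() as a ksys_ helper.
 * A global stands in for current->personality.
 */
static unsigned int current_personality;

static long ksys_personality(unsigned int personality)
{
	unsigned int old = current_personality;

	if (personality != 0xffffffff)	/* 0xffffffff only queries */
		current_personality = personality;

	return old;
}
```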

Thanks,
Mark.


* Re: [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64()
  2018-05-15  9:53         ` Mark Rutland
@ 2018-05-15  9:58           ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-15  9:58 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, linux-kernel, catalin.marinas, dave.martin,
	james.morse, linux-fsdevel, marc.zyngier, viro, will.deacon

On Tue, May 15, 2018 at 10:53:51AM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 10:34:14PM +0200, Dominik Brodowski wrote:
> > On Mon, May 14, 2018 at 06:14:28PM +0100, Mark Rutland wrote:
> > > On Mon, May 14, 2018 at 10:46:35AM +0100, Mark Rutland wrote:
> > > > +#ifdef CONFIG_COMPAT
> > > > +int kcompat_sys_statfs64(const char __user * pathname, compat_size_t sz,
> > > > +		     struct compat_statfs64 __user * buf);
> > > > +int kcompat_sys_fstatfs64(unsigned int fd, compat_size_t sz,
> > > > +			  struct compat_statfs64 __user * buf);
> > > > +#endif
> > > 
> > > I've moved these to <linux/compat.h>, so that they live with the rest of
> > > the compat syscall stuff. That should avoid build failures the kbuild
> > > test robot picked up where compat_size_t wasn't defined.
> > 
> > Please add a comment there, similar to what is in syscalls.h:
> > 
> > 	/*
> > 	 * Kernel code should not call syscalls (i.e., sys_xyzyyz()) directly.
> > 	 * Instead, use one of the functions which work equivalently, such as
> > 	 * the ksys_xyzyyz() functions prototyped below.
> > 	 */
> 
> To make the kcompat_sys_* naming scheme clearer, I've added compat references to
> the above, i.e.
> 
> /*
>  * Kernel code should not call compat syscalls (i.e., compat_sys_xyzyyz())
>  * directly. Instead, use one of the functions which work equivalently, such
>  * as the kcompat_sys_xyzyyz() functions prototyped below.
>  */

That's what I meant ;)

> > Once you have done so, feel free to add my
> > 
> > 	Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
> 
> I hope that this still stands with the changes above?

It does. Thanks!

	Dominik


* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-15  8:22           ` Mark Rutland
@ 2018-05-15 10:01             ` Dominik Brodowski
  -1 siblings, 0 replies; 126+ messages in thread
From: Dominik Brodowski @ 2018-05-15 10:01 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Dave Martin, linux-arm-kernel, marc.zyngier, catalin.marinas,
	will.deacon, linux-kernel, james.morse, viro, linux-fsdevel

On Tue, May 15, 2018 at 09:22:23AM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 10:24:45PM +0200, Dominik Brodowski wrote:
> > On Mon, May 14, 2018 at 12:41:10PM +0100, Mark Rutland wrote:
> > > I agree it would be nicer if it had a wrapper that took a pt_regs, even
> > > if it does nothing with it.
> > > 
> > > We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
> > > need a ksys_ni_syscall() for our traps.c logic, and adding this
> > > uniformly would involve some arch-specific rework for x86, too, so I
> > > decided it was not worth the effort.
> > 
> > Couldn't you just open-code the "return -ENOSYS;" in traps.c?
> 
> I guess so. I was just worried that debug logic might be added to the generic
> ni_syscall() in future, and wanted to avoid potential divergence.
> 
> > Error injection has no reasonable stable ABI/API expectations, so that's not
> > a show-stopper either.
> 
> If people are happy with using SYSCALL_DEFINE0() for ni_syscall, I'm happy to
> do that -- it's just that we'll need a fixup for x86 as that will change the
> symbol name.

For me, it's less about using SYSCALL_DEFINE0() for ni_syscall, and more
about keeping the syscall invocation easy. Therefore, we do pass a pointer
to struct pt_regs to sys_ni_syscall() on x86, even though it does not expect
it.

	/* this is a lie, but it does not hurt as sys_ni_syscall just returns -ENOSYS */
	extern asmlinkage long sys_ni_syscall(const struct pt_regs *);

Thanks,
	Dominik


* Re: [PATCH 08/18] arm64: convert raw syscall invocation to C
  2018-05-15 10:01             ` Dominik Brodowski
@ 2018-05-15 10:13               ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15 10:13 UTC (permalink / raw)
  To: Dominik Brodowski
  Cc: Dave Martin, linux-arm-kernel, marc.zyngier, catalin.marinas,
	will.deacon, linux-kernel, james.morse, viro, linux-fsdevel

On Tue, May 15, 2018 at 12:01:58PM +0200, Dominik Brodowski wrote:
> On Tue, May 15, 2018 at 09:22:23AM +0100, Mark Rutland wrote:
> > On Mon, May 14, 2018 at 10:24:45PM +0200, Dominik Brodowski wrote:
> > > On Mon, May 14, 2018 at 12:41:10PM +0100, Mark Rutland wrote:
> > > > I agree it would be nicer if it had a wrapper that took a pt_regs, even
> > > > if it does nothing with it.
> > > > 
> > > > We can't use SYSCALL_DEFINE0() due to the fault injection muck, we'd
> > > > need a ksys_ni_syscall() for our traps.c logic, and adding this
> > > > uniformly would involve some arch-specific rework for x86, too, so I
> > > > decided it was not worth the effort.
> > > 
> > > Couldn't you just open-code the "return -ENOSYS;" in traps.c?
> > 
> > I guess so. I was just worried that debug logic might be added to the generic
> > ni_syscall() in future, and wanted to avoid potential divergence.
> > 
> > > Error injection has no reasonable stable ABI/API expectations, so that's not
> > > a show-stopper either.
> > 
> > If people are happy with using SYSCALL_DEFINE0() for ni_syscall, I'm happy to
> > do that -- it's just that we'll need a fixup for x86 as that will change the
> > symbol name.
> 
> For me, it's less about using SYSCALL_DEFINE0() for ni_syscall, but more
> about keeping the syscall invokation easy. Therefore, we do pass a pointer
> struct pt_regs to sys_ni_syscall() on x86, even though it does not expect
> it.
> 
> 	/* this is a lie, but it does not hurt as sys_ni_syscall just returns -EINVAL */
> 	extern asmlinkage long sys_ni_syscall(const struct pt_regs *);

Oh, sure, we do the same on arm64 in this series.

Having a pt_regs wrapper for it (e.g. using SYSCALL_DEFINE0()) would
allow us to avoid that lie (which might be best for CFI stuff), would
allow us to avoid some name mangling on arm64, and would seemingly
confuse people less.

Thanks,
Mark.


* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-14 11:06     ` Dave Martin
@ 2018-05-15 10:39       ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15 10:39 UTC (permalink / raw)
  To: Dave Martin
  Cc: linux-arm-kernel, marc.zyngier, catalin.marinas, will.deacon,
	linux-kernel, linux, james.morse, viro, linux-fsdevel

On Mon, May 14, 2018 at 12:06:50PM +0100, Dave Martin wrote:
> On Mon, May 14, 2018 at 10:46:28AM +0100, Mark Rutland wrote:
> > In subsequent patches, we'll want to make use of sve_user_enable() and
> > sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
> > <asm/fpsimd.h> where we can make use of them.
> > 
> > To avoid ifdeffery in sequences like:
> > 
> > if (system_supports_sve() && some_condition)
> > 	sve_user_disable();
> > 
> > ... empty stubs are provided when support for SVE is not enabled.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Dave Martin <dave.martin@arm.com>
> > Cc: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/fpsimd.h | 17 ++++++++++++++++-
> >  arch/arm64/kernel/fpsimd.c      | 11 -----------
> >  2 files changed, 16 insertions(+), 12 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> > index aa7162ae93e3..7377d7593c06 100644
> > --- a/arch/arm64/include/asm/fpsimd.h
> > +++ b/arch/arm64/include/asm/fpsimd.h
> > @@ -16,11 +16,13 @@
> >  #ifndef __ASM_FP_H
> >  #define __ASM_FP_H
> >  
> > -#include <asm/ptrace.h>
> >  #include <asm/errno.h>
> > +#include <asm/ptrace.h>
> > +#include <asm/sysreg.h>
> >  
> >  #ifndef __ASSEMBLY__
> >  
> > +#include <linux/build_bug.h>
> >  #include <linux/cache.h>
> >  #include <linux/init.h>
> >  #include <linux/stddef.h>
> > @@ -81,6 +83,16 @@ extern int sve_set_vector_length(struct task_struct *task,
> >  extern int sve_set_current_vl(unsigned long arg);
> >  extern int sve_get_current_vl(void);
> >  
> > +static inline void sve_user_disable(void)
> > +{
> > +	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
> > +}
> > +
> > +static inline void sve_user_enable(void)
> > +{
> > +	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
> > +}
> > +
> >  /*
> >   * Probing and setup functions.
> >   * Calls to these functions must be serialised with one another.
> > @@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
> >  	return -EINVAL;
> >  }
> >  
> > +static inline void sve_user_disable(void) { }
> > +static inline void sve_user_enable(void) { }
> > +
> 
> Alternatively, just move the full definitions outside the #ifdef
> CONFIG_ARM64_SVE.

Can do, though I was trying to keep the existing pattern with empty
inlines for the !CONFIG_ARM64_SVE case.

> 
> All calls to these should be shadowed by an if
> (system_supports_sve()) in any case, and setting/clearing ZEN_EL0EN
> in the CPACR_EL1 ought to be harmless now that the meaning of these
> bits is architecturally committed.
> 
> Ideally we would have a BUG_ON(!system_supports_sve()) in those
> functions, but we won't want to pay the cost in a production kernel.

Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
to catch that kind of thing -- I could restore that.
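For the archive, the clear/set semantics of the quoted helpers can be modelled in user space with a plain variable standing in for CPACR_EL1; this is only a sketch of the pattern, not the kernel code:

```c
/*
 * Model of sve_user_{enable,disable}(): sysreg_clear_set() is mimicked with
 * a variable standing in for CPACR_EL1, so the clear/set semantics can be
 * checked without the real system register.
 */
#define CPACR_EL1_ZEN_EL0EN	(1UL << 16)	/* ZEN is CPACR_EL1[17:16] */

static unsigned long fake_cpacr;	/* stands in for CPACR_EL1 */

static void fake_sysreg_clear_set(unsigned long clear, unsigned long set)
{
	fake_cpacr = (fake_cpacr & ~clear) | set;
}

static void sve_user_enable(void)
{
	fake_sysreg_clear_set(0, CPACR_EL1_ZEN_EL0EN);
}

static void sve_user_disable(void)
{
	fake_sysreg_clear_set(CPACR_EL1_ZEN_EL0EN, 0);
}
```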

> >  static inline void sve_init_vq_map(void) { }
> >  static inline void sve_update_vq_map(void) { }
> >  static inline int sve_verify_vq_map(void) { return 0; }
> > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > index 088940387a4d..79a81c7d85c6 100644
> > --- a/arch/arm64/kernel/fpsimd.c
> > +++ b/arch/arm64/kernel/fpsimd.c
> > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> >  	__sve_free(task);
> >  }
> >  
> > -
> 
> Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> trivial).

I'll assume that Ack stands regardless. :)

Thanks,
Mark.


* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-15 10:39       ` Mark Rutland
@ 2018-05-15 12:19         ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-15 12:19 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Tue, May 15, 2018 at 11:39:36AM +0100, Mark Rutland wrote:
> On Mon, May 14, 2018 at 12:06:50PM +0100, Dave Martin wrote:
> > On Mon, May 14, 2018 at 10:46:28AM +0100, Mark Rutland wrote:
> > > In subsequent patches, we'll want to make use of sve_user_enable() and
> > > sve_user_disable() outside of kernel/fpsimd.c. Let's move these to
> > > <asm/fpsimd.h> where we can make use of them.
> > > 
> > > To avoid ifdeffery in sequences like:
> > > 
> > > if (system_supports_sve() && some_condition)
> > > 	sve_user_disable();
> > > 
> > > ... empty stubs are provided when support for SVE is not enabled.
> > > 
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Dave Martin <dave.martin@arm.com>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > > ---
> > >  arch/arm64/include/asm/fpsimd.h | 17 ++++++++++++++++-
> > >  arch/arm64/kernel/fpsimd.c      | 11 -----------
> > >  2 files changed, 16 insertions(+), 12 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> > > index aa7162ae93e3..7377d7593c06 100644
> > > --- a/arch/arm64/include/asm/fpsimd.h
> > > +++ b/arch/arm64/include/asm/fpsimd.h
> > > @@ -16,11 +16,13 @@
> > >  #ifndef __ASM_FP_H
> > >  #define __ASM_FP_H
> > >  
> > > -#include <asm/ptrace.h>
> > >  #include <asm/errno.h>
> > > +#include <asm/ptrace.h>
> > > +#include <asm/sysreg.h>
> > >  
> > >  #ifndef __ASSEMBLY__
> > >  
> > > +#include <linux/build_bug.h>
> > >  #include <linux/cache.h>
> > >  #include <linux/init.h>
> > >  #include <linux/stddef.h>
> > > @@ -81,6 +83,16 @@ extern int sve_set_vector_length(struct task_struct *task,
> > >  extern int sve_set_current_vl(unsigned long arg);
> > >  extern int sve_get_current_vl(void);
> > >  
> > > +static inline void sve_user_disable(void)
> > > +{
> > > +	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
> > > +}
> > > +
> > > +static inline void sve_user_enable(void)
> > > +{
> > > +	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
> > > +}
> > > +
> > >  /*
> > >   * Probing and setup functions.
> > >   * Calls to these functions must be serialised with one another.
> > > @@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
> > >  	return -EINVAL;
> > >  }
> > >  
> > > +static inline void sve_user_disable(void) { }
> > > +static inline void sve_user_enable(void) { }
> > > +
> > 
> > Alternatively, just move the full definitions outside the #ifdef
> > CONFIG_ARM64_SVE.
> 
> Can do, though I was trying to keep the existing pattern with empty
> inlines for the !CONFIG_ARM64_SVE case.

There isn't really a pattern.  I tried to avoid dummy versions where
there's no real reason to have them.  I don't _think_ they're really
needed here, unless I missed something.  Did you get build failures
without them?

> > All calls to these should be shadowed by an if
> > (system_supports_sve()) in any case, and setting/clearing ZEN_EL0EN
> > in the CPACR_EL1 ought to be harmless now that the meaning of these
> > bits is architecturally committed.
> > 
> > Ideally we would have a BUG_ON(!system_supports_sve()) in those
> > functions, but we won't want to pay the cost in a production kernel.
> 
> Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
> to catch that kind of thing -- I could restore that.

IIUC:

	if (0) {
		BUILD_BUG_ON(1);
	}

can still fire, in which case it's futile checking for CONFIG_ARM64_SVE
in most of the SVE support code.

Anyway, CONFIG_ARM64_SVE doesn't capture the whole condition.

> 
> > >  static inline void sve_init_vq_map(void) { }
> > >  static inline void sve_update_vq_map(void) { }
> > >  static inline int sve_verify_vq_map(void) { return 0; }
> > > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > > index 088940387a4d..79a81c7d85c6 100644
> > > --- a/arch/arm64/kernel/fpsimd.c
> > > +++ b/arch/arm64/kernel/fpsimd.c
> > > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> > >  	__sve_free(task);
> > >  }
> > >  
> > > -
> > 
> > Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> > trivial).
> 
> I'll assume that Ack stands regardless. :)

Actually, I was just commenting on the deleted blank line...  not that
there is any massive issue with this patch, though.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-15 12:19         ` Dave Martin
@ 2018-05-15 16:33           ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-05-15 16:33 UTC (permalink / raw)
  To: Dave Martin
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Tue, May 15, 2018 at 01:19:26PM +0100, Dave Martin wrote:
> On Tue, May 15, 2018 at 11:39:36AM +0100, Mark Rutland wrote:
> > On Mon, May 14, 2018 at 12:06:50PM +0100, Dave Martin wrote:
> > > On Mon, May 14, 2018 at 10:46:28AM +0100, Mark Rutland wrote:
> > > > +static inline void sve_user_disable(void)
> > > > +{
> > > > +	sysreg_clear_set(cpacr_el1, CPACR_EL1_ZEN_EL0EN, 0);
> > > > +}
> > > > +
> > > > +static inline void sve_user_enable(void)
> > > > +{
> > > > +	sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
> > > > +}
> > > > +
> > > >  /*
> > > >   * Probing and setup functions.
> > > >   * Calls to these functions must be serialised with one another.
> > > > @@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
> > > >  	return -EINVAL;
> > > >  }
> > > >  
> > > > +static inline void sve_user_disable(void) { }
> > > > +static inline void sve_user_enable(void) { }
> > > > +
> > > 
> > > Alternatively, just move the full definitions outside the #ifdef
> > > CONFIG_ARM64_SVE.
> > 
> > Can do, though I was trying to keep the existing pattern with empty
> > inlines for the !CONFIG_ARM64_SVE case.
> 
> There isn't really a pattern.  I tried to avoid dummy versions where
> there's no real reason to have them.  I don't _think_ they're really
> needed here, unless I missed something.  Did you get build failures
> without them?

I need *some* definition so that sve_user_reset() in the syscall path
can compile without ifdeffery.

In sve_user_reset() I first check system_supports_sve(), which checks
IS_ENABLED(CONFIG_ARM64_SVE), so the call should be optimised away when
!CONFIG_ARM64_SVE, but I need a prototype regardless.
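(For illustration only, the dead-code-elimination pattern being relied on here can be modelled in userspace C; all names below are hypothetical stand-ins, not the kernel's:)

```c
#include <assert.h>

/* Userspace model of the guarded call: the condition stands in for
 * system_supports_sve(), whose IS_ENABLED(CONFIG_ARM64_SVE) check is a
 * compile-time constant.  With the option off, the branch is constant-
 * false, so the stub is never reached (and a BUILD_BUG() placed in the
 * real stub would be optimised away rather than fire). */
#define MODEL_CONFIG_ARM64_SVE 0	/* models CONFIG_ARM64_SVE=n */

static int stub_calls;

static void sve_user_disable_stub(void)
{
	stub_calls++;	/* the kernel stub would contain BUILD_BUG() here */
}

static int model_syscall_entry(void)
{
	if (MODEL_CONFIG_ARM64_SVE)	/* constant-false guard */
		sve_user_disable_stub();
	return stub_calls;
}
```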

> > > All calls to these should be shadowed by an if
> > > (system_supports_sve()) in any case, and setting/clearing ZEN_EL0EN
> > > in the CPACR_EL1 ought to be harmless now that the meaning of these
> > > bits is architecturally committed.
> > > 
> > > Ideally we would have a BUG_ON(!system_supports_sve()) in those
> > > functions, but we won't want to pay the cost in a production kernel.
> > 
> > Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
> > to catch that kind of thing -- I could restore that.
> 
> IIUC:
> 
> 	if (0) {
> 		BUILD_BUG_ON(1);
> 	}
> 
> can still fire, in which case it's futile checking for CONFIG_ARM64_SVE
> in most of the SVE support code.

We already rely on BUILD_BUG() not firing in paths that can be trivially
optimized away. e.g. in the cmpxchg code.
 
> > > >  static inline void sve_init_vq_map(void) { }
> > > >  static inline void sve_update_vq_map(void) { }
> > > >  static inline int sve_verify_vq_map(void) { return 0; }
> > > > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > > > index 088940387a4d..79a81c7d85c6 100644
> > > > --- a/arch/arm64/kernel/fpsimd.c
> > > > +++ b/arch/arm64/kernel/fpsimd.c
> > > > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> > > >  	__sve_free(task);
> > > >  }
> > > >  
> > > > -
> > > 
> > > Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> > > trivial).
> > 
> > I'll assume that Ack stands regardless. :)
> 
> Actually, I was just commenting on the deleted blank line... 

Ah. I've restored that now.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-15 16:33           ` Mark Rutland
@ 2018-05-16  9:01             ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-05-16  9:01 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Tue, May 15, 2018 at 05:33:52PM +0100, Mark Rutland wrote:
> On Tue, May 15, 2018 at 01:19:26PM +0100, Dave Martin wrote:
> > On Tue, May 15, 2018 at 11:39:36AM +0100, Mark Rutland wrote:
> > > On Mon, May 14, 2018 at 12:06:50PM +0100, Dave Martin wrote:
> > > > On Mon, May 14, 2018 at 10:46:28AM +0100, Mark Rutland wrote:

[...]

> > > > > @@ -107,6 +119,9 @@ static inline int sve_get_current_vl(void)
> > > > >  	return -EINVAL;
> > > > >  }
> > > > >  
> > > > > +static inline void sve_user_disable(void) { }
> > > > > +static inline void sve_user_enable(void) { }
> > > > > +
> > > > 
> > > > Alternatively, just move the full definitions outside the #ifdef
> > > > CONFIG_ARM64_SVE.
> > > 
> > > Can do, though I was trying to keep the existing pattern with empty
> > > inlines for the !CONFIG_ARM64_SVE case.
> > 
> > There isn't really a pattern.  I tried to avoid dummy versions where
> > there's no real reason to have them.  I don't _think_ they're really
> > needed here, unless I missed something.  Did you get build failures
> > without them?
> 
> I need *some* definition so that sve_user_reset() in the syscall path
> can compile without ifdeffery.
> 
> In sve_user_reset() I first check system_supports_sve(), which checks
> IS_ENABLED(CONFIG_ARM64_SVE), so the call should be optimised away when
> !CONFIG_ARM64_SVE, but I need a prototype regardless.

What I envisaged is that you move the real definitions outside the
#ifdef so that they're defined unconditionally, and get rid of the
dummies.

Having a dummy definition of sve_user_enable() really feels like it's
papering over something.  How could it be appropriate to call this in a
non-SVE enabled system?  You _do_ guard the call to this already, so
hiding the real function body for CONFIG_ARM64_SVE=n doesn't appear to
achieve anything.  Maybe I missed something somewhere.

A dummy sve_user_disable() is a bit more reasonable though, but we want
this to be a nop on non-SVE hardware even if CONFIG_ARM64_SVE=y.

What about moving the system_supports_sve() check inside
sve_user_disable()?
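(A minimal userspace sketch of that suggestion, with the runtime check folded into the helper; the variables model system_supports_sve() and the CPACR_EL1.ZEN_EL0EN bit and are illustrative, not the kernel's actual code:)

```c
#include <assert.h>

/* Model: sve_user_disable() carries its own system_supports_sve()
 * check, so callers need no guard and the helper is a nop on non-SVE
 * hardware even when the option is built in.  The register write is
 * modelled as a plain variable assignment. */
static int model_hw_has_sve;		/* stands in for system_supports_sve() */
static int model_zen_el0en = 1;		/* stands in for CPACR_EL1_ZEN_EL0EN */

static void model_sve_user_disable(void)
{
	if (!model_hw_has_sve)		/* check moved inside the helper */
		return;
	model_zen_el0en = 0;		/* models sysreg_clear_set(cpacr_el1, ...) */
}
```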

[...]

> > > Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
> > > to catch that kind of thing -- I could restore that.
> > 
> > IIUC:
> > 
> > 	if (0) {
> > 		BUILD_BUG_ON(1);
> > 	}
> > 
> > can still fire, in which case it's futile checking for CONFIG_ARM64_SVE
> > in most of the SVE support code.
> 
> We already rely on BUILD_BUG() not firing in paths that can be trivially
> optimized away. e.g. in the cmpxchg code.

Fair enough.  I had been unsure on this point.

If you want to put a BUILD_BUG_ON(!IS_ENABLED(CONFIG_ARM64_SVE)) in
sve_user_enable() and build with CONFIG_ARM64_SVE=n to check it works,
then I'd be fine with that.

This doesn't capture the runtime part of the condition, but it's better
than nothing.

[...]

> > > > > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > > > > index 088940387a4d..79a81c7d85c6 100644
> > > > > --- a/arch/arm64/kernel/fpsimd.c
> > > > > +++ b/arch/arm64/kernel/fpsimd.c
> > > > > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> > > > >  	__sve_free(task);
> > > > >  }
> > > > >  
> > > > > -
> > > > 
> > > > Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> > > > trivial).
> > > 
> > > I'll assume that Ack stands regardless. :)
> > 
> > Actually, I was just commenting on the deleted blank line... 
> 
> Ah. I've restored that now.

I meant Ack to the deletion.  It looks like the blank line was
spuriously introduced in the first place.  But it doesn't hugely matter
either way.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-05-16  9:01             ` Dave Martin
@ 2018-06-01 10:29               ` Mark Rutland
  -1 siblings, 0 replies; 126+ messages in thread
From: Mark Rutland @ 2018-06-01 10:29 UTC (permalink / raw)
  To: Dave Martin
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Wed, May 16, 2018 at 10:01:32AM +0100, Dave Martin wrote:
> On Tue, May 15, 2018 at 05:33:52PM +0100, Mark Rutland wrote:
> > On Tue, May 15, 2018 at 01:19:26PM +0100, Dave Martin wrote:
> > > On Tue, May 15, 2018 at 11:39:36AM +0100, Mark Rutland wrote:
> > > > Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
> > > > to catch that kind of thing -- I could restore that.
> > > 
> > > IIUC:
> > > 
> > > 	if (0) {
> > > 		BUILD_BUG_ON(1);
> > > 	}
> > > 
> > > can still fire, in which case it's futile checking for CONFIG_ARM64_SVE
> > > in most of the SVE support code.
> > 
> > We already rely on BUILD_BUG() not firing in paths that can be trivially
> > optimized away. e.g. in the cmpxchg code.
> 
> Fair enough.  I had been unsure on this point.
> 
> If you want to put a BUILD_BUG_ON(!IS_ENABLED(CONFIG_ARM64_SVE)) in
> sve_user_enable() and build with CONFIG_ARM64_SVE=n to check it works,
> then I'd be fine with that.
> 
> This doesn't capture the runtime part of the condition, but it's better
> than nothing.

For the moment, I've kept the stubs, but placed a BUILD_BUG() in each,
as per the above rationale. 

We generally do that rather than BUILD_BUG_ON(!IS_ENABLED(...)) in
a common definition, and it's more in keeping with the other stubs in
<asm/fpsimd.h>.
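(As a hypothetical userspace sketch of that shape: real inline bodies when the option is configured in, stubs otherwise. The kernel's BUILD_BUG() is a compile-time check; it is modelled here as abort() purely so the structure is visible and runnable, and all names are illustrative:)

```c
#include <assert.h>
#include <stdlib.h>

/* Model of the final header arrangement: the configured-in bodies
 * touch the (modelled) CPACR bit, while the stubs must never survive
 * into a reachable path. */
#define MODEL_CONFIG_ARM64_SVE 1	/* models CONFIG_ARM64_SVE=y */

static int model_zen_el0en;

#if MODEL_CONFIG_ARM64_SVE
static void model_sve_enable(void)  { model_zen_el0en = 1; }
static void model_sve_disable(void) { model_zen_el0en = 0; }
#else
static void model_sve_enable(void)  { abort(); /* BUILD_BUG() in the kernel */ }
static void model_sve_disable(void) { abort(); /* BUILD_BUG() in the kernel */ }
#endif
```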

> > > > > > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > > > > > index 088940387a4d..79a81c7d85c6 100644
> > > > > > --- a/arch/arm64/kernel/fpsimd.c
> > > > > > +++ b/arch/arm64/kernel/fpsimd.c
> > > > > > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> > > > > >  	__sve_free(task);
> > > > > >  }
> > > > > >  
> > > > > > -
> > > > > 
> > > > > Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> > > > > trivial).
> > > > 
> > > > I'll assume that Ack stands regardless. :)
> > > 
> > > Actually, I was just commenting on the deleted blank line... 
> > 
> > Ah. I've restored that now.
> 
> I meant Ack to the deletion.  It looks like the blank line was
> spuriously introduced in the first place.  But it doesn't hugely matter
> either way.

Ok. I've dropped that for now to minimize the potential for conflicts,
but we can clean this up later.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 126+ messages in thread

* Re: [PATCH 06/18] arm64: move sve_user_{enable, disable} to <asm/fpsimd.h>
  2018-06-01 10:29               ` Mark Rutland
@ 2018-06-01 10:42                 ` Dave Martin
  -1 siblings, 0 replies; 126+ messages in thread
From: Dave Martin @ 2018-06-01 10:42 UTC (permalink / raw)
  To: Mark Rutland
  Cc: marc.zyngier, catalin.marinas, will.deacon, linux-kernel, linux,
	james.morse, viro, linux-fsdevel, linux-arm-kernel

On Fri, Jun 01, 2018 at 11:29:13AM +0100, Mark Rutland wrote:
> On Wed, May 16, 2018 at 10:01:32AM +0100, Dave Martin wrote:
> > On Tue, May 15, 2018 at 05:33:52PM +0100, Mark Rutland wrote:
> > > On Tue, May 15, 2018 at 01:19:26PM +0100, Dave Martin wrote:
> > > > On Tue, May 15, 2018 at 11:39:36AM +0100, Mark Rutland wrote:
> > > > > Earlier I'd put BUILD_BUG() in the body for the !CONFIG_ARM64_SVE case,
> > > > > to catch that kind of thing -- I could restore that.
> > > > 
> > > > IIUC:
> > > > 
> > > > 	if (0) {
> > > > 		BUILD_BUG_ON(1);
> > > > 	}
> > > > 
> > > > can still fire, in which case it's futile checking for CONFIG_ARM64_SVE
> > > > in most of the SVE support code.
> > > 
> > > We already rely on BUILD_BUG() not firing in paths that can be trivially
> > > optimized away. e.g. in the cmpxchg code.
> > 
> > Fair enough.  I had been unsure on this point.
> > 
> > If you want to put a BUILD_BUG_ON(!IS_ENABLED(CONFIG_ARM64_SVE)) in
> > sve_user_enable() and build with CONFIG_ARM64_SVE=n to check it works,
> > then I'd be fine with that.
> > 
> > This doesn't capture the runtime part of the condition, but it's better
> > than nothing.
> 
> For the moment, I've kept the stubs, but placed a BUILD_BUG() in each,
> as per the above rationale. 
> 
> We generally do that rather than BUILD_BUG_ON(!IS_ENABLED(...)) in
> a common definition, and it's more in keeping with the other stubs in
> <asm/fpsimd.h>.

OK, fine by me.

> > > > > > > diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> > > > > > > index 088940387a4d..79a81c7d85c6 100644
> > > > > > > --- a/arch/arm64/kernel/fpsimd.c
> > > > > > > +++ b/arch/arm64/kernel/fpsimd.c
> > > > > > > @@ -159,7 +159,6 @@ static void sve_free(struct task_struct *task)
> > > > > > >  	__sve_free(task);
> > > > > > >  }
> > > > > > >  
> > > > > > > -
> > > > > > 
> > > > > > Hmmm, Ack.  Check for conflicts with the KVM FPSIMD rework [1] (though
> > > > > > trivial).
> > > > > 
> > > > > I'll assume that Ack stands regardless. :)
> > > > 
> > > > Actually, I was just commenting on the deleted blank line... 
> > > 
> > > Ah. I've restored that now.
> > 
> > I meant Ack to the deletion.  It looks like the blank line was
> > spuriously introduced in the first place.  But it doesn't hugely matter
> > either way.
> 
> Ok. I've dropped that for now to minimize the potential for conflicts,
> but we can clean this up later.

No big deal either way.

Cheers
---Dave

^ permalink raw reply	[flat|nested] 126+ messages in thread

end of thread, other threads:[~2018-06-01 10:42 UTC | newest]

Thread overview: 126+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-14  9:46 [PATCH 00/18] arm64: invoke syscalls with pt_regs Mark Rutland
2018-05-14  9:46 ` [PATCH 01/18] arm64: consistently use unsigned long for thread flags Mark Rutland
2018-05-14  9:57   ` Dave Martin
2018-05-14  9:46 ` [PATCH 02/18] arm64: move SCTLR_EL{1,2} assertions to <asm/sysreg.h> Mark Rutland
2018-05-14 10:00   ` Dave Martin
2018-05-14 10:08     ` Mark Rutland
2018-05-14 11:20       ` Dave Martin
2018-05-14 11:56         ` Robin Murphy
2018-05-14 12:06           ` Mark Rutland
2018-05-14 12:41             ` Dave Martin
2018-05-14 13:10               ` Mark Rutland
2018-05-14  9:46 ` [PATCH 03/18] arm64: introduce sysreg_clear_set() Mark Rutland
2018-05-14 10:04   ` Dave Martin
2018-05-14  9:46 ` [PATCH 04/18] arm64: kill config_sctlr_el1() Mark Rutland
2018-05-14 10:05   ` Dave Martin
2018-05-14  9:46 ` [PATCH 05/18] arm64: kill change_cpacr() Mark Rutland
2018-05-14 10:06   ` Dave Martin
2018-05-14  9:46 ` [PATCH 06/18] arm64: move sve_user_{enable,disable} to <asm/fpsimd.h> Mark Rutland
2018-05-14 11:06   ` Dave Martin
2018-05-15 10:39     ` Mark Rutland
2018-05-15 12:19       ` Dave Martin
2018-05-15 16:33         ` Mark Rutland
2018-05-16  9:01           ` Dave Martin
2018-06-01 10:29             ` Mark Rutland
2018-06-01 10:42               ` Dave Martin
2018-05-14  9:46 ` [PATCH 07/18] arm64: remove sigreturn wrappers Mark Rutland
2018-05-14 11:07   ` Dave Martin
2018-05-14  9:46 ` [PATCH 08/18] arm64: convert raw syscall invocation to C Mark Rutland
2018-05-14 11:07   ` Dave Martin
2018-05-14 11:41     ` Mark Rutland
2018-05-14 12:53       ` Dave Martin
2018-05-14 20:24       ` Dominik Brodowski
2018-05-15  8:22         ` Mark Rutland
2018-05-15 10:01           ` Dominik Brodowski
2018-05-15 10:13             ` Mark Rutland
2018-05-14 18:00   ` Dominik Brodowski
2018-05-15  8:18     ` Mark Rutland
2018-05-14  9:46 ` [PATCH 09/18] arm64: convert syscall trace logic " Mark Rutland
2018-05-14  9:46 ` [PATCH 10/18] arm64: convert native/compat syscall entry " Mark Rutland
2018-05-14 11:07   ` Dave Martin
2018-05-14 11:58     ` Mark Rutland
2018-05-14 14:43       ` Dave Martin
2018-05-14 15:01         ` Mark Rutland
2018-05-14  9:46 ` [PATCH 11/18] arm64: zero GPRs upon entry from EL0 Mark Rutland
2018-05-14 11:07   ` Dave Martin
2018-05-14  9:46 ` [PATCH 12/18] kernel: add ksys_personality() Mark Rutland
2018-05-14 11:08   ` Dave Martin
2018-05-14 12:07   ` Christoph Hellwig
2018-05-15  9:56     ` Mark Rutland
2018-05-14  9:46 ` [PATCH 13/18] kernel: add kcompat_sys_{f,}statfs64() Mark Rutland
2018-05-14 17:14   ` Mark Rutland
2018-05-14 20:34     ` Dominik Brodowski
2018-05-15  9:53       ` Mark Rutland
2018-05-15  9:58         ` Dominik Brodowski
2018-05-14  9:46 ` [PATCH 14/18] arm64: remove in-kernel call to sys_personality() Mark Rutland
2018-05-14  9:46 ` [PATCH 15/18] arm64: use {COMPAT,}SYSCALL_DEFINE0 for sigreturn Mark Rutland
2018-05-14  9:46 ` [PATCH 16/18] arm64: use SYSCALL_DEFINE6() for mmap Mark Rutland
2018-05-14  9:46 ` [PATCH 17/18] arm64: convert compat wrappers to C Mark Rutland
2018-05-14 12:10   ` Christoph Hellwig
2018-05-14 12:43     ` Mark Rutland
2018-05-14  9:46 ` [PATCH 18/18] arm64: implement syscall wrappers Mark Rutland
2018-05-14 20:57   ` Dominik Brodowski
2018-05-15  8:37     ` Mark Rutland
