* [PATCH v11 0/8] Disable compat cruft on ppc64le v11
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Less code means fewer bugs, so add a knob to skip the compat stuff.
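
With the series applied, a ppc64le kernel can be configured without the
compat layer by switching the new knob off. An illustrative config
fragment (not part of the series itself):

	CONFIG_PPC64=y
	CONFIG_CPU_LITTLE_ENDIAN=y
	# CONFIG_COMPAT is not set

On bigendian 64bit the option keeps its default y, so existing configs
continue to build the 32bit entry points.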

Changes in v2: saner CONFIG_COMPAT ifdefs
Changes in v3:
 - change llseek to 32bit instead of building it unconditionally in fs
 - cleanup the makefile conditionals
 - remove some ifdefs or convert to IS_ENABLED where possible
Changes in v4:
 - cleanup is_32bit_task and current_is_64bit
 - more makefile cleanup
Changes in v5:
 - more current_is_64bit cleanup
 - split off callchain.c 32bit and 64bit parts
Changes in v6:
 - cleanup makefile after split
 - consolidate read_user_stack_32
 - fix some checkpatch warnings
Changes in v7:
 - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
 - remove leftover hunk
 - add review tags
Changes in v8:
 - consolidate valid_user_sp to fix it in the split callchain.c
 - fix build errors/warnings with PPC64 !COMPAT and PPC32
Changes in v9:
 - remove current_is_64bit()
Changes in v10:
 - rebase, sent together with the syscall cleanup
Changes in v11:
 - rebase
 - add MAINTAINERS pattern for ppc perf

Michal Suchanek (8):
  powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
  powerpc: move common register copy functions from signal_32.c to
    signal.c
  powerpc/perf: consolidate read_user_stack_32
  powerpc/perf: consolidate valid_user_sp
  powerpc/64: make buildable without CONFIG_COMPAT
  powerpc/64: Make COMPAT user-selectable disabled on littleendian by
    default.
  powerpc/perf: split callchain.c by bitness
  MAINTAINERS: perf: Add pattern that matches ppc perf to the perf
    entry.

 MAINTAINERS                            |   2 +
 arch/powerpc/Kconfig                   |   5 +-
 arch/powerpc/include/asm/thread_info.h |   4 +-
 arch/powerpc/include/asm/unistd.h      |   1 +
 arch/powerpc/kernel/Makefile           |   6 +-
 arch/powerpc/kernel/entry_64.S         |   2 +
 arch/powerpc/kernel/signal.c           | 144 +++++++++-
 arch/powerpc/kernel/signal_32.c        | 140 ----------
 arch/powerpc/kernel/syscall_64.c       |   6 +-
 arch/powerpc/kernel/vdso.c             |   3 +-
 arch/powerpc/perf/Makefile             |   5 +-
 arch/powerpc/perf/callchain.c          | 356 +------------------------
 arch/powerpc/perf/callchain.h          |  20 ++
 arch/powerpc/perf/callchain_32.c       | 196 ++++++++++++++
 arch/powerpc/perf/callchain_64.c       | 174 ++++++++++++
 fs/read_write.c                        |   3 +-
 16 files changed, 556 insertions(+), 511 deletions(-)
 create mode 100644 arch/powerpc/perf/callchain.h
 create mode 100644 arch/powerpc/perf/callchain_32.c
 create mode 100644 arch/powerpc/perf/callchain_64.c

-- 
2.23.0


* [PATCH v11 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

This partially reverts commit caf6f9c8a326 ("asm-generic: Remove
unneeded __ARCH_WANT_SYS_LLSEEK macro").

When CONFIG_COMPAT is disabled on ppc64, the kernel does not build.

There is resistance to both removing the llseek syscall from the 64bit
syscall tables and building the llseek interface unconditionally.
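
For illustration, with this patch the guard in fs/read_write.c
evaluates as follows on ppc64 with CONFIG_COMPAT=n (a sketch of the
preprocessor result, not part of the diff below):

	/*
	 * !defined(CONFIG_64BIT)          -> false
	 * defined(CONFIG_COMPAT)          -> false
	 * defined(__ARCH_WANT_SYS_LLSEEK) -> true (from the arch header
	 *                                   patched below)
	 * so SYSCALL_DEFINE5(llseek, ...) is still built and the 64bit
	 * syscall table entry keeps working.
	 */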

Link: https://lore.kernel.org/lkml/20190828151552.GA16855@infradead.org/
Link: https://lore.kernel.org/lkml/20190829214319.498c7de2@naga/

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
---
v7: new patch
---
 arch/powerpc/include/asm/unistd.h | 1 +
 fs/read_write.c                   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index b0720c7c3fcf..700fcdac2e3c 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -31,6 +31,7 @@
 #define __ARCH_WANT_SYS_SOCKETCALL
 #define __ARCH_WANT_SYS_FADVISE64
 #define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
 #define __ARCH_WANT_SYS_NICE
 #define __ARCH_WANT_SYS_OLD_GETRLIMIT
 #define __ARCH_WANT_SYS_OLD_UNAME
diff --git a/fs/read_write.c b/fs/read_write.c
index 59d819c5b92e..bbfa9b12b15e 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -331,7 +331,8 @@ COMPAT_SYSCALL_DEFINE3(lseek, unsigned int, fd, compat_off_t, offset, unsigned i
 }
 #endif
 
-#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT)
+#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT) || \
+	defined(__ARCH_WANT_SYS_LLSEEK)
 SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high,
 		unsigned long, offset_low, loff_t __user *, result,
 		unsigned int, whence)
-- 
2.23.0


* [PATCH v11 2/8] powerpc: move common register copy functions from signal_32.c to signal.c
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

These functions are required for 64bit as well: signal_64.c uses them
too, and signal_32.c will no longer be built when CONFIG_COMPAT is
disabled, so move them to the always-built signal.c.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/signal.c    | 141 ++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/signal_32.c | 140 -------------------------------
 2 files changed, 141 insertions(+), 140 deletions(-)

diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index d215f9554553..4b0152108f61 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -18,12 +18,153 @@
 #include <linux/syscalls.h>
 #include <asm/hw_breakpoint.h>
 #include <linux/uaccess.h>
+#include <asm/switch_to.h>
 #include <asm/unistd.h>
 #include <asm/debug.h>
 #include <asm/tm.h>
 
 #include "signal.h"
 
+#ifdef CONFIG_VSX
+unsigned long copy_fpr_to_user(void __user *to,
+			       struct task_struct *task)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to the thread_struct */
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		buf[i] = task->thread.TS_FPR(i);
+	buf[i] = task->thread.fp_state.fpscr;
+	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
+}
+
+unsigned long copy_fpr_from_user(struct task_struct *task,
+				 void __user *from)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		task->thread.TS_FPR(i) = buf[i];
+	task->thread.fp_state.fpscr = buf[i];
+
+	return 0;
+}
+
+unsigned long copy_vsx_to_user(void __user *to,
+			       struct task_struct *task)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to the thread_struct */
+	for (i = 0; i < ELF_NVSRHALFREG; i++)
+		buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
+	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
+}
+
+unsigned long copy_vsx_from_user(struct task_struct *task,
+				 void __user *from)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < ELF_NVSRHALFREG ; i++)
+		task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
+	return 0;
+}
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+unsigned long copy_ckfpr_to_user(void __user *to,
+				  struct task_struct *task)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to the thread_struct */
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		buf[i] = task->thread.TS_CKFPR(i);
+	buf[i] = task->thread.ckfp_state.fpscr;
+	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
+}
+
+unsigned long copy_ckfpr_from_user(struct task_struct *task,
+					  void __user *from)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		task->thread.TS_CKFPR(i) = buf[i];
+	task->thread.ckfp_state.fpscr = buf[i];
+
+	return 0;
+}
+
+unsigned long copy_ckvsx_to_user(void __user *to,
+				  struct task_struct *task)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to the thread_struct */
+	for (i = 0; i < ELF_NVSRHALFREG; i++)
+		buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
+	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
+}
+
+unsigned long copy_ckvsx_from_user(struct task_struct *task,
+					  void __user *from)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < ELF_NVSRHALFREG ; i++)
+		task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
+	return 0;
+}
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#else
+inline unsigned long copy_fpr_to_user(void __user *to,
+				      struct task_struct *task)
+{
+	return __copy_to_user(to, task->thread.fp_state.fpr,
+			      ELF_NFPREG * sizeof(double));
+}
+
+inline unsigned long copy_fpr_from_user(struct task_struct *task,
+					void __user *from)
+{
+	return __copy_from_user(task->thread.fp_state.fpr, from,
+			      ELF_NFPREG * sizeof(double));
+}
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+inline unsigned long copy_ckfpr_to_user(void __user *to,
+					 struct task_struct *task)
+{
+	return __copy_to_user(to, task->thread.ckfp_state.fpr,
+			      ELF_NFPREG * sizeof(double));
+}
+
+inline unsigned long copy_ckfpr_from_user(struct task_struct *task,
+						 void __user *from)
+{
+	return __copy_from_user(task->thread.ckfp_state.fpr, from,
+				ELF_NFPREG * sizeof(double));
+}
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#endif
+
 /* Log an error when sending an unhandled signal to a process. Controlled
  * through debug.exception-trace sysctl.
  */
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 1b090a76b444..4f96d29a22bf 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -235,146 +235,6 @@ struct rt_sigframe {
 	int			abigap[56];
 };
 
-#ifdef CONFIG_VSX
-unsigned long copy_fpr_to_user(void __user *to,
-			       struct task_struct *task)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		buf[i] = task->thread.TS_FPR(i);
-	buf[i] = task->thread.fp_state.fpscr;
-	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
-}
-
-unsigned long copy_fpr_from_user(struct task_struct *task,
-				 void __user *from)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		task->thread.TS_FPR(i) = buf[i];
-	task->thread.fp_state.fpscr = buf[i];
-
-	return 0;
-}
-
-unsigned long copy_vsx_to_user(void __user *to,
-			       struct task_struct *task)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < ELF_NVSRHALFREG; i++)
-		buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
-	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
-}
-
-unsigned long copy_vsx_from_user(struct task_struct *task,
-				 void __user *from)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < ELF_NVSRHALFREG ; i++)
-		task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
-	return 0;
-}
-
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-unsigned long copy_ckfpr_to_user(void __user *to,
-				  struct task_struct *task)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		buf[i] = task->thread.TS_CKFPR(i);
-	buf[i] = task->thread.ckfp_state.fpscr;
-	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
-}
-
-unsigned long copy_ckfpr_from_user(struct task_struct *task,
-					  void __user *from)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		task->thread.TS_CKFPR(i) = buf[i];
-	task->thread.ckfp_state.fpscr = buf[i];
-
-	return 0;
-}
-
-unsigned long copy_ckvsx_to_user(void __user *to,
-				  struct task_struct *task)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < ELF_NVSRHALFREG; i++)
-		buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
-	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
-}
-
-unsigned long copy_ckvsx_from_user(struct task_struct *task,
-					  void __user *from)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < ELF_NVSRHALFREG ; i++)
-		task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
-	return 0;
-}
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#else
-inline unsigned long copy_fpr_to_user(void __user *to,
-				      struct task_struct *task)
-{
-	return __copy_to_user(to, task->thread.fp_state.fpr,
-			      ELF_NFPREG * sizeof(double));
-}
-
-inline unsigned long copy_fpr_from_user(struct task_struct *task,
-					void __user *from)
-{
-	return __copy_from_user(task->thread.fp_state.fpr, from,
-			      ELF_NFPREG * sizeof(double));
-}
-
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-inline unsigned long copy_ckfpr_to_user(void __user *to,
-					 struct task_struct *task)
-{
-	return __copy_to_user(to, task->thread.ckfp_state.fpr,
-			      ELF_NFPREG * sizeof(double));
-}
-
-inline unsigned long copy_ckfpr_from_user(struct task_struct *task,
-						 void __user *from)
-{
-	return __copy_from_user(task->thread.ckfp_state.fpr, from,
-				ELF_NFPREG * sizeof(double));
-}
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#endif
-
 /*
  * Save the current user registers on the user stack.
  * We only save the altivec/spe registers if the process has used
-- 
2.23.0


* [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

There are two almost identical copies for 32bit and 64bit.

The function is used only in 32bit code, which will be split out in a
later patch, so consolidate it into one function.
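
The consolidated helper keys the slow fallback off IS_ENABLED() so the
dead path is compiled out. A sketch of the resulting behavior (a
summary of the hunk below, not new code):

	read_user_stack_32():
	  PPC32: bounds/alignment check, then probe_user_read() only;
	         IS_ENABLED(CONFIG_PPC64) folds to 0 and the call to
	         read_user_stack_slow() is eliminated at compile time.
	  PPC64: same fast path, falling back to read_user_stack_slow()
	         when probe_user_read() faults.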

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v6:  new patch
v8:  move the consolidated function out of the ifdef block.
v11: rebase on top of def0bfdbd603
---
 arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..c9a78c6e4361 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 4);
-}
-
 static inline int valid_user_sp(unsigned long sp, int is_64)
 {
 	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
@@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 }
 
 #else  /* CONFIG_PPC64 */
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	return probe_user_read(ret, ptr, sizeof(*ret));
+	return 0;
 }
 
 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
@@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
 
 #endif /* CONFIG_PPC64 */
 
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
 /*
  * Layout for non-RT signal frames
  */
-- 
2.23.0


* [PATCH v11 4/8] powerpc/perf: consolidate valid_user_sp
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Merge the 32bit and 64bit versions.

Halve the check constants on 32bit.

Use STACK_TOP since it is defined in all configurations.

Passing is_64 is now redundant: is_32bit_task() already determines
which callchain variant should be used, so use STACK_TOP and
is_32bit_task() directly.

This removes a page from the valid 32bit area on 64bit:
 #define TASK_SIZE_USER32 (0x0000000100000000UL - (1 * PAGE_SIZE))
 #define STACK_TOP_USER32 TASK_SIZE_USER32
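
Worked out from the defines above for a 32bit task on a 64bit kernel
(illustrative numbers, assuming 64K pages):

	old check: !sp || (sp & 7) || sp > 0x100000000UL - 32
	new check: !sp || (sp & 3) || sp > STACK_TOP - 16
	           where STACK_TOP = STACK_TOP_USER32
	                           = 0x100000000 - PAGE_SIZE = 0xffff0000

i.e. the alignment requirement drops from 8 to 4 bytes and the highest
accepted stack pointer moves down by roughly one page.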

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v8: new patch
v11: simplify by using is_32bit_task()
---
 arch/powerpc/perf/callchain.c | 27 +++++++++++----------------
 1 file changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index c9a78c6e4361..194c7fd933e6 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -102,6 +102,15 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
+static inline int valid_user_sp(unsigned long sp)
+{
+	bool is_64 = !is_32bit_task();
+
+	if (!sp || (sp & (is_64 ? 7 : 3)) || sp > STACK_TOP - (is_64 ? 32 : 16))
+		return 0;
+	return 1;
+}
+
 #ifdef CONFIG_PPC64
 /*
  * On 64-bit we don't want to invoke hash_page on user addresses from
@@ -161,13 +170,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
-		return 0;
-	return 1;
-}
-
 /*
  * 64-bit user processes use the same stack frame for RT and non-RT signals.
  */
@@ -226,7 +228,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned long __user *) sp;
-		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+		if (!valid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
 			return;
@@ -275,13 +277,6 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 {
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > TASK_SIZE - 32)
-		return 0;
-	return 1;
-}
-
 #define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
 #define sigcontext32		sigcontext
 #define mcontext32		mcontext
@@ -423,7 +418,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned int __user *) (unsigned long) sp;
-		if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
+		if (!valid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
 			return;
-- 
2.23.0


* [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

There are numerous references to 32bit functions in generic and 64bit
code, so ifdef them out.
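
The key enabler is that is_32bit_task() becomes well defined in every
configuration (a summary of the thread_info.h hunk below, not new
code):

	is_32bit_task():
	  PPC32            -> 1 (constant)
	  PPC64 + COMPAT   -> test_thread_flag(TIF_32BIT)
	  PPC64, no COMPAT -> 0 (constant, so 32bit-only paths can be
	                        discarded by the compiler)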

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v2:
- fix 32bit ifdef condition in signal.c
- simplify the compat ifdef condition in vdso.c - 64bit is redundant
- simplify the compat ifdef condition in callchain.c - 64bit is redundant
v3:
- use IS_ENABLED and maybe_unused where possible
- do not ifdef declarations
- clean up Makefile
v4:
- further makefile cleanup
- simplify is_32bit_task conditions
- avoid ifdef in condition by using return
v5:
- avoid unreachable code on 32bit
- make is_current_64bit constant on !COMPAT
- add stub perf_callchain_user_32 to avoid some ifdefs
v6:
- consolidate current_is_64bit
v7:
- remove leftover perf_callchain_user_32 stub from previous series version
v8:
- fix build again - too trigger-happy with stub removal
- remove a vdso.c hunk that causes warning according to kbuild test robot
v9:
- removed current_is_64bit in previous patch
v10:
- rebase on top of 70ed86f4de5bd
---
 arch/powerpc/include/asm/thread_info.h | 4 ++--
 arch/powerpc/kernel/Makefile           | 6 +++---
 arch/powerpc/kernel/entry_64.S         | 2 ++
 arch/powerpc/kernel/signal.c           | 3 +--
 arch/powerpc/kernel/syscall_64.c       | 6 ++----
 arch/powerpc/kernel/vdso.c             | 3 ++-
 arch/powerpc/perf/callchain.c          | 8 +++++++-
 7 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index a2270749b282..ca6c97025704 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -162,10 +162,10 @@ static inline bool test_thread_local_flags(unsigned int flags)
 	return (ti->local_flags & flags) != 0;
 }
 
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_COMPAT
 #define is_32bit_task()	(test_thread_flag(TIF_32BIT))
 #else
-#define is_32bit_task()	(1)
+#define is_32bit_task()	(IS_ENABLED(CONFIG_PPC32))
 #endif
 
 #if defined(CONFIG_PPC64)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 5700231a8988..98a1c143b613 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -42,16 +42,16 @@ CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING
 endif
 
 obj-y				:= cputable.o ptrace.o syscalls.o \
-				   irq.o align.o signal_32.o pmc.o vdso.o \
+				   irq.o align.o signal_$(BITS).o pmc.o vdso.o \
 				   process.o systbl.o idle.o \
 				   signal.o sysfs.o cacheinfo.o time.o \
 				   prom.o traps.o setup-common.o \
 				   udbg.o misc.o io.o misc_$(BITS).o \
 				   of_platform.o prom_parse.o
-obj-$(CONFIG_PPC64)		+= setup_64.o sys_ppc32.o \
-				   signal_64.o ptrace32.o \
+obj-$(CONFIG_PPC64)		+= setup_64.o \
 				   paca.o nvram_64.o firmware.o note.o \
 				   syscall_64.o
+obj-$(CONFIG_COMPAT)		+= sys_ppc32.o ptrace32.o signal_32.o
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_PPC_WATCHDOG)	+= watchdog.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 4c0d0400e93d..fe1421e08f09 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -52,8 +52,10 @@
 SYS_CALL_TABLE:
 	.tc sys_call_table[TC],sys_call_table
 
+#ifdef CONFIG_COMPAT
 COMPAT_SYS_CALL_TABLE:
 	.tc compat_sys_call_table[TC],compat_sys_call_table
+#endif
 
 /* This value is used to mark exception frames on the stack. */
 exception_marker:
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index 4b0152108f61..a264989626fd 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk)
 	sigset_t *oldset = sigmask_to_save();
 	struct ksignal ksig = { .sig = 0 };
 	int ret;
-	int is32 = is_32bit_task();
 
 	BUG_ON(tsk != current);
 
@@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk)
 
 	rseq_signal_deliver(&ksig, tsk->thread.regs);
 
-	if (is32) {
+	if (is_32bit_task()) {
         	if (ksig.ka.sa.sa_flags & SA_SIGINFO)
 			ret = handle_rt_signal32(&ksig, oldset, tsk);
 		else
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 87d95b455b83..2dcbfe38f5ac 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
 				   long r6, long r7, long r8,
 				   unsigned long r0, struct pt_regs *regs)
 {
-	unsigned long ti_flags;
 	syscall_fn f;
 
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
@@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
 
 	local_irq_enable();
 
-	ti_flags = current_thread_info()->flags;
-	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
+	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
 		/*
 		 * We use the return value of do_syscall_trace_enter() as the
 		 * syscall number. If the syscall was rejected for any reason
@@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
 	/* May be faster to do array_index_nospec? */
 	barrier_nospec();
 
-	if (unlikely(ti_flags & _TIF_32BIT)) {
+	if (unlikely(is_32bit_task())) {
 		f = (void *)compat_sys_call_table[r0];
 
 		r3 &= 0x00000000ffffffffULL;
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index b9a108411c0d..77da3b7d304d 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -656,7 +656,8 @@ static void __init vdso_setup_syscall_map(void)
 		if (sys_call_table[i] != sys_ni_syscall)
 			vdso_data->syscall_map_64[i >> 5] |=
 				0x80000000UL >> (i & 0x1f);
-		if (compat_sys_call_table[i] != sys_ni_syscall)
+		if (IS_ENABLED(CONFIG_COMPAT) &&
+		    compat_sys_call_table[i] != sys_ni_syscall)
 			vdso_data->syscall_map_32[i >> 5] |=
 				0x80000000UL >> (i & 0x1f);
 #else /* CONFIG_PPC64 */
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 194c7fd933e6..8a274bd523b1 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -15,7 +15,7 @@
 #include <asm/sigcontext.h>
 #include <asm/ucontext.h>
 #include <asm/vdso.h>
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_COMPAT
 #include "../kernel/ppc32.h"
 #endif
 #include <asm/pte-walk.h>
@@ -285,6 +285,7 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 
 #endif /* CONFIG_PPC64 */
 
+#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT)
 /*
  * On 32-bit we just access the address and let hash_page create a
  * HPTE if necessary, so there is no need to fall back to reading
@@ -448,6 +449,11 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 		sp = next_sp;
 	}
 }
+#else /* 32bit */
+static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+				   struct pt_regs *regs)
+{}
+#endif /* 32bit */
 
 void
 perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
-- 
2.23.0


* [PATCH v11 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default.
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On bigendian ppc64 it is common to have 32bit legacy binaries, but
much less so on littleendian.
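
The resulting defaults, as the Kconfig hunk below reads:

	PPC64 big endian:    COMPAT default y, user-selectable
	PPC64 little endian: COMPAT default n, user-selectable
	PPC32:               option not available (depends on PPC64)

Users who still need 32bit binaries on ppc64le can simply switch the
option back on.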

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v3: make configurable
---
 arch/powerpc/Kconfig | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 497b7d0b2d7e..29d00b3959b9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -264,8 +264,9 @@ config PANIC_TIMEOUT
 	default 180
 
 config COMPAT
-	bool
-	default y if PPC64
+	bool "Enable support for 32bit binaries"
+	depends on PPC64
+	default y if !CPU_LITTLE_ENDIAN
 	select COMPAT_BINFMT_ELF
 	select ARCH_WANT_OLD_COMPAT_IPC
 	select COMPAT_OLD_SIGACTION
-- 
2.23.0


* [PATCH v11 7/8] powerpc/perf: split callchain.c by bitness
From: Michal Suchanek @ 2020-03-19 12:19 UTC
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Building callchain.c with !COMPAT proved quite ugly with all the
defines. Splitting out the 32bit and 64bit parts looks better.

No code change intended.
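
The objects built per configuration, following the Makefile hunk below:

	PPC64 + COMPAT : callchain.o callchain_64.o callchain_32.o
	PPC64, !COMPAT : callchain.o callchain_64.o
	PPC32          : callchain.o callchain_32.o

read_user_stack_slow() moves to callchain_64.c and is declared in the
new callchain.h, so the 32bit code keeps using it as a fallback when
built into a 64bit kernel.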

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v6:
 - move current_is_64bit consolidation to an earlier patch
 - move defines to the top of callchain_32.c
 - Makefile cleanup
v8:
 - fix valid_user_sp
v11:
 - rebase on top of def0bfdbd603
---
 arch/powerpc/perf/Makefile       |   5 +-
 arch/powerpc/perf/callchain.c    | 357 +------------------------------
 arch/powerpc/perf/callchain.h    |  20 ++
 arch/powerpc/perf/callchain_32.c | 196 +++++++++++++++++
 arch/powerpc/perf/callchain_64.c | 174 +++++++++++++++
 5 files changed, 395 insertions(+), 357 deletions(-)
 create mode 100644 arch/powerpc/perf/callchain.h
 create mode 100644 arch/powerpc/perf/callchain_32.c
 create mode 100644 arch/powerpc/perf/callchain_64.c

diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
index c155dcbb8691..53d614e98537 100644
--- a/arch/powerpc/perf/Makefile
+++ b/arch/powerpc/perf/Makefile
@@ -1,6 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 
-obj-$(CONFIG_PERF_EVENTS)	+= callchain.o perf_regs.o
+obj-$(CONFIG_PERF_EVENTS)	+= callchain.o callchain_$(BITS).o perf_regs.o
+ifdef CONFIG_COMPAT
+obj-$(CONFIG_PERF_EVENTS)	+= callchain_32.o
+endif
 
 obj-$(CONFIG_PPC_PERF_CTRS)	+= core-book3s.o bhrb.o
 obj64-$(CONFIG_PPC_PERF_CTRS)	+= ppc970-pmu.o power5-pmu.o \
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 8a274bd523b1..dd5051015008 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -15,11 +15,9 @@
 #include <asm/sigcontext.h>
 #include <asm/ucontext.h>
 #include <asm/vdso.h>
-#ifdef CONFIG_COMPAT
-#include "../kernel/ppc32.h"
-#endif
 #include <asm/pte-walk.h>
 
+#include "callchain.h"
 
 /*
  * Is sp valid as the address of the next kernel stack frame after prev_sp?
@@ -102,359 +100,6 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
-static inline int valid_user_sp(unsigned long sp)
-{
-	bool is_64 = !is_32bit_task();
-
-	if (!sp || (sp & (is_64 ? 7 : 3)) || sp > STACK_TOP - (is_64 ? 32 : 16))
-		return 0;
-	return 1;
-}
-
-#ifdef CONFIG_PPC64
-/*
- * On 64-bit we don't want to invoke hash_page on user addresses from
- * interrupt context, so if the access faults, we read the page tables
- * to find which page (if any) is mapped and access it directly.
- */
-static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
-{
-	int ret = -EFAULT;
-	pgd_t *pgdir;
-	pte_t *ptep, pte;
-	unsigned shift;
-	unsigned long addr = (unsigned long) ptr;
-	unsigned long offset;
-	unsigned long pfn, flags;
-	void *kaddr;
-
-	pgdir = current->mm->pgd;
-	if (!pgdir)
-		return -EFAULT;
-
-	local_irq_save(flags);
-	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
-	if (!ptep)
-		goto err_out;
-	if (!shift)
-		shift = PAGE_SHIFT;
-
-	/* align address to page boundary */
-	offset = addr & ((1UL << shift) - 1);
-
-	pte = READ_ONCE(*ptep);
-	if (!pte_present(pte) || !pte_user(pte))
-		goto err_out;
-	pfn = pte_pfn(pte);
-	if (!page_is_ram(pfn))
-		goto err_out;
-
-	/* no highmem to worry about here */
-	kaddr = pfn_to_kaddr(pfn);
-	memcpy(buf, kaddr + offset, nb);
-	ret = 0;
-err_out:
-	local_irq_restore(flags);
-	return ret;
-}
-
-static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
-	    ((unsigned long)ptr & 7))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 8);
-}
-
-/*
- * 64-bit user processes use the same stack frame for RT and non-RT signals.
- */
-struct signal_frame_64 {
-	char		dummy[__SIGNAL_FRAMESIZE];
-	struct ucontext	uc;
-	unsigned long	unused[2];
-	unsigned int	tramp[6];
-	struct siginfo	*pinfo;
-	void		*puc;
-	struct siginfo	info;
-	char		abigap[288];
-};
-
-static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
-{
-	if (nip == fp + offsetof(struct signal_frame_64, tramp))
-		return 1;
-	if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
-		return 1;
-	return 0;
-}
-
-/*
- * Do some sanity checking on the signal frame pointed to by sp.
- * We check the pinfo and puc pointers in the frame.
- */
-static int sane_signal_64_frame(unsigned long sp)
-{
-	struct signal_frame_64 __user *sf;
-	unsigned long pinfo, puc;
-
-	sf = (struct signal_frame_64 __user *) sp;
-	if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
-	    read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
-		return 0;
-	return pinfo == (unsigned long) &sf->info &&
-		puc == (unsigned long) &sf->uc;
-}
-
-static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{
-	unsigned long sp, next_sp;
-	unsigned long next_ip;
-	unsigned long lr;
-	long level = 0;
-	struct signal_frame_64 __user *sigframe;
-	unsigned long __user *fp, *uregs;
-
-	next_ip = perf_instruction_pointer(regs);
-	lr = regs->link;
-	sp = regs->gpr[1];
-	perf_callchain_store(entry, next_ip);
-
-	while (entry->nr < entry->max_stack) {
-		fp = (unsigned long __user *) sp;
-		if (!valid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
-			return;
-		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
-			return;
-
-		/*
-		 * Note: the next_sp - sp >= signal frame size check
-		 * is true when next_sp < sp, which can happen when
-		 * transitioning from an alternate signal stack to the
-		 * normal stack.
-		 */
-		if (next_sp - sp >= sizeof(struct signal_frame_64) &&
-		    (is_sigreturn_64_address(next_ip, sp) ||
-		     (level <= 1 && is_sigreturn_64_address(lr, sp))) &&
-		    sane_signal_64_frame(sp)) {
-			/*
-			 * This looks like an signal frame
-			 */
-			sigframe = (struct signal_frame_64 __user *) sp;
-			uregs = sigframe->uc.uc_mcontext.gp_regs;
-			if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
-			    read_user_stack_64(&uregs[PT_LNK], &lr) ||
-			    read_user_stack_64(&uregs[PT_R1], &sp))
-				return;
-			level = 0;
-			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
-			perf_callchain_store(entry, next_ip);
-			continue;
-		}
-
-		if (level == 0)
-			next_ip = lr;
-		perf_callchain_store(entry, next_ip);
-		++level;
-		sp = next_sp;
-	}
-}
-
-#else  /* CONFIG_PPC64 */
-static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
-{
-	return 0;
-}
-
-static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
-					  struct pt_regs *regs)
-{
-}
-
-#define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
-#define sigcontext32		sigcontext
-#define mcontext32		mcontext
-#define ucontext32		ucontext
-#define compat_siginfo_t	struct siginfo
-
-#endif /* CONFIG_PPC64 */
-
-#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT)
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	int rc;
-
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	rc = probe_user_read(ret, ptr, sizeof(*ret));
-
-	if (IS_ENABLED(CONFIG_PPC64) && rc)
-		return read_user_stack_slow(ptr, ret, 4);
-
-	return rc;
-}
-
-/*
- * Layout for non-RT signal frames
- */
-struct signal_frame_32 {
-	char			dummy[__SIGNAL_FRAMESIZE32];
-	struct sigcontext32	sctx;
-	struct mcontext32	mctx;
-	int			abigap[56];
-};
-
-/*
- * Layout for RT signal frames
- */
-struct rt_signal_frame_32 {
-	char			dummy[__SIGNAL_FRAMESIZE32 + 16];
-	compat_siginfo_t	info;
-	struct ucontext32	uc;
-	int			abigap[56];
-};
-
-static int is_sigreturn_32_address(unsigned int nip, unsigned int fp)
-{
-	if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad))
-		return 1;
-	if (vdso32_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso32_sigtramp)
-		return 1;
-	return 0;
-}
-
-static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp)
-{
-	if (nip == fp + offsetof(struct rt_signal_frame_32,
-				 uc.uc_mcontext.mc_pad))
-		return 1;
-	if (vdso32_rt_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso32_rt_sigtramp)
-		return 1;
-	return 0;
-}
-
-static int sane_signal_32_frame(unsigned int sp)
-{
-	struct signal_frame_32 __user *sf;
-	unsigned int regs;
-
-	sf = (struct signal_frame_32 __user *) (unsigned long) sp;
-	if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, &regs))
-		return 0;
-	return regs == (unsigned long) &sf->mctx;
-}
-
-static int sane_rt_signal_32_frame(unsigned int sp)
-{
-	struct rt_signal_frame_32 __user *sf;
-	unsigned int regs;
-
-	sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
-	if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, &regs))
-		return 0;
-	return regs == (unsigned long) &sf->uc.uc_mcontext;
-}
-
-static unsigned int __user *signal_frame_32_regs(unsigned int sp,
-				unsigned int next_sp, unsigned int next_ip)
-{
-	struct mcontext32 __user *mctx = NULL;
-	struct signal_frame_32 __user *sf;
-	struct rt_signal_frame_32 __user *rt_sf;
-
-	/*
-	 * Note: the next_sp - sp >= signal frame size check
-	 * is true when next_sp < sp, for example, when
-	 * transitioning from an alternate signal stack to the
-	 * normal stack.
-	 */
-	if (next_sp - sp >= sizeof(struct signal_frame_32) &&
-	    is_sigreturn_32_address(next_ip, sp) &&
-	    sane_signal_32_frame(sp)) {
-		sf = (struct signal_frame_32 __user *) (unsigned long) sp;
-		mctx = &sf->mctx;
-	}
-
-	if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
-	    is_rt_sigreturn_32_address(next_ip, sp) &&
-	    sane_rt_signal_32_frame(sp)) {
-		rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
-		mctx = &rt_sf->uc.uc_mcontext;
-	}
-
-	if (!mctx)
-		return NULL;
-	return mctx->mc_gregs;
-}
-
-static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{
-	unsigned int sp, next_sp;
-	unsigned int next_ip;
-	unsigned int lr;
-	long level = 0;
-	unsigned int __user *fp, *uregs;
-
-	next_ip = perf_instruction_pointer(regs);
-	lr = regs->link;
-	sp = regs->gpr[1];
-	perf_callchain_store(entry, next_ip);
-
-	while (entry->nr < entry->max_stack) {
-		fp = (unsigned int __user *) (unsigned long) sp;
-		if (!valid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
-			return;
-		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
-			return;
-
-		uregs = signal_frame_32_regs(sp, next_sp, next_ip);
-		if (!uregs && level <= 1)
-			uregs = signal_frame_32_regs(sp, next_sp, lr);
-		if (uregs) {
-			/*
-			 * This looks like an signal frame, so restart
-			 * the stack trace with the values in it.
-			 */
-			if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
-			    read_user_stack_32(&uregs[PT_LNK], &lr) ||
-			    read_user_stack_32(&uregs[PT_R1], &sp))
-				return;
-			level = 0;
-			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
-			perf_callchain_store(entry, next_ip);
-			continue;
-		}
-
-		if (level == 0)
-			next_ip = lr;
-		perf_callchain_store(entry, next_ip);
-		++level;
-		sp = next_sp;
-	}
-}
-#else /* 32bit */
-static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{}
-#endif /* 32bit */
-
 void
 perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
new file mode 100644
index 000000000000..8631a96d627d
--- /dev/null
+++ b/arch/powerpc/perf/callchain.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _POWERPC_PERF_CALLCHAIN_H
+#define _POWERPC_PERF_CALLCHAIN_H
+
+int read_user_stack_slow(void __user *ptr, void *buf, int nb);
+void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs);
+void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs);
+
+static inline int valid_user_sp(unsigned long sp)
+{
+	bool is_64 = !is_32bit_task();
+
+	if (!sp || (sp & (is_64 ? 7 : 3)) || sp > STACK_TOP - (is_64 ? 32 : 16))
+		return 0;
+	return 1;
+}
+
+#endif /* _POWERPC_PERF_CALLCHAIN_H */
diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c
new file mode 100644
index 000000000000..25729c651cb2
--- /dev/null
+++ b/arch/powerpc/perf/callchain_32.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance counter callchain support - powerpc architecture code
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corporation.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/perf_event.h>
+#include <linux/percpu.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/pgtable.h>
+#include <asm/sigcontext.h>
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/pte-walk.h>
+
+#include "callchain.h"
+
+#ifdef CONFIG_PPC64
+#include "../kernel/ppc32.h"
+#else  /* CONFIG_PPC64 */
+
+#define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
+#define sigcontext32		sigcontext
+#define mcontext32		mcontext
+#define ucontext32		ucontext
+#define compat_siginfo_t	struct siginfo
+
+#endif /* CONFIG_PPC64 */
+
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
+/*
+ * Layout for non-RT signal frames
+ */
+struct signal_frame_32 {
+	char			dummy[__SIGNAL_FRAMESIZE32];
+	struct sigcontext32	sctx;
+	struct mcontext32	mctx;
+	int			abigap[56];
+};
+
+/*
+ * Layout for RT signal frames
+ */
+struct rt_signal_frame_32 {
+	char			dummy[__SIGNAL_FRAMESIZE32 + 16];
+	compat_siginfo_t	info;
+	struct ucontext32	uc;
+	int			abigap[56];
+};
+
+static int is_sigreturn_32_address(unsigned int nip, unsigned int fp)
+{
+	if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad))
+		return 1;
+	if (vdso32_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso32_sigtramp)
+		return 1;
+	return 0;
+}
+
+static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp)
+{
+	if (nip == fp + offsetof(struct rt_signal_frame_32,
+				 uc.uc_mcontext.mc_pad))
+		return 1;
+	if (vdso32_rt_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso32_rt_sigtramp)
+		return 1;
+	return 0;
+}
+
+static int sane_signal_32_frame(unsigned int sp)
+{
+	struct signal_frame_32 __user *sf;
+	unsigned int regs;
+
+	sf = (struct signal_frame_32 __user *) (unsigned long) sp;
+	if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, &regs))
+		return 0;
+	return regs == (unsigned long) &sf->mctx;
+}
+
+static int sane_rt_signal_32_frame(unsigned int sp)
+{
+	struct rt_signal_frame_32 __user *sf;
+	unsigned int regs;
+
+	sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
+	if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, &regs))
+		return 0;
+	return regs == (unsigned long) &sf->uc.uc_mcontext;
+}
+
+static unsigned int __user *signal_frame_32_regs(unsigned int sp,
+				unsigned int next_sp, unsigned int next_ip)
+{
+	struct mcontext32 __user *mctx = NULL;
+	struct signal_frame_32 __user *sf;
+	struct rt_signal_frame_32 __user *rt_sf;
+
+	/*
+	 * Note: the next_sp - sp >= signal frame size check
+	 * is true when next_sp < sp, for example, when
+	 * transitioning from an alternate signal stack to the
+	 * normal stack.
+	 */
+	if (next_sp - sp >= sizeof(struct signal_frame_32) &&
+	    is_sigreturn_32_address(next_ip, sp) &&
+	    sane_signal_32_frame(sp)) {
+		sf = (struct signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &sf->mctx;
+	}
+
+	if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
+	    is_rt_sigreturn_32_address(next_ip, sp) &&
+	    sane_rt_signal_32_frame(sp)) {
+		rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &rt_sf->uc.uc_mcontext;
+	}
+
+	if (!mctx)
+		return NULL;
+	return mctx->mc_gregs;
+}
+
+void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned int sp, next_sp;
+	unsigned int next_ip;
+	unsigned int lr;
+	long level = 0;
+	unsigned int __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned int __user *) (unsigned long) sp;
+		if (!valid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
+			return;
+
+		uregs = signal_frame_32_regs(sp, next_sp, next_ip);
+		if (!uregs && level <= 1)
+			uregs = signal_frame_32_regs(sp, next_sp, lr);
+		if (uregs) {
+			/*
+			 * This looks like an signal frame, so restart
+			 * the stack trace with the values in it.
+			 */
+			if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_32(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_32(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
new file mode 100644
index 000000000000..7e8eed59dd18
--- /dev/null
+++ b/arch/powerpc/perf/callchain_64.c
@@ -0,0 +1,174 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance counter callchain support - powerpc architecture code
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corporation.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/perf_event.h>
+#include <linux/percpu.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/pgtable.h>
+#include <asm/sigcontext.h>
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/pte-walk.h>
+
+#include "callchain.h"
+
+/*
+ * On 64-bit we don't want to invoke hash_page on user addresses from
+ * interrupt context, so if the access faults, we read the page tables
+ * to find which page (if any) is mapped and access it directly.
+ */
+int read_user_stack_slow(void __user *ptr, void *buf, int nb)
+{
+	int ret = -EFAULT;
+	pgd_t *pgdir;
+	pte_t *ptep, pte;
+	unsigned int shift;
+	unsigned long addr = (unsigned long) ptr;
+	unsigned long offset;
+	unsigned long pfn, flags;
+	void *kaddr;
+
+	pgdir = current->mm->pgd;
+	if (!pgdir)
+		return -EFAULT;
+
+	local_irq_save(flags);
+	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
+	if (!ptep)
+		goto err_out;
+	if (!shift)
+		shift = PAGE_SHIFT;
+
+	/* align address to page boundary */
+	offset = addr & ((1UL << shift) - 1);
+
+	pte = READ_ONCE(*ptep);
+	if (!pte_present(pte) || !pte_user(pte))
+		goto err_out;
+	pfn = pte_pfn(pte);
+	if (!page_is_ram(pfn))
+		goto err_out;
+
+	/* no highmem to worry about here */
+	kaddr = pfn_to_kaddr(pfn);
+	memcpy(buf, kaddr + offset, nb);
+	ret = 0;
+err_out:
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
+{
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
+	    ((unsigned long)ptr & 7))
+		return -EFAULT;
+
+	if (!probe_user_read(ret, ptr, sizeof(*ret)))
+		return 0;
+
+	return read_user_stack_slow(ptr, ret, 8);
+}
+
+/*
+ * 64-bit user processes use the same stack frame for RT and non-RT signals.
+ */
+struct signal_frame_64 {
+	char		dummy[__SIGNAL_FRAMESIZE];
+	struct ucontext	uc;
+	unsigned long	unused[2];
+	unsigned int	tramp[6];
+	struct siginfo	*pinfo;
+	void		*puc;
+	struct siginfo	info;
+	char		abigap[288];
+};
+
+static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
+{
+	if (nip == fp + offsetof(struct signal_frame_64, tramp))
+		return 1;
+	if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
+		return 1;
+	return 0;
+}
+
+/*
+ * Do some sanity checking on the signal frame pointed to by sp.
+ * We check the pinfo and puc pointers in the frame.
+ */
+static int sane_signal_64_frame(unsigned long sp)
+{
+	struct signal_frame_64 __user *sf;
+	unsigned long pinfo, puc;
+
+	sf = (struct signal_frame_64 __user *) sp;
+	if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
+	    read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
+		return 0;
+	return pinfo == (unsigned long) &sf->info &&
+		puc == (unsigned long) &sf->uc;
+}
+
+void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned long sp, next_sp;
+	unsigned long next_ip;
+	unsigned long lr;
+	long level = 0;
+	struct signal_frame_64 __user *sigframe;
+	unsigned long __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned long __user *) sp;
+		if (!valid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
+			return;
+
+		/*
+		 * Note: the next_sp - sp >= signal frame size check
+		 * is true when next_sp < sp, which can happen when
+		 * transitioning from an alternate signal stack to the
+		 * normal stack.
+		 */
+		if (next_sp - sp >= sizeof(struct signal_frame_64) &&
+		    (is_sigreturn_64_address(next_ip, sp) ||
+		     (level <= 1 && is_sigreturn_64_address(lr, sp))) &&
+		    sane_signal_64_frame(sp)) {
+			/*
+			 * This looks like a signal frame
+			 */
+			sigframe = (struct signal_frame_64 __user *) sp;
+			uregs = sigframe->uc.uc_mcontext.gp_regs;
+			if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_64(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_64(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
-- 
2.23.0
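
A condensed toy model of the walk that both perf_callchain_user_32() and
perf_callchain_user_64() above implement may help when reading the split:
the "user stack" below is mocked as an array in which word 0 of a frame is
the back chain and word 2 is the saved LR, mirroring the &fp[2] read in the
64bit walker (the 32bit one reads &fp[1]). Every address, value and helper
here is made up for illustration, and the signal-frame restart is left out;
this is a sketch, not the kernel code.

        #include <stdio.h>

        /*
         * Mock stack with frames at indices 0, 4 and 8. A back chain of 0
         * ends the walk, standing in for invalid_user_sp() or a failed
         * read_user_stack_64().
         */
        static unsigned long stack[12] = {
                [0] = 4,  [2] = 0,       /* leaf frame, LR slot not yet filled */
                [4] = 8,  [6] = 0x2000,  /* middle frame, returns to 0x2000 */
                [8] = 0,  [10] = 0x3000, /* outermost recorded frame */
        };

        int main(void)
        {
                unsigned long sp = 0, next_sp, next_ip;
                unsigned long lr = 0x1800;       /* mock regs->link */
                int level = 0;

                printf("pc 0x1000\n");           /* mock perf_instruction_pointer() */
                for (;;) {
                        next_sp = stack[sp];
                        /* the leaf frame has not saved LR yet, use the register */
                        next_ip = level > 0 ? stack[sp + 2] : lr;
                        printf("pc %#lx\n", next_ip);
                        level++;
                        if (next_sp == 0)        /* stop, as the kernel returns */
                                break;
                        sp = next_sp;
                }
                return 0;
        }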


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-19 12:19 ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
                     ` (6 preceding siblings ...)
  2020-03-19 12:19   ` [PATCH v11 7/8] powerpc/perf: split callchain.c by bitness Michal Suchanek
@ 2020-03-19 12:19   ` Michal Suchanek
  2020-03-19 13:37     ` Andy Shevchenko
  2020-03-19 17:03     ` Joe Perches
  2020-03-19 12:36   ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Christophe Leroy
  2020-04-03  7:25   ` Nicholas Piggin
  9 siblings, 2 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-19 12:19 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v10: new patch
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index bc8dbe4fe4c9..329bf4a31412 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13088,6 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
 F:	arch/*/kernel/*/*/perf_event*.c
 F:	arch/*/include/asm/perf_event.h
 F:	arch/*/kernel/perf_callchain.c
+F:	arch/*/perf/*
+F:	arch/*/perf/*/*
 F:	arch/*/events/*
 F:	arch/*/events/*/*
 F:	tools/perf/
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-03-19 12:19 ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
                     ` (7 preceding siblings ...)
  2020-03-19 12:19   ` [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
@ 2020-03-19 12:36   ` Christophe Leroy
  2020-03-19 14:01     ` Michal Suchánek
  2020-04-03  7:25   ` Nicholas Piggin
  9 siblings, 1 reply; 53+ messages in thread
From: Christophe Leroy @ 2020-03-19 12:36 UTC (permalink / raw)
  To: Michal Suchanek, linuxppc-dev
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Andy Shevchenko, Thomas Gleixner, Arnd Bergmann, Nayna Jain,
	Eric Richter, Claudio Carvalho, Nicholas Piggin, Hari Bathini,
	Masahiro Yamada, Thiago Jung Bauermann,
	Sebastian Andrzej Siewior, Valentin Schneider, Jordan Niethe,
	Michael Neuling, Gustavo Luiz Duarte, Allison Randal,
	Eric W. Biederman, linux-kernel, linux-fsdevel

You sent it twice? Any difference between the two dispatches?

Christophe

On 19/03/2020 at 13:19, Michal Suchanek wrote:
> Less code means fewer bugs, so add a knob to skip the compat stuff.
> 
> Changes in v2: saner CONFIG_COMPAT ifdefs
> Changes in v3:
>   - change llseek to 32bit instead of building it unconditionally in fs
>   - cleanup the makefile conditionals
>   - remove some ifdefs or convert to IS_ENABLED where possible
> Changes in v4:
>   - cleanup is_32bit_task and current_is_64bit
>   - more makefile cleanup
> Changes in v5:
>   - more current_is_64bit cleanup
>   - split off callchain.c 32bit and 64bit parts
> Changes in v6:
>   - cleanup makefile after split
>   - consolidate read_user_stack_32
>   - fix some checkpatch warnings
> Changes in v7:
>   - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
>   - remove leftover hunk
>   - add review tags
> Changes in v8:
>   - consolidate valid_user_sp to fix it in the split callchain.c
>   - fix build errors/warnings with PPC64 !COMPAT and PPC32
> Changes in v9:
>   - remove current_is_64bit()
> Changes in v10:
>   - rebase, sent together with the syscall cleanup
> Changes in v11:
>   - rebase
>   - add MAINTAINERS pattern for ppc perf
> 
> Michal Suchanek (8):
>    powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
>    powerpc: move common register copy functions from signal_32.c to
>      signal.c
>    powerpc/perf: consolidate read_user_stack_32
>    powerpc/perf: consolidate valid_user_sp
>    powerpc/64: make buildable without CONFIG_COMPAT
>    powerpc/64: Make COMPAT user-selectable disabled on littleendian by
>      default.
>    powerpc/perf: split callchain.c by bitness
>    MAINTAINERS: perf: Add pattern that matches ppc perf to the perf
>      entry.
> 
>   MAINTAINERS                            |   2 +
>   arch/powerpc/Kconfig                   |   5 +-
>   arch/powerpc/include/asm/thread_info.h |   4 +-
>   arch/powerpc/include/asm/unistd.h      |   1 +
>   arch/powerpc/kernel/Makefile           |   6 +-
>   arch/powerpc/kernel/entry_64.S         |   2 +
>   arch/powerpc/kernel/signal.c           | 144 +++++++++-
>   arch/powerpc/kernel/signal_32.c        | 140 ----------
>   arch/powerpc/kernel/syscall_64.c       |   6 +-
>   arch/powerpc/kernel/vdso.c             |   3 +-
>   arch/powerpc/perf/Makefile             |   5 +-
>   arch/powerpc/perf/callchain.c          | 356 +------------------------
>   arch/powerpc/perf/callchain.h          |  20 ++
>   arch/powerpc/perf/callchain_32.c       | 196 ++++++++++++++
>   arch/powerpc/perf/callchain_64.c       | 174 ++++++++++++
>   fs/read_write.c                        |   3 +-
>   16 files changed, 556 insertions(+), 511 deletions(-)
>   create mode 100644 arch/powerpc/perf/callchain.h
>   create mode 100644 arch/powerpc/perf/callchain_32.c
>   create mode 100644 arch/powerpc/perf/callchain_64.c
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-19 12:19   ` [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
@ 2020-03-19 13:37     ` Andy Shevchenko
  2020-03-19 14:00       ` Michal Suchánek
  2020-03-19 17:03     ` Joe Perches
  1 sibling, 1 reply; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-19 13:37 UTC (permalink / raw)
  To: Michal Suchanek
  Cc: open list:LINUX FOR POWERPC PA SEMI PWRFICIENT,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Andy Shevchenko, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	Linux Kernel Mailing List, Linux FS Devel

On Thu, Mar 19, 2020 at 2:21 PM Michal Suchanek <msuchanek@suse.de> wrote:
>
> Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> ---
> v10: new patch
> ---
>  MAINTAINERS | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index bc8dbe4fe4c9..329bf4a31412 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13088,6 +13088,8 @@ F:      arch/*/kernel/*/perf_event*.c
>  F:     arch/*/kernel/*/*/perf_event*.c
>  F:     arch/*/include/asm/perf_event.h
>  F:     arch/*/kernel/perf_callchain.c
> +F:     arch/*/perf/*
> +F:     arch/*/perf/*/*
>  F:     arch/*/events/*
>  F:     arch/*/events/*/*
>  F:     tools/perf/

Had you run parse-maintainers.pl?

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-19 13:37     ` Andy Shevchenko
@ 2020-03-19 14:00       ` Michal Suchánek
  2020-03-19 14:26         ` Andy Shevchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-03-19 14:00 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: open list:LINUX FOR POWERPC PA SEMI PWRFICIENT,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Andy Shevchenko, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	Linux Kernel Mailing List, Linux FS Devel

On Thu, Mar 19, 2020 at 03:37:03PM +0200, Andy Shevchenko wrote:
> On Thu, Mar 19, 2020 at 2:21 PM Michal Suchanek <msuchanek@suse.de> wrote:
> >
> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> > ---
> > v10: new patch
> > ---
> >  MAINTAINERS | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index bc8dbe4fe4c9..329bf4a31412 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -13088,6 +13088,8 @@ F:      arch/*/kernel/*/perf_event*.c
> >  F:     arch/*/kernel/*/*/perf_event*.c
> >  F:     arch/*/include/asm/perf_event.h
> >  F:     arch/*/kernel/perf_callchain.c
> > +F:     arch/*/perf/*
> > +F:     arch/*/perf/*/*
> >  F:     arch/*/events/*
> >  F:     arch/*/events/*/*
> >  F:     tools/perf/
> 
> Had you run parse-maintainers.pl?
I did not know it existed. The output is:

scripts/parse-maintainers.pl 
Odd non-pattern line '
Documentation/devicetree/bindings/media/ti,cal.yaml
' for 'TI VPE/CAL DRIVERS' at scripts/parse-maintainers.pl line 147,
<$file> line 16756.

Thanks

Michal

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-03-19 12:36   ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Christophe Leroy
@ 2020-03-19 14:01     ` Michal Suchánek
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-03-19 14:01 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Thu, Mar 19, 2020 at 01:36:56PM +0100, Christophe Leroy wrote:
> You sent it twice? Any difference between the two dispatches?
Some headers were broken the first time around.

Thanks

Michal
> 
> Christophe
> 
> On 19/03/2020 at 13:19, Michal Suchanek wrote:
> > Less code means fewer bugs, so add a knob to skip the compat stuff.
> > 
> > Changes in v2: saner CONFIG_COMPAT ifdefs
> > Changes in v3:
> >   - change llseek to 32bit instead of building it unconditionally in fs
> >   - cleanup the makefile conditionals
> >   - remove some ifdefs or convert to IS_ENABLED where possible
> > Changes in v4:
> >   - cleanup is_32bit_task and current_is_64bit
> >   - more makefile cleanup
> > Changes in v5:
> >   - more current_is_64bit cleanup
> >   - split off callchain.c 32bit and 64bit parts
> > Changes in v6:
> >   - cleanup makefile after split
> >   - consolidate read_user_stack_32
> >   - fix some checkpatch warnings
> > Changes in v7:
> >   - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
> >   - remove leftover hunk
> >   - add review tags
> > Changes in v8:
> >   - consolidate valid_user_sp to fix it in the split callchain.c
> >   - fix build errors/warnings with PPC64 !COMPAT and PPC32
> > Changes in v9:
> >   - remove current_is_64bit()
> > Changes in v10:
> >   - rebase, sent together with the syscall cleanup
> > Changes in v11:
> >   - rebase
> >   - add MAINTAINERS pattern for ppc perf
> > 
> > Michal Suchanek (8):
> >    powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
> >    powerpc: move common register copy functions from signal_32.c to
> >      signal.c
> >    powerpc/perf: consolidate read_user_stack_32
> >    powerpc/perf: consolidate valid_user_sp
> >    powerpc/64: make buildable without CONFIG_COMPAT
> >    powerpc/64: Make COMPAT user-selectable disabled on littleendian by
> >      default.
> >    powerpc/perf: split callchain.c by bitness
> >    MAINTAINERS: perf: Add pattern that matches ppc perf to the perf
> >      entry.
> > 
> >   MAINTAINERS                            |   2 +
> >   arch/powerpc/Kconfig                   |   5 +-
> >   arch/powerpc/include/asm/thread_info.h |   4 +-
> >   arch/powerpc/include/asm/unistd.h      |   1 +
> >   arch/powerpc/kernel/Makefile           |   6 +-
> >   arch/powerpc/kernel/entry_64.S         |   2 +
> >   arch/powerpc/kernel/signal.c           | 144 +++++++++-
> >   arch/powerpc/kernel/signal_32.c        | 140 ----------
> >   arch/powerpc/kernel/syscall_64.c       |   6 +-
> >   arch/powerpc/kernel/vdso.c             |   3 +-
> >   arch/powerpc/perf/Makefile             |   5 +-
> >   arch/powerpc/perf/callchain.c          | 356 +------------------------
> >   arch/powerpc/perf/callchain.h          |  20 ++
> >   arch/powerpc/perf/callchain_32.c       | 196 ++++++++++++++
> >   arch/powerpc/perf/callchain_64.c       | 174 ++++++++++++
> >   fs/read_write.c                        |   3 +-
> >   16 files changed, 556 insertions(+), 511 deletions(-)
> >   create mode 100644 arch/powerpc/perf/callchain.h
> >   create mode 100644 arch/powerpc/perf/callchain_32.c
> >   create mode 100644 arch/powerpc/perf/callchain_64.c
> > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-19 14:00       ` Michal Suchánek
@ 2020-03-19 14:26         ` Andy Shevchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-19 14:26 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: open list:LINUX FOR POWERPC PA SEMI PWRFICIENT,
	Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Christophe Leroy, Thomas Gleixner, Arnd Bergmann, Nayna Jain,
	Eric Richter, Claudio Carvalho, Nicholas Piggin, Hari Bathini,
	Masahiro Yamada, Thiago Jung Bauermann,
	Sebastian Andrzej Siewior, Valentin Schneider, Jordan Niethe,
	Michael Neuling, Gustavo Luiz Duarte, Allison Randal,
	Eric W. Biederman, Linux Kernel Mailing List, Linux FS Devel

On Thu, Mar 19, 2020 at 03:00:08PM +0100, Michal Suchánek wrote:
> On Thu, Mar 19, 2020 at 03:37:03PM +0200, Andy Shevchenko wrote:
> > On Thu, Mar 19, 2020 at 2:21 PM Michal Suchanek <msuchanek@suse.de> wrote:
> > >
> > > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> > > ---
> > > v10: new patch
> > > ---
> > >  MAINTAINERS | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/MAINTAINERS b/MAINTAINERS
> > > index bc8dbe4fe4c9..329bf4a31412 100644
> > > --- a/MAINTAINERS
> > > +++ b/MAINTAINERS
> > > @@ -13088,6 +13088,8 @@ F:      arch/*/kernel/*/perf_event*.c
> > >  F:     arch/*/kernel/*/*/perf_event*.c
> > >  F:     arch/*/include/asm/perf_event.h
> > >  F:     arch/*/kernel/perf_callchain.c
> > > +F:     arch/*/perf/*
> > > +F:     arch/*/perf/*/*
> > >  F:     arch/*/events/*
> > >  F:     arch/*/events/*/*
> > >  F:     tools/perf/
> > 
> > Had you run parse-maintainers.pl?
> I did not know it existed. The output is:
> 
> scripts/parse-maintainers.pl 
> Odd non-pattern line '
> Documentation/devicetree/bindings/media/ti,cal.yaml
> ' for 'TI VPE/CAL DRIVERS' at scripts/parse-maintainers.pl line 147,
> <$file> line 16756.

It is fixed in the media tree and available in linux-next as

d44535cb14c9 ("media: MAINTAINERS: Sort entries in database for TI VPE/CAL")

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-19 12:19   ` [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
  2020-03-19 13:37     ` Andy Shevchenko
@ 2020-03-19 17:03     ` Joe Perches
  1 sibling, 0 replies; 53+ messages in thread
From: Joe Perches @ 2020-03-19 17:03 UTC (permalink / raw)
  To: Michal Suchanek, linuxppc-dev
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Andy Shevchenko, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Thu, 2020-03-19 at 13:19 +0100, Michal Suchanek wrote:
> Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> ---
> v10: new patch
> ---
>  MAINTAINERS | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index bc8dbe4fe4c9..329bf4a31412 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13088,6 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
>  F:	arch/*/kernel/*/*/perf_event*.c
>  F:	arch/*/include/asm/perf_event.h
>  F:	arch/*/kernel/perf_callchain.c
> +F:	arch/*/perf/*
> +F:	arch/*/perf/*/*

While I understand the desire, I believe that
repetitive listings like this don't really
help much.

Having a single entry of:

F:	arch/*/perf/

would serve the same purpose.

Nominally, the difference between the two entries
and the single entry is this:

F:	arch/*/perf/*

Only the files directly in a directory that matches
this pattern are maintained, not any files in its
subdirectories.

F:	arch/*/perf/*/*

Only the files in a top-level subdirectory of a
directory that matches this pattern are maintained,
not files nested any deeper.

F:	arch/*/perf/

Any file in a directory that matches this pattern,
at any depth, is maintained.
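
To make that concrete with a few hypothetical paths (the foo/ and bar/
subdirectories are invented for illustration):

        arch/powerpc/perf/callchain.c    hit by arch/*/perf/* and arch/*/perf/
        arch/powerpc/perf/foo/imc.c      hit by arch/*/perf/*/* and arch/*/perf/
        arch/powerpc/perf/foo/bar/x.c    hit by arch/*/perf/ only

so the single trailing-slash entry subsumes both wildcard entries at any
depth.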



^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v12 0/8] Disable compat cruft on ppc64le v12
       [not found] <20200225173541.1549955-1-npiggin@gmail.com>
  2020-03-19 12:19 ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
@ 2020-03-20 10:20 ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Michal Suchanek
                     ` (7 more replies)
  1 sibling, 8 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Less code means fewer bugs, so add a knob to skip the compat stuff.

Changes in v2: saner CONFIG_COMPAT ifdefs
Changes in v3:
 - change llseek to 32bit instead of building it unconditionally in fs
 - cleanup the makefile conditionals
 - remove some ifdefs or convert to IS_ENABLED where possible
Changes in v4:
 - cleanup is_32bit_task and current_is_64bit
 - more makefile cleanup
Changes in v5:
 - more current_is_64bit cleanup
 - split off callchain.c 32bit and 64bit parts
Changes in v6:
 - cleanup makefile after split
 - consolidate read_user_stack_32
 - fix some checkpatch warnings
Changes in v7:
 - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
 - remove leftover hunk
 - add review tags
Changes in v8:
 - consolidate valid_user_sp to fix it in the split callchain.c
 - fix build errors/warnings with PPC64 !COMPAT and PPC32
Changes in v9:
 - remove current_is_64bit()
Changes in v10:
 - rebase, sent together with the syscall cleanup
Changes in v11:
 - rebase
 - add MAINTAINERS pattern for ppc perf
Changes in v12:
 - simplify valid_user_sp and change to invalid_user_sp
 - remove superfluous perf patterns in MAINTAINERS

Michal Suchanek (8):
  powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
  powerpc: move common register copy functions from signal_32.c to
    signal.c
  powerpc/perf: consolidate read_user_stack_32
  powerpc/perf: consolidate valid_user_sp -> invalid_user_sp
  powerpc/64: make buildable without CONFIG_COMPAT
  powerpc/64: Make COMPAT user-selectable disabled on littleendian by
    default.
  powerpc/perf: split callchain.c by bitness
  MAINTAINERS: perf: Add pattern that matches ppc perf to the perf
    entry.

 MAINTAINERS                            |   6 +-
 arch/powerpc/Kconfig                   |   5 +-
 arch/powerpc/include/asm/thread_info.h |   4 +-
 arch/powerpc/include/asm/unistd.h      |   1 +
 arch/powerpc/kernel/Makefile           |   6 +-
 arch/powerpc/kernel/entry_64.S         |   2 +
 arch/powerpc/kernel/signal.c           | 144 +++++++++-
 arch/powerpc/kernel/signal_32.c        | 140 ----------
 arch/powerpc/kernel/syscall_64.c       |   6 +-
 arch/powerpc/kernel/vdso.c             |   3 +-
 arch/powerpc/perf/Makefile             |   5 +-
 arch/powerpc/perf/callchain.c          | 356 +------------------------
 arch/powerpc/perf/callchain.h          |  19 ++
 arch/powerpc/perf/callchain_32.c       | 196 ++++++++++++++
 arch/powerpc/perf/callchain_64.c       | 174 ++++++++++++
 fs/read_write.c                        |   3 +-
 16 files changed, 556 insertions(+), 514 deletions(-)
 create mode 100644 arch/powerpc/perf/callchain.h
 create mode 100644 arch/powerpc/perf/callchain_32.c
 create mode 100644 arch/powerpc/perf/callchain_64.c

-- 
2.23.0


^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 2/8] powerpc: move common register copy functions from signal_32.c to signal.c Michal Suchanek
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

This partially reverts commit caf6f9c8a326 ("asm-generic: Remove
unneeded __ARCH_WANT_SYS_LLSEEK macro")

When CONFIG_COMPAT is disabled on ppc64 the kernel does not build.

There is resistance to both removing the llseek syscall from the 64bit
syscall tables and building the llseek interface unconditionally.

Link: https://lore.kernel.org/lkml/20190828151552.GA16855@infradead.org/
Link: https://lore.kernel.org/lkml/20190829214319.498c7de2@naga/

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
---
v7: new patch
---
 arch/powerpc/include/asm/unistd.h | 1 +
 fs/read_write.c                   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/unistd.h b/arch/powerpc/include/asm/unistd.h
index b0720c7c3fcf..700fcdac2e3c 100644
--- a/arch/powerpc/include/asm/unistd.h
+++ b/arch/powerpc/include/asm/unistd.h
@@ -31,6 +31,7 @@
 #define __ARCH_WANT_SYS_SOCKETCALL
 #define __ARCH_WANT_SYS_FADVISE64
 #define __ARCH_WANT_SYS_GETPGRP
+#define __ARCH_WANT_SYS_LLSEEK
 #define __ARCH_WANT_SYS_NICE
 #define __ARCH_WANT_SYS_OLD_GETRLIMIT
 #define __ARCH_WANT_SYS_OLD_UNAME
diff --git a/fs/read_write.c b/fs/read_write.c
index 59d819c5b92e..bbfa9b12b15e 100644
--- a/fs/read_write.c
+++ b/fs/read_write.c
@@ -331,7 +331,8 @@ COMPAT_SYSCALL_DEFINE3(lseek, unsigned int, fd, compat_off_t, offset, unsigned i
 }
 #endif
 
-#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT)
+#if !defined(CONFIG_64BIT) || defined(CONFIG_COMPAT) || \
+	defined(__ARCH_WANT_SYS_LLSEEK)
 SYSCALL_DEFINE5(llseek, unsigned int, fd, unsigned long, offset_high,
 		unsigned long, offset_low, loff_t __user *, result,
 		unsigned int, whence)
-- 
2.23.0
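
As background on why the entry point has to stay buildable: 32bit userspace
passes the 64bit offset to _llseek split across two registers and receives
the result through a pointer. A hedged userspace sketch of that calling
shape follows; the fd and offset are arbitrary, and only ABIs that define
__NR__llseek (32bit ones) take the first branch.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void)
        {
        #ifdef __NR__llseek
                unsigned long long off = 1ULL << 32;  /* >4GiB needs llseek on 32bit */
                long long result = 0;

                /* offset_high/offset_low in registers, result via pointer */
                if (syscall(__NR__llseek, 0,
                            (unsigned long)(off >> 32),
                            (unsigned long)(off & 0xffffffffUL),
                            &result, SEEK_SET) == 0)
                        printf("pos %lld\n", result);
        #else
                puts("no __NR__llseek on this ABI; 64bit code calls lseek directly");
        #endif
                return 0;
        }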


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v12 2/8] powerpc: move common register copy functions from signal_32.c to signal.c
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

These functions are required for 64bit as well.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/signal.c    | 141 ++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/signal_32.c | 140 -------------------------------
 2 files changed, 141 insertions(+), 140 deletions(-)

diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index d215f9554553..4b0152108f61 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -18,12 +18,153 @@
 #include <linux/syscalls.h>
 #include <asm/hw_breakpoint.h>
 #include <linux/uaccess.h>
+#include <asm/switch_to.h>
 #include <asm/unistd.h>
 #include <asm/debug.h>
 #include <asm/tm.h>
 
 #include "signal.h"
 
+#ifdef CONFIG_VSX
+unsigned long copy_fpr_to_user(void __user *to,
+			       struct task_struct *task)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to user memory */
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		buf[i] = task->thread.TS_FPR(i);
+	buf[i] = task->thread.fp_state.fpscr;
+	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
+}
+
+unsigned long copy_fpr_from_user(struct task_struct *task,
+				 void __user *from)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		task->thread.TS_FPR(i) = buf[i];
+	task->thread.fp_state.fpscr = buf[i];
+
+	return 0;
+}
+
+unsigned long copy_vsx_to_user(void __user *to,
+			       struct task_struct *task)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to user memory */
+	for (i = 0; i < ELF_NVSRHALFREG; i++)
+		buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
+	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
+}
+
+unsigned long copy_vsx_from_user(struct task_struct *task,
+				 void __user *from)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < ELF_NVSRHALFREG ; i++)
+		task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
+	return 0;
+}
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+unsigned long copy_ckfpr_to_user(void __user *to,
+				  struct task_struct *task)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to user memory */
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		buf[i] = task->thread.TS_CKFPR(i);
+	buf[i] = task->thread.ckfp_state.fpscr;
+	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
+}
+
+unsigned long copy_ckfpr_from_user(struct task_struct *task,
+					  void __user *from)
+{
+	u64 buf[ELF_NFPREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
+		task->thread.TS_CKFPR(i) = buf[i];
+	task->thread.ckfp_state.fpscr = buf[i];
+
+	return 0;
+}
+
+unsigned long copy_ckvsx_to_user(void __user *to,
+				  struct task_struct *task)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	/* save FPR copy to local buffer then write to user memory */
+	for (i = 0; i < ELF_NVSRHALFREG; i++)
+		buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
+	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
+}
+
+unsigned long copy_ckvsx_from_user(struct task_struct *task,
+					  void __user *from)
+{
+	u64 buf[ELF_NVSRHALFREG];
+	int i;
+
+	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
+		return 1;
+	for (i = 0; i < ELF_NVSRHALFREG ; i++)
+		task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
+	return 0;
+}
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#else
+inline unsigned long copy_fpr_to_user(void __user *to,
+				      struct task_struct *task)
+{
+	return __copy_to_user(to, task->thread.fp_state.fpr,
+			      ELF_NFPREG * sizeof(double));
+}
+
+inline unsigned long copy_fpr_from_user(struct task_struct *task,
+					void __user *from)
+{
+	return __copy_from_user(task->thread.fp_state.fpr, from,
+			      ELF_NFPREG * sizeof(double));
+}
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+inline unsigned long copy_ckfpr_to_user(void __user *to,
+					 struct task_struct *task)
+{
+	return __copy_to_user(to, task->thread.ckfp_state.fpr,
+			      ELF_NFPREG * sizeof(double));
+}
+
+inline unsigned long copy_ckfpr_from_user(struct task_struct *task,
+						 void __user *from)
+{
+	return __copy_from_user(task->thread.ckfp_state.fpr, from,
+				ELF_NFPREG * sizeof(double));
+}
+#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
+#endif
+
 /* Log an error when sending an unhandled signal to a process. Controlled
  * through debug.exception-trace sysctl.
  */
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 1b090a76b444..4f96d29a22bf 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -235,146 +235,6 @@ struct rt_sigframe {
 	int			abigap[56];
 };
 
-#ifdef CONFIG_VSX
-unsigned long copy_fpr_to_user(void __user *to,
-			       struct task_struct *task)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		buf[i] = task->thread.TS_FPR(i);
-	buf[i] = task->thread.fp_state.fpscr;
-	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
-}
-
-unsigned long copy_fpr_from_user(struct task_struct *task,
-				 void __user *from)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		task->thread.TS_FPR(i) = buf[i];
-	task->thread.fp_state.fpscr = buf[i];
-
-	return 0;
-}
-
-unsigned long copy_vsx_to_user(void __user *to,
-			       struct task_struct *task)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < ELF_NVSRHALFREG; i++)
-		buf[i] = task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET];
-	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
-}
-
-unsigned long copy_vsx_from_user(struct task_struct *task,
-				 void __user *from)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < ELF_NVSRHALFREG ; i++)
-		task->thread.fp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
-	return 0;
-}
-
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-unsigned long copy_ckfpr_to_user(void __user *to,
-				  struct task_struct *task)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		buf[i] = task->thread.TS_CKFPR(i);
-	buf[i] = task->thread.ckfp_state.fpscr;
-	return __copy_to_user(to, buf, ELF_NFPREG * sizeof(double));
-}
-
-unsigned long copy_ckfpr_from_user(struct task_struct *task,
-					  void __user *from)
-{
-	u64 buf[ELF_NFPREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NFPREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < (ELF_NFPREG - 1) ; i++)
-		task->thread.TS_CKFPR(i) = buf[i];
-	task->thread.ckfp_state.fpscr = buf[i];
-
-	return 0;
-}
-
-unsigned long copy_ckvsx_to_user(void __user *to,
-				  struct task_struct *task)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	/* save FPR copy to local buffer then write to the thread_struct */
-	for (i = 0; i < ELF_NVSRHALFREG; i++)
-		buf[i] = task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET];
-	return __copy_to_user(to, buf, ELF_NVSRHALFREG * sizeof(double));
-}
-
-unsigned long copy_ckvsx_from_user(struct task_struct *task,
-					  void __user *from)
-{
-	u64 buf[ELF_NVSRHALFREG];
-	int i;
-
-	if (__copy_from_user(buf, from, ELF_NVSRHALFREG * sizeof(double)))
-		return 1;
-	for (i = 0; i < ELF_NVSRHALFREG ; i++)
-		task->thread.ckfp_state.fpr[i][TS_VSRLOWOFFSET] = buf[i];
-	return 0;
-}
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#else
-inline unsigned long copy_fpr_to_user(void __user *to,
-				      struct task_struct *task)
-{
-	return __copy_to_user(to, task->thread.fp_state.fpr,
-			      ELF_NFPREG * sizeof(double));
-}
-
-inline unsigned long copy_fpr_from_user(struct task_struct *task,
-					void __user *from)
-{
-	return __copy_from_user(task->thread.fp_state.fpr, from,
-			      ELF_NFPREG * sizeof(double));
-}
-
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-inline unsigned long copy_ckfpr_to_user(void __user *to,
-					 struct task_struct *task)
-{
-	return __copy_to_user(to, task->thread.ckfp_state.fpr,
-			      ELF_NFPREG * sizeof(double));
-}
-
-inline unsigned long copy_ckfpr_from_user(struct task_struct *task,
-						 void __user *from)
-{
-	return __copy_from_user(task->thread.ckfp_state.fpr, from,
-				ELF_NFPREG * sizeof(double));
-}
-#endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#endif
-
 /*
  * Save the current user registers on the user stack.
  * We only save the altivec/spe registers if the process has used
-- 
2.23.0
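
A side note on the shape these helpers share: the values are gathered from
the strided thread_struct storage into a local contiguous buffer so one
__copy_to_user() suffices, and the loop stops one element short so the last
slot can carry the fpscr. A runnable toy of that gather step (fp_state,
NFPREG and the [i][0] access here are mocked stand-ins, not the kernel
layout):

        #include <stdio.h>
        #include <string.h>

        #define NFPREG 33       /* 32 FPRs + fpscr, like ELF_NFPREG */

        /* mock of the strided per-register storage in thread_struct */
        struct fp_state {
                unsigned long long fpr[NFPREG - 1][2];
                unsigned long long fpscr;
        };

        static void gather(const struct fp_state *st, unsigned long long *out)
        {
                int i;

                for (i = 0; i < NFPREG - 1; i++)
                        out[i] = st->fpr[i][0];  /* TS_FPR() stand-in */
                out[i] = st->fpscr;              /* i == NFPREG - 1 here */
        }

        int main(void)
        {
                struct fp_state st;
                unsigned long long buf[NFPREG];

                memset(&st, 0, sizeof(st));
                st.fpr[0][0] = 1;
                st.fpscr = 0xff;
                gather(&st, buf);
                printf("%llu %llx\n", buf[0], buf[NFPREG - 1]); /* 1 ff */
                return 0;
        }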


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v12 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 2/8] powerpc: move common register copy functions from signal_32.c to signal.c Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp Michal Suchanek
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

There are two almost identical copies for 32bit and 64bit.

The function is used only in 32bit code, which will be split out in the next
patch, so consolidate to one function.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v6:  new patch
v8:  move the consolidated function out of the ifdef block.
v11: rebase on top of def0bfdbd603
---
 arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..c9a78c6e4361 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 4);
-}
-
 static inline int valid_user_sp(unsigned long sp, int is_64)
 {
 	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
@@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 }
 
 #else  /* CONFIG_PPC64 */
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	return probe_user_read(ret, ptr, sizeof(*ret));
+	return 0;
 }
 
 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
@@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
 
 #endif /* CONFIG_PPC64 */
 
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
 /*
  * Layout for non-RT signal frames
  */
-- 
2.23.0
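
The consolidation leans on a property of IS_ENABLED() worth spelling out:
the disabled branch is still parsed and type-checked before being discarded
as dead code, which is why read_user_stack_slow() only needs a trivial stub
on 32bit rather than an #ifdef at the call site. A toy stand-in (FEATURE
replaces a real CONFIG_ symbol, and the macro here is NOT the kconfig
implementation, which works via token pasting):

        #include <stdio.h>

        #define FEATURE 0               /* pretend CONFIG_PPC64=n */
        #define IS_ENABLED(x) (x)       /* toy stand-in only */

        static int slow_path(int *v)    /* must still exist and type-check */
        {
                *v = 42;
                return 0;
        }

        static int read_word(int *v, int rc)
        {
                if (IS_ENABLED(FEATURE) && rc)
                        return slow_path(v);  /* folded away when FEATURE is 0 */
                return rc;
        }

        int main(void)
        {
                int v = 7;

                printf("%d %d\n", read_word(&v, -1), v);  /* prints: -1 7 */
                return 0;
        }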


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
                     ` (2 preceding siblings ...)
  2020-03-20 10:20   ` [PATCH v12 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Merge the 32bit and 64bit versions.

Halve the check constants on 32bit.

Use STACK_TOP since it is defined.

Passing is_64 is now redundant since is_32bit_task() is used to
determine which callchain variant should be used. Use STACK_TOP and
is_32bit_task() directly.

This removes a page from the valid 32bit area on 64bit:
 #define TASK_SIZE_USER32 (0x0000000100000000UL - (1 * PAGE_SIZE))
 #define STACK_TOP_USER32 TASK_SIZE_USER32

Change return value to bool. It is inverted by users anyway.

Change to invalid_user_sp to avoid inverting the return value twice.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v8: new patch
v11: simplify by using is_32bit_task()
v12:
 - simplify by precalculating subexpressions
 - change return value to bool
 - remove double inversion
---
 arch/powerpc/perf/callchain.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index c9a78c6e4361..001d0473a61f 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -102,6 +102,14 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
+static inline bool invalid_user_sp(unsigned long sp)
+{
+	unsigned long mask = is_32bit_task() ? 3 : 7;
+	unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);
+
+	return (!sp || (sp & mask) || (sp > top));
+}
+
 #ifdef CONFIG_PPC64
 /*
  * On 64-bit we don't want to invoke hash_page on user addresses from
@@ -161,13 +169,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
-		return 0;
-	return 1;
-}
-
 /*
  * 64-bit user processes use the same stack frame for RT and non-RT signals.
  */
@@ -226,7 +227,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned long __user *) sp;
-		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
 			return;
@@ -275,13 +276,6 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 {
 }
 
-static inline int valid_user_sp(unsigned long sp, int is_64)
-{
-	if (!sp || (sp & 7) || sp > TASK_SIZE - 32)
-		return 0;
-	return 1;
-}
-
 #define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
 #define sigcontext32		sigcontext
 #define mcontext32		mcontext
@@ -423,7 +417,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 
 	while (entry->nr < entry->max_stack) {
 		fp = (unsigned int __user *) (unsigned long) sp;
-		if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
+		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
 			return;
 		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
 			return;
-- 
2.23.0
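
To see the merged arithmetic with numbers, here is a userspace rendering
with mocked STACK_TOP and is_32bit_task(); the constants are made up for
illustration and it assumes a 64bit unsigned long, as on ppc64.

        #include <stdbool.h>
        #include <stdio.h>

        #define STACK_TOP 0x100000000UL          /* mock 32bit-task value */
        static bool is_32bit_task(void) { return true; }

        static bool invalid_user_sp(unsigned long sp)
        {
                unsigned long mask = is_32bit_task() ? 3 : 7;
                unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);

                return (!sp || (sp & mask) || (sp > top));
        }

        int main(void)
        {
                printf("%d\n", invalid_user_sp(0));           /* 1: NULL sp */
                printf("%d\n", invalid_user_sp(0xbffffff2));  /* 1: misaligned */
                printf("%d\n", invalid_user_sp(0xbffffff0));  /* 0: plausible sp */
                return 0;
        }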


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
                     ` (3 preceding siblings ...)
  2020-03-20 10:20   ` [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-04-07  5:50     ` Christophe Leroy
  2020-03-20 10:20   ` [PATCH v12 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default Michal Suchanek
                     ` (2 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

There are numerous references to 32bit functions in generic and 64bit
code, so ifdef them out.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v2:
- fix 32bit ifdef condition in signal.c
- simplify the compat ifdef condition in vdso.c - 64bit is redundant
- simplify the compat ifdef condition in callchain.c - 64bit is redundant
v3:
- use IS_ENABLED and maybe_unused where possible
- do not ifdef declarations
- clean up Makefile
v4:
- further makefile cleanup
- simplify is_32bit_task conditions
- avoid ifdef in condition by using return
v5:
- avoid unreachable code on 32bit
- make is_current_64bit constant on !COMPAT
- add stub perf_callchain_user_32 to avoid some ifdefs
v6:
- consolidate current_is_64bit
v7:
- remove leftover perf_callchain_user_32 stub from previous series version
v8:
- fix build again - too trigger-happy with stub removal
- remove a vdso.c hunk that causes warning according to kbuild test robot
v9:
- removed current_is_64bit in previous patch
v10:
- rebase on top of 70ed86f4de5bd
---
 arch/powerpc/include/asm/thread_info.h | 4 ++--
 arch/powerpc/kernel/Makefile           | 6 +++---
 arch/powerpc/kernel/entry_64.S         | 2 ++
 arch/powerpc/kernel/signal.c           | 3 +--
 arch/powerpc/kernel/syscall_64.c       | 6 ++----
 arch/powerpc/kernel/vdso.c             | 3 ++-
 arch/powerpc/perf/callchain.c          | 8 +++++++-
 7 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index a2270749b282..ca6c97025704 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -162,10 +162,10 @@ static inline bool test_thread_local_flags(unsigned int flags)
 	return (ti->local_flags & flags) != 0;
 }
 
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_COMPAT
 #define is_32bit_task()	(test_thread_flag(TIF_32BIT))
 #else
-#define is_32bit_task()	(1)
+#define is_32bit_task()	(IS_ENABLED(CONFIG_PPC32))
 #endif
 
 #if defined(CONFIG_PPC64)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 5700231a8988..98a1c143b613 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -42,16 +42,16 @@ CFLAGS_btext.o += -DDISABLE_BRANCH_PROFILING
 endif
 
 obj-y				:= cputable.o ptrace.o syscalls.o \
-				   irq.o align.o signal_32.o pmc.o vdso.o \
+				   irq.o align.o signal_$(BITS).o pmc.o vdso.o \
 				   process.o systbl.o idle.o \
 				   signal.o sysfs.o cacheinfo.o time.o \
 				   prom.o traps.o setup-common.o \
 				   udbg.o misc.o io.o misc_$(BITS).o \
 				   of_platform.o prom_parse.o
-obj-$(CONFIG_PPC64)		+= setup_64.o sys_ppc32.o \
-				   signal_64.o ptrace32.o \
+obj-$(CONFIG_PPC64)		+= setup_64.o \
 				   paca.o nvram_64.o firmware.o note.o \
 				   syscall_64.o
+obj-$(CONFIG_COMPAT)		+= sys_ppc32.o ptrace32.o signal_32.o
 obj-$(CONFIG_VDSO32)		+= vdso32/
 obj-$(CONFIG_PPC_WATCHDOG)	+= watchdog.o
 obj-$(CONFIG_HAVE_HW_BREAKPOINT)	+= hw_breakpoint.o
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 4c0d0400e93d..fe1421e08f09 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -52,8 +52,10 @@
 SYS_CALL_TABLE:
 	.tc sys_call_table[TC],sys_call_table
 
+#ifdef CONFIG_COMPAT
 COMPAT_SYS_CALL_TABLE:
 	.tc compat_sys_call_table[TC],compat_sys_call_table
+#endif
 
 /* This value is used to mark exception frames on the stack. */
 exception_marker:
diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
index 4b0152108f61..a264989626fd 100644
--- a/arch/powerpc/kernel/signal.c
+++ b/arch/powerpc/kernel/signal.c
@@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk)
 	sigset_t *oldset = sigmask_to_save();
 	struct ksignal ksig = { .sig = 0 };
 	int ret;
-	int is32 = is_32bit_task();
 
 	BUG_ON(tsk != current);
 
@@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk)
 
 	rseq_signal_deliver(&ksig, tsk->thread.regs);
 
-	if (is32) {
+	if (is_32bit_task()) {
         	if (ksig.ka.sa.sa_flags & SA_SIGINFO)
 			ret = handle_rt_signal32(&ksig, oldset, tsk);
 		else
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 87d95b455b83..2dcbfe38f5ac 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
 				   long r6, long r7, long r8,
 				   unsigned long r0, struct pt_regs *regs)
 {
-	unsigned long ti_flags;
 	syscall_fn f;
 
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
@@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
 
 	local_irq_enable();
 
-	ti_flags = current_thread_info()->flags;
-	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
+	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
 		/*
 		 * We use the return value of do_syscall_trace_enter() as the
 		 * syscall number. If the syscall was rejected for any reason
@@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
 	/* May be faster to do array_index_nospec? */
 	barrier_nospec();
 
-	if (unlikely(ti_flags & _TIF_32BIT)) {
+	if (unlikely(is_32bit_task())) {
 		f = (void *)compat_sys_call_table[r0];
 
 		r3 &= 0x00000000ffffffffULL;
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index b9a108411c0d..77da3b7d304d 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -656,7 +656,8 @@ static void __init vdso_setup_syscall_map(void)
 		if (sys_call_table[i] != sys_ni_syscall)
 			vdso_data->syscall_map_64[i >> 5] |=
 				0x80000000UL >> (i & 0x1f);
-		if (compat_sys_call_table[i] != sys_ni_syscall)
+		if (IS_ENABLED(CONFIG_COMPAT) &&
+		    compat_sys_call_table[i] != sys_ni_syscall)
 			vdso_data->syscall_map_32[i >> 5] |=
 				0x80000000UL >> (i & 0x1f);
 #else /* CONFIG_PPC64 */
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 001d0473a61f..b5afd0bec4f8 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -15,7 +15,7 @@
 #include <asm/sigcontext.h>
 #include <asm/ucontext.h>
 #include <asm/vdso.h>
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_COMPAT
 #include "../kernel/ppc32.h"
 #endif
 #include <asm/pte-walk.h>
@@ -284,6 +284,7 @@ static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry
 
 #endif /* CONFIG_PPC64 */
 
+#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT)
 /*
  * On 32-bit we just access the address and let hash_page create a
  * HPTE if necessary, so there is no need to fall back to reading
@@ -447,6 +448,11 @@ static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
 		sp = next_sp;
 	}
 }
+#else /* 32bit */
+static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+				   struct pt_regs *regs)
+{}
+#endif /* 32bit */
 
 void
 perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread
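
The conversions above (the #ifdef around COMPAT_SYS_CALL_TABLE, the
IS_ENABLED(CONFIG_COMPAT) test in the vdso, and the switch from cached
is32/ti_flags values to is_32bit_task()) all rely on is_32bit_task()
collapsing to a compile-time constant when COMPAT is disabled. A minimal
sketch of the definition that makes this work, assuming it matches the
thread_info.h hunk in the diffstat (the exact form may differ):

	#ifdef CONFIG_COMPAT
	#define is_32bit_task()	(test_thread_flag(TIF_32BIT))
	#else
	#define is_32bit_task()	(IS_ENABLED(CONFIG_PPC32))
	#endif

With CONFIG_COMPAT=n on 64-bit this is constant zero, so the compiler can
drop the compat branches entirely while still type-checking them.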

* [PATCH v12 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default.
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
                     ` (4 preceding siblings ...)
  2020-03-20 10:20   ` [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 7/8] powerpc/perf: split callchain.c by bitness Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On bigendian ppc64 it is common to have 32bit legacy binaries, but much
less so on littleendian.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v3: make configurable
---
 arch/powerpc/Kconfig | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 497b7d0b2d7e..29d00b3959b9 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -264,8 +264,9 @@ config PANIC_TIMEOUT
 	default 180
 
 config COMPAT
-	bool
-	default y if PPC64
+	bool "Enable support for 32bit binaries"
+	depends on PPC64
+	default y if !CPU_LITTLE_ENDIAN
 	select COMPAT_BINFMT_ELF
 	select ARCH_WANT_OLD_COMPAT_IPC
 	select COMPAT_OLD_SIGACTION
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread
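
With COMPAT now a visible option, compat support becomes a build-time
decision instead of a hard-wired default. A minimal sketch of the C idiom
this pairs with, using a hypothetical handle_compat_work() helper that is
not part of the series:

	/* handle_compat_work() is a hypothetical helper, not from the
	 * patch. With CONFIG_COMPAT=n the condition is constant-false,
	 * so the compiler discards the call while still type-checking
	 * it, unlike an #ifdef block. */
	if (IS_ENABLED(CONFIG_COMPAT) && is_32bit_task())
		handle_compat_work();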

* [PATCH v12 7/8] powerpc/perf: split callchain.c by bitness
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
                     ` (5 preceding siblings ...)
  2020-03-20 10:20   ` [PATCH v12 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:20   ` [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
  7 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

Building callchain.c with !COMPAT proved quite ugly with all the
defines. Splitting out the 32bit and 64bit parts looks better.

No code change intended.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v6:
 - move current_is_64bit consolidation to earlier patch
 - move defines to the top of callchain_32.c
 - Makefile cleanup
v8:
 - fix valid_user_sp
v11:
 - rebase on top of def0bfdbd603
---
 arch/powerpc/perf/Makefile       |   5 +-
 arch/powerpc/perf/callchain.c    | 356 +------------------------------
 arch/powerpc/perf/callchain.h    |  19 ++
 arch/powerpc/perf/callchain_32.c | 196 +++++++++++++++++
 arch/powerpc/perf/callchain_64.c | 174 +++++++++++++++
 5 files changed, 394 insertions(+), 356 deletions(-)
 create mode 100644 arch/powerpc/perf/callchain.h
 create mode 100644 arch/powerpc/perf/callchain_32.c
 create mode 100644 arch/powerpc/perf/callchain_64.c

diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
index c155dcbb8691..53d614e98537 100644
--- a/arch/powerpc/perf/Makefile
+++ b/arch/powerpc/perf/Makefile
@@ -1,6 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
 
-obj-$(CONFIG_PERF_EVENTS)	+= callchain.o perf_regs.o
+obj-$(CONFIG_PERF_EVENTS)	+= callchain.o callchain_$(BITS).o perf_regs.o
+ifdef CONFIG_COMPAT
+obj-$(CONFIG_PERF_EVENTS)	+= callchain_32.o
+endif
 
 obj-$(CONFIG_PPC_PERF_CTRS)	+= core-book3s.o bhrb.o
 obj64-$(CONFIG_PPC_PERF_CTRS)	+= ppc970-pmu.o power5-pmu.o \
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index b5afd0bec4f8..dd5051015008 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -15,11 +15,9 @@
 #include <asm/sigcontext.h>
 #include <asm/ucontext.h>
 #include <asm/vdso.h>
-#ifdef CONFIG_COMPAT
-#include "../kernel/ppc32.h"
-#endif
 #include <asm/pte-walk.h>
 
+#include "callchain.h"
 
 /*
  * Is sp valid as the address of the next kernel stack frame after prev_sp?
@@ -102,358 +100,6 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
 	}
 }
 
-static inline bool invalid_user_sp(unsigned long sp)
-{
-	unsigned long mask = is_32bit_task() ? 3 : 7;
-	unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);
-
-	return (!sp || (sp & mask) || (sp > top));
-}
-
-#ifdef CONFIG_PPC64
-/*
- * On 64-bit we don't want to invoke hash_page on user addresses from
- * interrupt context, so if the access faults, we read the page tables
- * to find which page (if any) is mapped and access it directly.
- */
-static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
-{
-	int ret = -EFAULT;
-	pgd_t *pgdir;
-	pte_t *ptep, pte;
-	unsigned shift;
-	unsigned long addr = (unsigned long) ptr;
-	unsigned long offset;
-	unsigned long pfn, flags;
-	void *kaddr;
-
-	pgdir = current->mm->pgd;
-	if (!pgdir)
-		return -EFAULT;
-
-	local_irq_save(flags);
-	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
-	if (!ptep)
-		goto err_out;
-	if (!shift)
-		shift = PAGE_SHIFT;
-
-	/* align address to page boundary */
-	offset = addr & ((1UL << shift) - 1);
-
-	pte = READ_ONCE(*ptep);
-	if (!pte_present(pte) || !pte_user(pte))
-		goto err_out;
-	pfn = pte_pfn(pte);
-	if (!page_is_ram(pfn))
-		goto err_out;
-
-	/* no highmem to worry about here */
-	kaddr = pfn_to_kaddr(pfn);
-	memcpy(buf, kaddr + offset, nb);
-	ret = 0;
-err_out:
-	local_irq_restore(flags);
-	return ret;
-}
-
-static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
-	    ((unsigned long)ptr & 7))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 8);
-}
-
-/*
- * 64-bit user processes use the same stack frame for RT and non-RT signals.
- */
-struct signal_frame_64 {
-	char		dummy[__SIGNAL_FRAMESIZE];
-	struct ucontext	uc;
-	unsigned long	unused[2];
-	unsigned int	tramp[6];
-	struct siginfo	*pinfo;
-	void		*puc;
-	struct siginfo	info;
-	char		abigap[288];
-};
-
-static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
-{
-	if (nip == fp + offsetof(struct signal_frame_64, tramp))
-		return 1;
-	if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
-		return 1;
-	return 0;
-}
-
-/*
- * Do some sanity checking on the signal frame pointed to by sp.
- * We check the pinfo and puc pointers in the frame.
- */
-static int sane_signal_64_frame(unsigned long sp)
-{
-	struct signal_frame_64 __user *sf;
-	unsigned long pinfo, puc;
-
-	sf = (struct signal_frame_64 __user *) sp;
-	if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
-	    read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
-		return 0;
-	return pinfo == (unsigned long) &sf->info &&
-		puc == (unsigned long) &sf->uc;
-}
-
-static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{
-	unsigned long sp, next_sp;
-	unsigned long next_ip;
-	unsigned long lr;
-	long level = 0;
-	struct signal_frame_64 __user *sigframe;
-	unsigned long __user *fp, *uregs;
-
-	next_ip = perf_instruction_pointer(regs);
-	lr = regs->link;
-	sp = regs->gpr[1];
-	perf_callchain_store(entry, next_ip);
-
-	while (entry->nr < entry->max_stack) {
-		fp = (unsigned long __user *) sp;
-		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
-			return;
-		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
-			return;
-
-		/*
-		 * Note: the next_sp - sp >= signal frame size check
-		 * is true when next_sp < sp, which can happen when
-		 * transitioning from an alternate signal stack to the
-		 * normal stack.
-		 */
-		if (next_sp - sp >= sizeof(struct signal_frame_64) &&
-		    (is_sigreturn_64_address(next_ip, sp) ||
-		     (level <= 1 && is_sigreturn_64_address(lr, sp))) &&
-		    sane_signal_64_frame(sp)) {
-			/*
-			 * This looks like a signal frame
-			 */
-			sigframe = (struct signal_frame_64 __user *) sp;
-			uregs = sigframe->uc.uc_mcontext.gp_regs;
-			if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
-			    read_user_stack_64(&uregs[PT_LNK], &lr) ||
-			    read_user_stack_64(&uregs[PT_R1], &sp))
-				return;
-			level = 0;
-			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
-			perf_callchain_store(entry, next_ip);
-			continue;
-		}
-
-		if (level == 0)
-			next_ip = lr;
-		perf_callchain_store(entry, next_ip);
-		++level;
-		sp = next_sp;
-	}
-}
-
-#else  /* CONFIG_PPC64 */
-static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
-{
-	return 0;
-}
-
-static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
-					  struct pt_regs *regs)
-{
-}
-
-#define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
-#define sigcontext32		sigcontext
-#define mcontext32		mcontext
-#define ucontext32		ucontext
-#define compat_siginfo_t	struct siginfo
-
-#endif /* CONFIG_PPC64 */
-
-#if defined(CONFIG_PPC32) || defined(CONFIG_COMPAT)
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	int rc;
-
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	rc = probe_user_read(ret, ptr, sizeof(*ret));
-
-	if (IS_ENABLED(CONFIG_PPC64) && rc)
-		return read_user_stack_slow(ptr, ret, 4);
-
-	return rc;
-}
-
-/*
- * Layout for non-RT signal frames
- */
-struct signal_frame_32 {
-	char			dummy[__SIGNAL_FRAMESIZE32];
-	struct sigcontext32	sctx;
-	struct mcontext32	mctx;
-	int			abigap[56];
-};
-
-/*
- * Layout for RT signal frames
- */
-struct rt_signal_frame_32 {
-	char			dummy[__SIGNAL_FRAMESIZE32 + 16];
-	compat_siginfo_t	info;
-	struct ucontext32	uc;
-	int			abigap[56];
-};
-
-static int is_sigreturn_32_address(unsigned int nip, unsigned int fp)
-{
-	if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad))
-		return 1;
-	if (vdso32_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso32_sigtramp)
-		return 1;
-	return 0;
-}
-
-static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp)
-{
-	if (nip == fp + offsetof(struct rt_signal_frame_32,
-				 uc.uc_mcontext.mc_pad))
-		return 1;
-	if (vdso32_rt_sigtramp && current->mm->context.vdso_base &&
-	    nip == current->mm->context.vdso_base + vdso32_rt_sigtramp)
-		return 1;
-	return 0;
-}
-
-static int sane_signal_32_frame(unsigned int sp)
-{
-	struct signal_frame_32 __user *sf;
-	unsigned int regs;
-
-	sf = (struct signal_frame_32 __user *) (unsigned long) sp;
-	if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, &regs))
-		return 0;
-	return regs == (unsigned long) &sf->mctx;
-}
-
-static int sane_rt_signal_32_frame(unsigned int sp)
-{
-	struct rt_signal_frame_32 __user *sf;
-	unsigned int regs;
-
-	sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
-	if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, &regs))
-		return 0;
-	return regs == (unsigned long) &sf->uc.uc_mcontext;
-}
-
-static unsigned int __user *signal_frame_32_regs(unsigned int sp,
-				unsigned int next_sp, unsigned int next_ip)
-{
-	struct mcontext32 __user *mctx = NULL;
-	struct signal_frame_32 __user *sf;
-	struct rt_signal_frame_32 __user *rt_sf;
-
-	/*
-	 * Note: the next_sp - sp >= signal frame size check
-	 * is true when next_sp < sp, for example, when
-	 * transitioning from an alternate signal stack to the
-	 * normal stack.
-	 */
-	if (next_sp - sp >= sizeof(struct signal_frame_32) &&
-	    is_sigreturn_32_address(next_ip, sp) &&
-	    sane_signal_32_frame(sp)) {
-		sf = (struct signal_frame_32 __user *) (unsigned long) sp;
-		mctx = &sf->mctx;
-	}
-
-	if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
-	    is_rt_sigreturn_32_address(next_ip, sp) &&
-	    sane_rt_signal_32_frame(sp)) {
-		rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
-		mctx = &rt_sf->uc.uc_mcontext;
-	}
-
-	if (!mctx)
-		return NULL;
-	return mctx->mc_gregs;
-}
-
-static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{
-	unsigned int sp, next_sp;
-	unsigned int next_ip;
-	unsigned int lr;
-	long level = 0;
-	unsigned int __user *fp, *uregs;
-
-	next_ip = perf_instruction_pointer(regs);
-	lr = regs->link;
-	sp = regs->gpr[1];
-	perf_callchain_store(entry, next_ip);
-
-	while (entry->nr < entry->max_stack) {
-		fp = (unsigned int __user *) (unsigned long) sp;
-		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
-			return;
-		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
-			return;
-
-		uregs = signal_frame_32_regs(sp, next_sp, next_ip);
-		if (!uregs && level <= 1)
-			uregs = signal_frame_32_regs(sp, next_sp, lr);
-		if (uregs) {
-			/*
-			 * This looks like a signal frame, so restart
-			 * the stack trace with the values in it.
-			 */
-			if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
-			    read_user_stack_32(&uregs[PT_LNK], &lr) ||
-			    read_user_stack_32(&uregs[PT_R1], &sp))
-				return;
-			level = 0;
-			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
-			perf_callchain_store(entry, next_ip);
-			continue;
-		}
-
-		if (level == 0)
-			next_ip = lr;
-		perf_callchain_store(entry, next_ip);
-		++level;
-		sp = next_sp;
-	}
-}
-#else /* 32bit */
-static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
-				   struct pt_regs *regs)
-{}
-#endif /* 32bit */
-
 void
 perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
 {
diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
new file mode 100644
index 000000000000..7a2cb9e1181a
--- /dev/null
+++ b/arch/powerpc/perf/callchain.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _POWERPC_PERF_CALLCHAIN_H
+#define _POWERPC_PERF_CALLCHAIN_H
+
+int read_user_stack_slow(void __user *ptr, void *buf, int nb);
+void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs);
+void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs);
+
+static inline bool invalid_user_sp(unsigned long sp)
+{
+	unsigned long mask = is_32bit_task() ? 3 : 7;
+	unsigned long top = STACK_TOP - (is_32bit_task() ? 16 : 32);
+
+	return (!sp || (sp & mask) || (sp > top));
+}
+
+#endif /* _POWERPC_PERF_CALLCHAIN_H */
diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c
new file mode 100644
index 000000000000..8aa951003141
--- /dev/null
+++ b/arch/powerpc/perf/callchain_32.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance counter callchain support - powerpc architecture code
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corporation.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/perf_event.h>
+#include <linux/percpu.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/pgtable.h>
+#include <asm/sigcontext.h>
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/pte-walk.h>
+
+#include "callchain.h"
+
+#ifdef CONFIG_PPC64
+#include "../kernel/ppc32.h"
+#else  /* CONFIG_PPC64 */
+
+#define __SIGNAL_FRAMESIZE32	__SIGNAL_FRAMESIZE
+#define sigcontext32		sigcontext
+#define mcontext32		mcontext
+#define ucontext32		ucontext
+#define compat_siginfo_t	struct siginfo
+
+#endif /* CONFIG_PPC64 */
+
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
+/*
+ * Layout for non-RT signal frames
+ */
+struct signal_frame_32 {
+	char			dummy[__SIGNAL_FRAMESIZE32];
+	struct sigcontext32	sctx;
+	struct mcontext32	mctx;
+	int			abigap[56];
+};
+
+/*
+ * Layout for RT signal frames
+ */
+struct rt_signal_frame_32 {
+	char			dummy[__SIGNAL_FRAMESIZE32 + 16];
+	compat_siginfo_t	info;
+	struct ucontext32	uc;
+	int			abigap[56];
+};
+
+static int is_sigreturn_32_address(unsigned int nip, unsigned int fp)
+{
+	if (nip == fp + offsetof(struct signal_frame_32, mctx.mc_pad))
+		return 1;
+	if (vdso32_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso32_sigtramp)
+		return 1;
+	return 0;
+}
+
+static int is_rt_sigreturn_32_address(unsigned int nip, unsigned int fp)
+{
+	if (nip == fp + offsetof(struct rt_signal_frame_32,
+				 uc.uc_mcontext.mc_pad))
+		return 1;
+	if (vdso32_rt_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso32_rt_sigtramp)
+		return 1;
+	return 0;
+}
+
+static int sane_signal_32_frame(unsigned int sp)
+{
+	struct signal_frame_32 __user *sf;
+	unsigned int regs;
+
+	sf = (struct signal_frame_32 __user *) (unsigned long) sp;
+	if (read_user_stack_32((unsigned int __user *) &sf->sctx.regs, &regs))
+		return 0;
+	return regs == (unsigned long) &sf->mctx;
+}
+
+static int sane_rt_signal_32_frame(unsigned int sp)
+{
+	struct rt_signal_frame_32 __user *sf;
+	unsigned int regs;
+
+	sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
+	if (read_user_stack_32((unsigned int __user *) &sf->uc.uc_regs, &regs))
+		return 0;
+	return regs == (unsigned long) &sf->uc.uc_mcontext;
+}
+
+static unsigned int __user *signal_frame_32_regs(unsigned int sp,
+				unsigned int next_sp, unsigned int next_ip)
+{
+	struct mcontext32 __user *mctx = NULL;
+	struct signal_frame_32 __user *sf;
+	struct rt_signal_frame_32 __user *rt_sf;
+
+	/*
+	 * Note: the next_sp - sp >= signal frame size check
+	 * is true when next_sp < sp, for example, when
+	 * transitioning from an alternate signal stack to the
+	 * normal stack.
+	 */
+	if (next_sp - sp >= sizeof(struct signal_frame_32) &&
+	    is_sigreturn_32_address(next_ip, sp) &&
+	    sane_signal_32_frame(sp)) {
+		sf = (struct signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &sf->mctx;
+	}
+
+	if (!mctx && next_sp - sp >= sizeof(struct rt_signal_frame_32) &&
+	    is_rt_sigreturn_32_address(next_ip, sp) &&
+	    sane_rt_signal_32_frame(sp)) {
+		rt_sf = (struct rt_signal_frame_32 __user *) (unsigned long) sp;
+		mctx = &rt_sf->uc.uc_mcontext;
+	}
+
+	if (!mctx)
+		return NULL;
+	return mctx->mc_gregs;
+}
+
+void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned int sp, next_sp;
+	unsigned int next_ip;
+	unsigned int lr;
+	long level = 0;
+	unsigned int __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned int __user *) (unsigned long) sp;
+		if (invalid_user_sp(sp) || read_user_stack_32(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_32(&fp[1], &next_ip))
+			return;
+
+		uregs = signal_frame_32_regs(sp, next_sp, next_ip);
+		if (!uregs && level <= 1)
+			uregs = signal_frame_32_regs(sp, next_sp, lr);
+		if (uregs) {
+			/*
+			 * This looks like a signal frame, so restart
+			 * the stack trace with the values in it.
+			 */
+			if (read_user_stack_32(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_32(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_32(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
new file mode 100644
index 000000000000..df1ffd8b20f2
--- /dev/null
+++ b/arch/powerpc/perf/callchain_64.c
@@ -0,0 +1,174 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance counter callchain support - powerpc architecture code
+ *
+ * Copyright © 2009 Paul Mackerras, IBM Corporation.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/perf_event.h>
+#include <linux/percpu.h>
+#include <linux/uaccess.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/pgtable.h>
+#include <asm/sigcontext.h>
+#include <asm/ucontext.h>
+#include <asm/vdso.h>
+#include <asm/pte-walk.h>
+
+#include "callchain.h"
+
+/*
+ * On 64-bit we don't want to invoke hash_page on user addresses from
+ * interrupt context, so if the access faults, we read the page tables
+ * to find which page (if any) is mapped and access it directly.
+ */
+int read_user_stack_slow(void __user *ptr, void *buf, int nb)
+{
+	int ret = -EFAULT;
+	pgd_t *pgdir;
+	pte_t *ptep, pte;
+	unsigned int shift;
+	unsigned long addr = (unsigned long) ptr;
+	unsigned long offset;
+	unsigned long pfn, flags;
+	void *kaddr;
+
+	pgdir = current->mm->pgd;
+	if (!pgdir)
+		return -EFAULT;
+
+	local_irq_save(flags);
+	ptep = find_current_mm_pte(pgdir, addr, NULL, &shift);
+	if (!ptep)
+		goto err_out;
+	if (!shift)
+		shift = PAGE_SHIFT;
+
+	/* align address to page boundary */
+	offset = addr & ((1UL << shift) - 1);
+
+	pte = READ_ONCE(*ptep);
+	if (!pte_present(pte) || !pte_user(pte))
+		goto err_out;
+	pfn = pte_pfn(pte);
+	if (!page_is_ram(pfn))
+		goto err_out;
+
+	/* no highmem to worry about here */
+	kaddr = pfn_to_kaddr(pfn);
+	memcpy(buf, kaddr + offset, nb);
+	ret = 0;
+err_out:
+	local_irq_restore(flags);
+	return ret;
+}
+
+static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
+{
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
+	    ((unsigned long)ptr & 7))
+		return -EFAULT;
+
+	if (!probe_user_read(ret, ptr, sizeof(*ret)))
+		return 0;
+
+	return read_user_stack_slow(ptr, ret, 8);
+}
+
+/*
+ * 64-bit user processes use the same stack frame for RT and non-RT signals.
+ */
+struct signal_frame_64 {
+	char		dummy[__SIGNAL_FRAMESIZE];
+	struct ucontext	uc;
+	unsigned long	unused[2];
+	unsigned int	tramp[6];
+	struct siginfo	*pinfo;
+	void		*puc;
+	struct siginfo	info;
+	char		abigap[288];
+};
+
+static int is_sigreturn_64_address(unsigned long nip, unsigned long fp)
+{
+	if (nip == fp + offsetof(struct signal_frame_64, tramp))
+		return 1;
+	if (vdso64_rt_sigtramp && current->mm->context.vdso_base &&
+	    nip == current->mm->context.vdso_base + vdso64_rt_sigtramp)
+		return 1;
+	return 0;
+}
+
+/*
+ * Do some sanity checking on the signal frame pointed to by sp.
+ * We check the pinfo and puc pointers in the frame.
+ */
+static int sane_signal_64_frame(unsigned long sp)
+{
+	struct signal_frame_64 __user *sf;
+	unsigned long pinfo, puc;
+
+	sf = (struct signal_frame_64 __user *) sp;
+	if (read_user_stack_64((unsigned long __user *) &sf->pinfo, &pinfo) ||
+	    read_user_stack_64((unsigned long __user *) &sf->puc, &puc))
+		return 0;
+	return pinfo == (unsigned long) &sf->info &&
+		puc == (unsigned long) &sf->uc;
+}
+
+void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+			    struct pt_regs *regs)
+{
+	unsigned long sp, next_sp;
+	unsigned long next_ip;
+	unsigned long lr;
+	long level = 0;
+	struct signal_frame_64 __user *sigframe;
+	unsigned long __user *fp, *uregs;
+
+	next_ip = perf_instruction_pointer(regs);
+	lr = regs->link;
+	sp = regs->gpr[1];
+	perf_callchain_store(entry, next_ip);
+
+	while (entry->nr < entry->max_stack) {
+		fp = (unsigned long __user *) sp;
+		if (invalid_user_sp(sp) || read_user_stack_64(fp, &next_sp))
+			return;
+		if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
+			return;
+
+		/*
+		 * Note: the next_sp - sp >= signal frame size check
+		 * is true when next_sp < sp, which can happen when
+		 * transitioning from an alternate signal stack to the
+		 * normal stack.
+		 */
+		if (next_sp - sp >= sizeof(struct signal_frame_64) &&
+		    (is_sigreturn_64_address(next_ip, sp) ||
+		     (level <= 1 && is_sigreturn_64_address(lr, sp))) &&
+		    sane_signal_64_frame(sp)) {
+			/*
+			 * This looks like a signal frame
+			 */
+			sigframe = (struct signal_frame_64 __user *) sp;
+			uregs = sigframe->uc.uc_mcontext.gp_regs;
+			if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
+			    read_user_stack_64(&uregs[PT_LNK], &lr) ||
+			    read_user_stack_64(&uregs[PT_R1], &sp))
+				return;
+			level = 0;
+			perf_callchain_store_context(entry, PERF_CONTEXT_USER);
+			perf_callchain_store(entry, next_ip);
+			continue;
+		}
+
+		if (level == 0)
+			next_ip = lr;
+		perf_callchain_store(entry, next_ip);
+		++level;
+		sp = next_sp;
+	}
+}
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread
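
After the split, callchain.c keeps only the bitness-independent code and
reaches the per-bitness files through the declarations in callchain.h. A
sketch of the common entry point that remains, assuming it follows those
declarations (its body is not shown in this excerpt):

	void
	perf_callchain_user(struct perf_callchain_entry_ctx *entry,
			    struct pt_regs *regs)
	{
		if (!is_32bit_task())
			perf_callchain_user_64(entry, regs);
		else
			perf_callchain_user_32(entry, regs);
	}

No stubs are needed: on PPC32 the perf_callchain_user_64() call is
constant-false dead code and callchain_64.o is never built, while on PPC64
without COMPAT the same holds for the 32-bit side, which the Makefile hunk
above compiles only when CONFIG_COMPAT is set.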

* [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
                     ` (6 preceding siblings ...)
  2020-03-20 10:20   ` [PATCH v12 7/8] powerpc/perf: split callchain.c by bitness Michal Suchanek
@ 2020-03-20 10:20   ` Michal Suchanek
  2020-03-20 10:33     ` Andy Shevchenko
  7 siblings, 1 reply; 53+ messages in thread
From: Michal Suchanek @ 2020-03-20 10:20 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

While at it also simplify the existing perf patterns.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
v10: new patch
V12: remove redundant entries
---
 MAINTAINERS | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index e1a99197fb34..578429d22220 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13080,7 +13080,7 @@ R:	Namhyung Kim <namhyung@kernel.org>
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
 S:	Supported
-F:	kernel/events/*
+F:	kernel/events/
 F:	include/linux/perf_event.h
 F:	include/uapi/linux/perf_event.h
 F:	arch/*/kernel/perf_event*.c
@@ -13088,8 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
 F:	arch/*/kernel/*/*/perf_event*.c
 F:	arch/*/include/asm/perf_event.h
 F:	arch/*/kernel/perf_callchain.c
-F:	arch/*/events/*
-F:	arch/*/events/*/*
+F:	arch/*/events/
+F:	arch/*/perf/
 F:	tools/perf/
 
 PERFORMANCE EVENTS SUBSYSTEM ARM64 PMU EVENTS
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread
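
For reference, the trailing slash matters to get_maintainer.pl; quoting the
file-pattern notes at the top of MAINTAINERS from memory (the exact wording
there may differ):

	F:	kernel/events/		all files in and below kernel/events
	F:	kernel/events/*		all files in kernel/events, but not below

So the single kernel/events/ pattern subsumes the old kernel/events/*
entry, and the new arch/*/perf/ pattern is what picks up files like
arch/powerpc/perf/callchain*.c from the earlier patches in this series.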

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 10:20   ` [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
@ 2020-03-20 10:33     ` Andy Shevchenko
  2020-03-20 11:23       ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-20 10:33 UTC (permalink / raw)
  To: Michal Suchanek
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> While at it also simplify the existing perf patterns.
> 

And still missed fixes from parse-maintainers.pl.

I see it like below in the linux-next (after the script)

PERFORMANCE EVENTS SUBSYSTEM
M:      Peter Zijlstra <peterz@infradead.org>
M:      Ingo Molnar <mingo@redhat.com>
M:      Arnaldo Carvalho de Melo <acme@kernel.org>
R:      Mark Rutland <mark.rutland@arm.com>
R:      Alexander Shishkin <alexander.shishkin@linux.intel.com>
R:      Jiri Olsa <jolsa@redhat.com>
R:      Namhyung Kim <namhyung@kernel.org>
L:      linux-kernel@vger.kernel.org
S:      Supported
T:      git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
F:      arch/*/events/*
F:      arch/*/events/*/*
F:      arch/*/include/asm/perf_event.h
F:      arch/*/kernel/*/*/perf_event*.c
F:      arch/*/kernel/*/perf_event*.c
F:      arch/*/kernel/perf_callchain.c
F:      arch/*/kernel/perf_event*.c
F:      include/linux/perf_event.h
F:      include/uapi/linux/perf_event.h
F:      kernel/events/*
F:      tools/perf/

> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -13080,7 +13080,7 @@ R:	Namhyung Kim <namhyung@kernel.org>
>  L:	linux-kernel@vger.kernel.org
>  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
>  S:	Supported
> -F:	kernel/events/*
> +F:	kernel/events/
>  F:	include/linux/perf_event.h
>  F:	include/uapi/linux/perf_event.h
>  F:	arch/*/kernel/perf_event*.c
> @@ -13088,8 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
>  F:	arch/*/kernel/*/*/perf_event*.c
>  F:	arch/*/include/asm/perf_event.h
>  F:	arch/*/kernel/perf_callchain.c
> -F:	arch/*/events/*
> -F:	arch/*/events/*/*
> +F:	arch/*/events/
> +F:	arch/*/perf/
>  F:	tools/perf/
>  
>  PERFORMANCE EVENTS SUBSYSTEM ARM64 PMU EVENTS

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 10:33     ` Andy Shevchenko
@ 2020-03-20 11:23       ` Michal Suchánek
  2020-03-20 12:42         ` Andy Shevchenko
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-03-20 11:23 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > While at it also simplify the existing perf patterns.
> > 
> 
> And still missed fixes from parse-maintainers.pl.

Oh, that script UX is truly ingenious. It provides no output and quietly
creates MAINTAINERS.new which is, of course, not included in the patch.

Thanks

Michal

> 
> I see it like below in the linux-next (after the script)
> 
> PERFORMANCE EVENTS SUBSYSTEM
> M:      Peter Zijlstra <peterz@infradead.org>
> M:      Ingo Molnar <mingo@redhat.com>
> M:      Arnaldo Carvalho de Melo <acme@kernel.org>
> R:      Mark Rutland <mark.rutland@arm.com>
> R:      Alexander Shishkin <alexander.shishkin@linux.intel.com>
> R:      Jiri Olsa <jolsa@redhat.com>
> R:      Namhyung Kim <namhyung@kernel.org>
> L:      linux-kernel@vger.kernel.org
> S:      Supported
> T:      git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
> F:      arch/*/events/*
> F:      arch/*/events/*/*
> F:      arch/*/include/asm/perf_event.h
> F:      arch/*/kernel/*/*/perf_event*.c
> F:      arch/*/kernel/*/perf_event*.c
> F:      arch/*/kernel/perf_callchain.c
> F:      arch/*/kernel/perf_event*.c
> F:      include/linux/perf_event.h
> F:      include/uapi/linux/perf_event.h
> F:      kernel/events/*
> F:      tools/perf/
> 
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -13080,7 +13080,7 @@ R:	Namhyung Kim <namhyung@kernel.org>
> >  L:	linux-kernel@vger.kernel.org
> >  T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
> >  S:	Supported
> > -F:	kernel/events/*
> > +F:	kernel/events/
> >  F:	include/linux/perf_event.h
> >  F:	include/uapi/linux/perf_event.h
> >  F:	arch/*/kernel/perf_event*.c
> > @@ -13088,8 +13088,8 @@ F:	arch/*/kernel/*/perf_event*.c
> >  F:	arch/*/kernel/*/*/perf_event*.c
> >  F:	arch/*/include/asm/perf_event.h
> >  F:	arch/*/kernel/perf_callchain.c
> > -F:	arch/*/events/*
> > -F:	arch/*/events/*/*
> > +F:	arch/*/events/
> > +F:	arch/*/perf/
> >  F:	tools/perf/
> >  
> >  PERFORMANCE EVENTS SUBSYSTEM ARM64 PMU EVENTS
> 
> -- 
> With Best Regards,
> Andy Shevchenko
> 
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 11:23       ` Michal Suchánek
@ 2020-03-20 12:42         ` Andy Shevchenko
  2020-03-20 14:42           ` Joe Perches
  0 siblings, 1 reply; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-20 12:42 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > While at it also simplify the existing perf patterns.

> > And still missed fixes from parse-maintainers.pl.
> 
> Oh, that script UX is truly ingenious.

You have at least two options, their combinations, etc:
 - complain to the author :-)
 - send a patch :-)

> It provides no output and quietly
> creates MAINTAINERS.new which is, of course, not included in the patch.

Yes, it also took me a while to understand how it works; luckily it has a
little help note.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 12:42         ` Andy Shevchenko
@ 2020-03-20 14:42           ` Joe Perches
  2020-03-20 16:28             ` Michal Suchánek
  2020-03-20 16:31             ` Andy Shevchenko
  0 siblings, 2 replies; 53+ messages in thread
From: Joe Perches @ 2020-03-20 14:42 UTC (permalink / raw)
  To: Andy Shevchenko, Michal Suchánek
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > While at it also simplify the existing perf patterns.
> > > And still missed fixes from parse-maintainers.pl.
> > 
> > Oh, that script UX is truly ingenious.
> 
> You have at least two options, their combinations, etc:
>  - complain to the author :-)
>  - send a patch :-)

Recently:

https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 14:42           ` Joe Perches
@ 2020-03-20 16:28             ` Michal Suchánek
  2020-03-20 16:31             ` Andy Shevchenko
  1 sibling, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-03-20 16:28 UTC (permalink / raw)
  To: Joe Perches
  Cc: Andy Shevchenko, Mark Rutland, Gustavo Luiz Duarte,
	Peter Zijlstra, Sebastian Andrzej Siewior, linux-kernel,
	Paul Mackerras, Jiri Olsa, Rob Herring, Michael Neuling,
	Mauro Carvalho Chehab, Masahiro Yamada, Nayna Jain,
	Alexander Shishkin, Ingo Molnar, Allison Randal, Jordan Niethe,
	Valentin Schneider, Arnd Bergmann, Arnaldo Carvalho de Melo,
	Alexander Viro, Jonathan Cameron, Namhyung Kim, Thomas Gleixner,
	Hari Bathini, Greg Kroah-Hartman, Nicholas Piggin,
	Claudio Carvalho, Eric Richter, Eric W. Biederman, linux-fsdevel,
	linuxppc-dev, David S. Miller, Thiago Jung Bauermann

On Fri, Mar 20, 2020 at 07:42:03AM -0700, Joe Perches wrote:
> On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> > On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > > While at it also simplify the existing perf patterns.
> > > > And still missed fixes from parse-maintainers.pl.
> > > 
> > > Oh, that script UX is truly ingenious.
> > 
> > You have at least two options, their combinations, etc:
> >  - complain to the author :-)
> >  - send a patch :-)
> 
> Recently:
> 
> https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/

Can we expect that the reordering is taken care of in that discussion then?

Thanks

Michal

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 14:42           ` Joe Perches
  2020-03-20 16:28             ` Michal Suchánek
@ 2020-03-20 16:31             ` Andy Shevchenko
  2020-03-20 16:42               ` Michal Suchánek
  2020-03-20 21:36               ` Joe Perches
  1 sibling, 2 replies; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-20 16:31 UTC (permalink / raw)
  To: Joe Perches
  Cc: Michal Suchánek, linuxppc-dev, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 07:42:03AM -0700, Joe Perches wrote:
> On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> > On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > > While at it also simplify the existing perf patterns.
> > > > And still missed fixes from parse-maintainers.pl.
> > > 
> > > Oh, that script UX is truly ingenious.
> > 
> > You have at least two options, their combinations, etc:
> >  - complain to the author :-)
> >  - send a patch :-)
> 
> Recently:
> 
> https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/

But why?

Shouldn't we rather run MAINTAINERS clean up once and require people to use
parse-maintainers.pl for good?

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 16:31             ` Andy Shevchenko
@ 2020-03-20 16:42               ` Michal Suchánek
  2020-03-20 16:47                 ` Andy Shevchenko
  2020-03-20 21:36               ` Joe Perches
  1 sibling, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-03-20 16:42 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Joe Perches, linuxppc-dev, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 06:31:57PM +0200, Andy Shevchenko wrote:
> On Fri, Mar 20, 2020 at 07:42:03AM -0700, Joe Perches wrote:
> > On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> > > On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > > > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > > > While at it also simplify the existing perf patterns.
> > > > > And still missed fixes from parse-maintainers.pl.
> > > > 
> > > > Oh, that script UX is truly ingenious.
> > > 
> > > You have at least two options, their combinations, etc:
> > >  - complain to the author :-)
> > >  - send a patch :-)
> > 
> > Recently:
> > 
> > https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/
> 
> But why?
> 
> Shouldn't we rather run MAINTAINERS clean up once and require people to use
> parse-maintainers.pl for good?

That cleanup did not happen yet, and I am not volunteering for one.
The difference between MAINTAINERS and MAINTAINERS.new is:

 MAINTAINERS | 5510 +++++++++++++++++++++++++++++------------------------------
 1 file changed, 2755 insertions(+), 2755 deletions(-)

Thanks

Michal

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 16:42               ` Michal Suchánek
@ 2020-03-20 16:47                 ` Andy Shevchenko
  0 siblings, 0 replies; 53+ messages in thread
From: Andy Shevchenko @ 2020-03-20 16:47 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: Joe Perches, linuxppc-dev, Benjamin Herrenschmidt,
	Paul Mackerras, Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Christophe Leroy, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Fri, Mar 20, 2020 at 05:42:04PM +0100, Michal Suchánek wrote:
> On Fri, Mar 20, 2020 at 06:31:57PM +0200, Andy Shevchenko wrote:
> > On Fri, Mar 20, 2020 at 07:42:03AM -0700, Joe Perches wrote:
> > > On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> > > > On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > > > > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > > > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > > > > While at it also simplify the existing perf patterns.
> > > > > > And still missed fixes from parse-maintainers.pl.
> > > > > 
> > > > > Oh, that script UX is truly ingenious.
> > > > 
> > > > You have at least two options, their combinations, etc:
> > > >  - complain to the author :-)
> > > >  - send a patch :-)
> > > 
> > > Recently:
> > > 
> > > https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/
> > 
> > But why?
> > 
> > Shouldn't we rather run MAINTAINERS clean up once and require people to use
> > parse-maintainers.pl for good?
> 
> That cleanup did not happen yet, and I am not volunteering for one.
> The difference between MAINTAINERS and MAINTAINERS.new is:
> 
>  MAINTAINERS | 5510 +++++++++++++++++++++++++++++------------------------------
>  1 file changed, 2755 insertions(+), 2755 deletions(-)

Yes, it was basically a reply to Joe.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry.
  2020-03-20 16:31             ` Andy Shevchenko
  2020-03-20 16:42               ` Michal Suchánek
@ 2020-03-20 21:36               ` Joe Perches
  1 sibling, 0 replies; 53+ messages in thread
From: Joe Perches @ 2020-03-20 21:36 UTC (permalink / raw)
  To: Andy Shevchenko, Linus Torvalds
  Cc: Michal Suchánek, linuxppc-dev, linux-kernel, linux-fsdevel

(removed a bunch of cc's)

On Fri, 2020-03-20 at 18:31 +0200, Andy Shevchenko wrote:
> On Fri, Mar 20, 2020 at 07:42:03AM -0700, Joe Perches wrote:
> > On Fri, 2020-03-20 at 14:42 +0200, Andy Shevchenko wrote:
> > > On Fri, Mar 20, 2020 at 12:23:38PM +0100, Michal Suchánek wrote:
> > > > On Fri, Mar 20, 2020 at 12:33:50PM +0200, Andy Shevchenko wrote:
> > > > > On Fri, Mar 20, 2020 at 11:20:19AM +0100, Michal Suchanek wrote:
> > > > > > While at it also simplify the existing perf patterns.
> > > > > And still missed fixes from parse-maintainers.pl.
> > > > 
> > > > Oh, that script UX is truly ingenious.
> > > 
> > > You have at least two options, their combinations, etc:
> > >  - complain to the author :-)
> > >  - send a patch :-)
> > 
> > Recently:
> > 
> > https://lore.kernel.org/lkml/4d5291fa3fb4962b1fa55e8fd9ef421ef0c1b1e5.camel@perches.com/
> 
> But why?
> 
> Shouldn't we rather run MAINTAINERS clean up once and require people to use
> parse-maintainers.pl for good?

That can basically only be done by Linus just before he releases
an RC1.

I am for it.  One day...



^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-03-19 12:19   ` [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
@ 2020-03-24  8:48     ` Nicholas Piggin
  2020-03-24 19:38       ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Nicholas Piggin @ 2020-03-24  8:48 UTC (permalink / raw)
  To: linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, Mark Rutland,
	Masahiro Yamada, Mauro Carvalho Chehab, Michael Neuling,
	Ingo Molnar, Michael Ellerman, Namhyung Kim, Nayna Jain,
	Paul Mackerras, Peter Zijlstra, Rob Herring, Thomas Gleixner,
	Valentin Schneider, Alexander Viro

Michal Suchanek's on March 19, 2020 10:19 pm:
> There are two almost identical copies for 32bit and 64bit.
> 
> The function is used only in 32bit code which will be split out in next
> patch so consolidate to one function.
> 
> Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
> v6:  new patch
> v8:  move the consolidated function out of the ifdef block.
> v11: rebase on top of def0bfdbd603
> ---
>  arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
>  1 file changed, 24 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index cbc251981209..c9a78c6e4361 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
>  	return read_user_stack_slow(ptr, ret, 8);
>  }
>  
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> -{
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> -		return -EFAULT;
> -
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> -		return 0;
> -
> -	return read_user_stack_slow(ptr, ret, 4);
> -}
> -
>  static inline int valid_user_sp(unsigned long sp, int is_64)
>  {
>  	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
> @@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>  }
>  
>  #else  /* CONFIG_PPC64 */
> -/*
> - * On 32-bit we just access the address and let hash_page create a
> - * HPTE if necessary, so there is no need to fall back to reading
> - * the page tables.  Since this is called at interrupt level,
> - * do_page_fault() won't treat a DSI as a page fault.
> - */
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> -		return -EFAULT;
> -
> -	return probe_user_read(ret, ptr, sizeof(*ret));
> +	return 0;
>  }
>  
>  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> @@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
>  
>  #endif /* CONFIG_PPC64 */
>  
> +/*
> + * On 32-bit we just access the address and let hash_page create a
> + * HPTE if necessary, so there is no need to fall back to reading
> + * the page tables.  Since this is called at interrupt level,
> + * do_page_fault() won't treat a DSI as a page fault.
> + */

The comment is probably better kept in the 32-bit
read_user_stack_slow() implementation. Is that function defined
on 32-bit purely so that you can use IS_ENABLED()? In that case
I would prefer to put a BUG() there, which makes it self-documenting.

Thanks,
Nick

> +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +{
> +	int rc;
> +
> +	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> +	    ((unsigned long)ptr & 3))
> +		return -EFAULT;
> +
> +	rc = probe_user_read(ret, ptr, sizeof(*ret));
> +
> +	if (IS_ENABLED(CONFIG_PPC64) && rc)
> +		return read_user_stack_slow(ptr, ret, 4);
> +
> +	return rc;
> +}
> +
>  /*
>   * Layout for non-RT signal frames
>   */
> -- 
> 2.23.0
> 
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread
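
A sketch of the stub Nick is suggesting here, not code from the series: on
32-bit the slow path is unreachable because its only caller guards it with
IS_ENABLED(CONFIG_PPC64), so a BUG() documents that invariant directly:

	/* 32-bit stub; unreachable, the only caller is behind
	 * IS_ENABLED(CONFIG_PPC64) */
	static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
	{
		BUG();
		return 0;
	}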

* Re: [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-03-19 12:19   ` [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
@ 2020-03-24  8:54     ` Nicholas Piggin
  2020-03-24 19:30       ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Nicholas Piggin @ 2020-03-24  8:54 UTC (permalink / raw)
  To: linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, Mark Rutland,
	Masahiro Yamada, Mauro Carvalho Chehab, Michael Neuling,
	Ingo Molnar, Michael Ellerman, Namhyung Kim, Nayna Jain,
	Paul Mackerras, Peter Zijlstra, Rob Herring, Thomas Gleixner,
	Valentin Schneider, Alexander Viro

Michal Suchanek's on March 19, 2020 10:19 pm:
> diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
> index 4b0152108f61..a264989626fd 100644
> --- a/arch/powerpc/kernel/signal.c
> +++ b/arch/powerpc/kernel/signal.c
> @@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk)
>  	sigset_t *oldset = sigmask_to_save();
>  	struct ksignal ksig = { .sig = 0 };
>  	int ret;
> -	int is32 = is_32bit_task();
>  
>  	BUG_ON(tsk != current);
>  
> @@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk)
>  
>  	rseq_signal_deliver(&ksig, tsk->thread.regs);
>  
> -	if (is32) {
> +	if (is_32bit_task()) {
>          	if (ksig.ka.sa.sa_flags & SA_SIGINFO)
>  			ret = handle_rt_signal32(&ksig, oldset, tsk);
>  		else

Unnecessary?

> diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
> index 87d95b455b83..2dcbfe38f5ac 100644
> --- a/arch/powerpc/kernel/syscall_64.c
> +++ b/arch/powerpc/kernel/syscall_64.c
> @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
>  				   long r6, long r7, long r8,
>  				   unsigned long r0, struct pt_regs *regs)
>  {
> -	unsigned long ti_flags;
>  	syscall_fn f;
>  
>  	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>  
>  	local_irq_enable();
>  
> -	ti_flags = current_thread_info()->flags;
> -	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
> +	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
>  		/*
>  		 * We use the return value of do_syscall_trace_enter() as the
>  		 * syscall number. If the syscall was rejected for any reason
> @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>  	/* May be faster to do array_index_nospec? */
>  	barrier_nospec();
>  
> -	if (unlikely(ti_flags & _TIF_32BIT)) {
> +	if (unlikely(is_32bit_task())) {

Problem is, does this allow the load of ti_flags to be used for both
tests, or does test_bit make it re-load?

This could maybe be fixed by testing if(IS_ENABLED(CONFIG_COMPAT) &&

Other than these, the patches all look pretty good to me.

Thanks,
Nick

* Re: [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-03-24  8:54     ` Nicholas Piggin
@ 2020-03-24 19:30       ` Michal Suchánek
  2020-04-03  7:16         ` Nicholas Piggin
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-03-24 19:30 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: linuxppc-dev, Mark Rutland, Gustavo Luiz Duarte,
	Alexander Shishkin, Sebastian Andrzej Siewior, linux-kernel,
	Paul Mackerras, Jiri Olsa, Rob Herring, Michael Neuling,
	Eric Richter, Masahiro Yamada, Nayna Jain, Peter Zijlstra,
	Ingo Molnar, Hari Bathini, Jordan Niethe, Valentin Schneider,
	Arnd Bergmann, Arnaldo Carvalho de Melo, Alexander Viro,
	Jonathan Cameron, Namhyung Kim, Thomas Gleixner, Andy Shevchenko,
	Allison Randal, Greg Kroah-Hartman, Claudio Carvalho,
	Mauro Carvalho Chehab, Eric W. Biederman, linux-fsdevel,
	David S. Miller, Thiago Jung Bauermann

On Tue, Mar 24, 2020 at 06:54:20PM +1000, Nicholas Piggin wrote:
> Michal Suchanek's on March 19, 2020 10:19 pm:
> > diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
> > index 4b0152108f61..a264989626fd 100644
> > --- a/arch/powerpc/kernel/signal.c
> > +++ b/arch/powerpc/kernel/signal.c
> > @@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk)
> >  	sigset_t *oldset = sigmask_to_save();
> >  	struct ksignal ksig = { .sig = 0 };
> >  	int ret;
> > -	int is32 = is_32bit_task();
> >  
> >  	BUG_ON(tsk != current);
> >  
> > @@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk)
> >  
> >  	rseq_signal_deliver(&ksig, tsk->thread.regs);
> >  
> > -	if (is32) {
> > +	if (is_32bit_task()) {
> >          	if (ksig.ka.sa.sa_flags & SA_SIGINFO)
> >  			ret = handle_rt_signal32(&ksig, oldset, tsk);
> >  		else
> 
> Unnecessary?
> 
> > diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
> > index 87d95b455b83..2dcbfe38f5ac 100644
> > --- a/arch/powerpc/kernel/syscall_64.c
> > +++ b/arch/powerpc/kernel/syscall_64.c
> > @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >  				   long r6, long r7, long r8,
> >  				   unsigned long r0, struct pt_regs *regs)
> >  {
> > -	unsigned long ti_flags;
> >  	syscall_fn f;
> >  
> >  	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> > @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >  
> >  	local_irq_enable();
> >  
> > -	ti_flags = current_thread_info()->flags;
> > -	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
> > +	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
> >  		/*
> >  		 * We use the return value of do_syscall_trace_enter() as the
> >  		 * syscall number. If the syscall was rejected for any reason
> > @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >  	/* May be faster to do array_index_nospec? */
> >  	barrier_nospec();
> >  
> > -	if (unlikely(ti_flags & _TIF_32BIT)) {
> > +	if (unlikely(is_32bit_task())) {
> 
> Problem is, does this allow the load of ti_flags to be used for both
> tests, or does test_bit make it re-load?
> 
> This could maybe be fixed by testing if(IS_ENABLED(CONFIG_COMPAT) &&
Both points already discussed here:

https://lore.kernel.org/linuxppc-dev/13fa324dc879a7f325290bf2e131b87eb491cd7b.1573576649.git.msuchanek@suse.de/

Thanks

Michal

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-03-24  8:48     ` Nicholas Piggin
@ 2020-03-24 19:38       ` Michal Suchánek
  2020-04-03  7:13         ` Nicholas Piggin
  0 siblings, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-03-24 19:38 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: linuxppc-dev, Arnaldo Carvalho de Melo, Alexander Shishkin,
	Allison Randal, Andy Shevchenko, Arnd Bergmann,
	Thiago Jung Bauermann, Benjamin Herrenschmidt,
	Sebastian Andrzej Siewior, Claudio Carvalho, Christophe Leroy,
	David S. Miller, Eric W. Biederman, Eric Richter,
	Greg Kroah-Hartman, Gustavo Luiz Duarte, Hari Bathini,
	Jordan Niethe, Jiri Olsa, Jonathan Cameron, linux-fsdevel,
	linux-kernel, Mark Rutland, Masahiro Yamada,
	Mauro Carvalho Chehab, Michael Neuling, Ingo Molnar,
	Michael Ellerman, Namhyung Kim, Nayna Jain, Paul Mackerras,
	Peter Zijlstra, Rob Herring, Thomas Gleixner, Valentin Schneider,
	Alexander Viro

On Tue, Mar 24, 2020 at 06:48:20PM +1000, Nicholas Piggin wrote:
> Michal Suchanek's on March 19, 2020 10:19 pm:
> > There are two almost identical copies for 32bit and 64bit.
> > 
> > The function is used only in 32bit code which will be split out in next
> > patch so consolidate to one function.
> > 
> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> > Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> > ---
> > v6:  new patch
> > v8:  move the consolidated function out of the ifdef block.
> > v11: rebase on top of def0bfdbd603
> > ---
> >  arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
> >  1 file changed, 24 insertions(+), 24 deletions(-)
> > 
> > diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> > index cbc251981209..c9a78c6e4361 100644
> > --- a/arch/powerpc/perf/callchain.c
> > +++ b/arch/powerpc/perf/callchain.c
> > @@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> >  	return read_user_stack_slow(ptr, ret, 8);
> >  }
> >  
> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> > -{
> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> > -	    ((unsigned long)ptr & 3))
> > -		return -EFAULT;
> > -
> > -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> > -		return 0;
> > -
> > -	return read_user_stack_slow(ptr, ret, 4);
> > -}
> > -
> >  static inline int valid_user_sp(unsigned long sp, int is_64)
> >  {
> >  	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
> > @@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >  }
> >  
> >  #else  /* CONFIG_PPC64 */
> > -/*
> > - * On 32-bit we just access the address and let hash_page create a
> > - * HPTE if necessary, so there is no need to fall back to reading
> > - * the page tables.  Since this is called at interrupt level,
> > - * do_page_fault() won't treat a DSI as a page fault.
> > - */
> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> > +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> >  {
> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> > -	    ((unsigned long)ptr & 3))
> > -		return -EFAULT;
> > -
> > -	return probe_user_read(ret, ptr, sizeof(*ret));
> > +	return 0;
> >  }
> >  
> >  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> > @@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
> >  
> >  #endif /* CONFIG_PPC64 */
> >  
> > +/*
> > + * On 32-bit we just access the address and let hash_page create a
> > + * HPTE if necessary, so there is no need to fall back to reading
> > + * the page tables.  Since this is called at interrupt level,
> > + * do_page_fault() won't treat a DSI as a page fault.
> > + */
> 
> The comment is actually probably better to stay in the 32-bit
> read_user_stack_slow implementation. Is that function defined
> on 32-bit purely so that you can use IS_ENABLED()? In that case
It documents the IS_ENABLED() and that's where it is. The 32bit
definition is only a technical detail.
> I would prefer to put a BUG() there which makes it self documenting.
Which will cause checkpatch complaints about introducing new BUG() which
is frowned on.

Thanks

Michal

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-03-24 19:38       ` Michal Suchánek
@ 2020-04-03  7:13         ` Nicholas Piggin
  2020-04-03 10:52           ` Michal Suchánek
                             ` (2 more replies)
  0 siblings, 3 replies; 53+ messages in thread
From: Nicholas Piggin @ 2020-04-03  7:13 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Michael Ellerman, Namhyung Kim,
	Nayna Jain, Paul Mackerras, Peter Zijlstra, Rob Herring,
	Thomas Gleixner, Valentin Schneider, Alexander Viro

Michal Suchánek's on March 25, 2020 5:38 am:
> On Tue, Mar 24, 2020 at 06:48:20PM +1000, Nicholas Piggin wrote:
>> Michal Suchanek's on March 19, 2020 10:19 pm:
>> > There are two almost identical copies for 32bit and 64bit.
>> > 
>> > The function is used only in 32bit code which will be split out in next
>> > patch so consolidate to one function.
>> > 
>> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
>> > Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>> > ---
>> > v6:  new patch
>> > v8:  move the consolidated function out of the ifdef block.
>> > v11: rebase on top of def0bfdbd603
>> > ---
>> >  arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
>> >  1 file changed, 24 insertions(+), 24 deletions(-)
>> > 
>> > diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
>> > index cbc251981209..c9a78c6e4361 100644
>> > --- a/arch/powerpc/perf/callchain.c
>> > +++ b/arch/powerpc/perf/callchain.c
>> > @@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
>> >  	return read_user_stack_slow(ptr, ret, 8);
>> >  }
>> >  
>> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
>> > -{
>> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
>> > -	    ((unsigned long)ptr & 3))
>> > -		return -EFAULT;
>> > -
>> > -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
>> > -		return 0;
>> > -
>> > -	return read_user_stack_slow(ptr, ret, 4);
>> > -}
>> > -
>> >  static inline int valid_user_sp(unsigned long sp, int is_64)
>> >  {
>> >  	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
>> > @@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>> >  }
>> >  
>> >  #else  /* CONFIG_PPC64 */
>> > -/*
>> > - * On 32-bit we just access the address and let hash_page create a
>> > - * HPTE if necessary, so there is no need to fall back to reading
>> > - * the page tables.  Since this is called at interrupt level,
>> > - * do_page_fault() won't treat a DSI as a page fault.
>> > - */
>> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
>> > +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
>> >  {
>> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
>> > -	    ((unsigned long)ptr & 3))
>> > -		return -EFAULT;
>> > -
>> > -	return probe_user_read(ret, ptr, sizeof(*ret));
>> > +	return 0;
>> >  }
>> >  
>> >  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>> > @@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
>> >  
>> >  #endif /* CONFIG_PPC64 */
>> >  
>> > +/*
>> > + * On 32-bit we just access the address and let hash_page create a
>> > + * HPTE if necessary, so there is no need to fall back to reading
>> > + * the page tables.  Since this is called at interrupt level,
>> > + * do_page_fault() won't treat a DSI as a page fault.
>> > + */
>> 
>> The comment is actually probably better to stay in the 32-bit
>> read_user_stack_slow implementation. Is that function defined
>> on 32-bit purely so that you can use IS_ENABLED()? In that case
> It documents the IS_ENABLED() and that's where it is. The 32bit
> definition is only a technical detail.

Sorry for the late reply, busy trying to fix bugs in the C rewrite
series. I don't think it is the right place, it should be in the
ppc32 implementation detail. ppc64 has an equivalent comment at the
top of its read_user_stack functions.

>> I would prefer to put a BUG() there which makes it self documenting.
> Which will cause checkpatch complaints about introducing new BUG() which
> is frowned on.

It's fine in this case, that warning is about not introducing
runtime bugs, but this wouldn't be.
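
Something like this for the 32-bit stub is what I have in mind (a sketch
only, assuming the stub is kept at all):

	static int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
	{
		BUG();		/* never reached on 32-bit, see above */
		return -EFAULT;
	}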

But... I actually don't like adding read_user_stack_slow on 32-bit
and especially not just to make IS_ENABLED work.

IMO this would be better if you really want to consolidate it

---

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..ca3a599b3f54 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -108,7 +108,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
  * interrupt context, so if the access faults, we read the page tables
  * to find which page (if any) is mapped and access it directly.
  */
-static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
+static int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
 {
 	int ret = -EFAULT;
 	pgd_t *pgdir;
@@ -149,28 +149,21 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	return ret;
 }
 
-static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
+static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
-	    ((unsigned long)ptr & 7))
+	if ((unsigned long)ptr > TASK_SIZE - size ||
+	    ((unsigned long)ptr & (size - 1)))
 		return -EFAULT;
 
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
+	if (!probe_user_read(ret, ptr, size))
 		return 0;
 
-	return read_user_stack_slow(ptr, ret, 8);
+	return read_user_stack_slow(ptr, ret, size);
 }
 
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 4);
+	return __read_user_stack(ptr, ret, sizeof(*ret));
 }
 
 static inline int valid_user_sp(unsigned long sp, int is_64)
@@ -283,13 +276,13 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
  * the page tables.  Since this is called at interrupt level,
  * do_page_fault() won't treat a DSI as a page fault.
  */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
+	if ((unsigned long)ptr > TASK_SIZE - size ||
+	    ((unsigned long)ptr & (size - 1)))
 		return -EFAULT;
 
-	return probe_user_read(ret, ptr, sizeof(*ret));
+	return probe_user_read(ret, ptr, size);
 }
 
 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
@@ -312,6 +305,11 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
 
 #endif /* CONFIG_PPC64 */
 
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	return __read_user_stack(ptr, ret, sizeof(*ret));
+}
+
 /*
  * Layout for non-RT signal frames
  */

* Re: [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-03-24 19:30       ` Michal Suchánek
@ 2020-04-03  7:16         ` Nicholas Piggin
  0 siblings, 0 replies; 53+ messages in thread
From: Nicholas Piggin @ 2020-04-03  7:16 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Sebastian Andrzej Siewior, Claudio Carvalho, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Namhyung Kim, Nayna Jain,
	Paul Mackerras, Peter Zijlstra, Rob Herring, Thomas Gleixner,
	Valentin Schneider, Alexander Viro

Michal Suchánek's on March 25, 2020 5:30 am:
> On Tue, Mar 24, 2020 at 06:54:20PM +1000, Nicholas Piggin wrote:
>> Michal Suchanek's on March 19, 2020 10:19 pm:
>> > diff --git a/arch/powerpc/kernel/signal.c b/arch/powerpc/kernel/signal.c
>> > index 4b0152108f61..a264989626fd 100644
>> > --- a/arch/powerpc/kernel/signal.c
>> > +++ b/arch/powerpc/kernel/signal.c
>> > @@ -247,7 +247,6 @@ static void do_signal(struct task_struct *tsk)
>> >  	sigset_t *oldset = sigmask_to_save();
>> >  	struct ksignal ksig = { .sig = 0 };
>> >  	int ret;
>> > -	int is32 = is_32bit_task();
>> >  
>> >  	BUG_ON(tsk != current);
>> >  
>> > @@ -277,7 +276,7 @@ static void do_signal(struct task_struct *tsk)
>> >  
>> >  	rseq_signal_deliver(&ksig, tsk->thread.regs);
>> >  
>> > -	if (is32) {
>> > +	if (is_32bit_task()) {
>> >          	if (ksig.ka.sa.sa_flags & SA_SIGINFO)
>> >  			ret = handle_rt_signal32(&ksig, oldset, tsk);
>> >  		else
>> 
>> Unnecessary?
>> 
>> > diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
>> > index 87d95b455b83..2dcbfe38f5ac 100644
>> > --- a/arch/powerpc/kernel/syscall_64.c
>> > +++ b/arch/powerpc/kernel/syscall_64.c
>> > @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
>> >  				   long r6, long r7, long r8,
>> >  				   unsigned long r0, struct pt_regs *regs)
>> >  {
>> > -	unsigned long ti_flags;
>> >  	syscall_fn f;
>> >  
>> >  	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
>> > @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>> >  
>> >  	local_irq_enable();
>> >  
>> > -	ti_flags = current_thread_info()->flags;
>> > -	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
>> > +	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
>> >  		/*
>> >  		 * We use the return value of do_syscall_trace_enter() as the
>> >  		 * syscall number. If the syscall was rejected for any reason
>> > @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>> >  	/* May be faster to do array_index_nospec? */
>> >  	barrier_nospec();
>> >  
>> > -	if (unlikely(ti_flags & _TIF_32BIT)) {
>> > +	if (unlikely(is_32bit_task())) {
>> 
>> Problem is, does this allow the load of ti_flags to be used for both
>> tests, or does test_bit make it re-load?
>> 
>> This could maybe be fixed by testing if(IS_ENABLED(CONFIG_COMPAT) &&
> Both points already discussed here:

Agh, I'm hopeless.

I don't think it really resolves this issue. But I probably don't have
time to look at the generated asm, and might never, because it won't
really hit LE unless we add a 32-bit ABI. It's pretty minor either way.

Sorry for being difficult, I really do like your patches :)

Thanks,
Nick

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-03-19 12:19 ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
                     ` (8 preceding siblings ...)
  2020-03-19 12:36   ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Christophe Leroy
@ 2020-04-03  7:25   ` Nicholas Piggin
  2020-04-03  7:26     ` Christophe Leroy
  9 siblings, 1 reply; 53+ messages in thread
From: Nicholas Piggin @ 2020-04-03  7:25 UTC (permalink / raw)
  To: linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, Mark Rutland,
	Masahiro Yamada, Mauro Carvalho Chehab, Michael Neuling,
	Ingo Molnar, Michael Ellerman, Namhyung Kim, Nayna Jain,
	Paul Mackerras, Peter Zijlstra, Rob Herring, Thomas Gleixner,
	Valentin Schneider, Alexander Viro

Michal Suchanek's on March 19, 2020 10:19 pm:
> Less code means less bugs so add a knob to skip the compat stuff.
> 
> Changes in v2: saner CONFIG_COMPAT ifdefs
> Changes in v3:
>  - change llseek to 32bit instead of builing it unconditionally in fs
>  - clanup the makefile conditionals
>  - remove some ifdefs or convert to IS_DEFINED where possible
> Changes in v4:
>  - cleanup is_32bit_task and current_is_64bit
>  - more makefile cleanup
> Changes in v5:
>  - more current_is_64bit cleanup
>  - split off callchain.c 32bit and 64bit parts
> Changes in v6:
>  - cleanup makefile after split
>  - consolidate read_user_stack_32
>  - fix some checkpatch warnings
> Changes in v7:
>  - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
>  - remove leftover hunk
>  - add review tags
> Changes in v8:
>  - consolidate valid_user_sp to fix it in the split callchain.c
>  - fix build errors/warnings with PPC64 !COMPAT and PPC32
> Changes in v9:
>  - remove current_is_64bit()
> Chanegs in v10:
>  - rebase, sent together with the syscall cleanup
> Changes in v11:
>  - rebase
>  - add MAINTAINERS pattern for ppc perf

These all look good to me. I had some minor comment about one patch but 
not really a big deal and there were more cleanups on top of it, so I 
don't mind if it's merged as is.

Actually I think we have a bit of stack reading fixes for 64s radix now
(not a bug fix as such, but we don't need the hash fault logic in radix),
so if I get around to that I can propose the changes in that series.

Thanks,
Nick

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-04-03  7:25   ` Nicholas Piggin
@ 2020-04-03  7:26     ` Christophe Leroy
  2020-04-03  9:43       ` Nicholas Piggin
  0 siblings, 1 reply; 53+ messages in thread
From: Christophe Leroy @ 2020-04-03  7:26 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, David S. Miller, Eric W. Biederman,
	Eric Richter, Greg Kroah-Hartman, Gustavo Luiz Duarte,
	Hari Bathini, Jordan Niethe, Jiri Olsa, Jonathan Cameron,
	linux-fsdevel, linux-kernel, Mark Rutland, Masahiro Yamada,
	Mauro Carvalho Chehab, Michael Neuling, Ingo Molnar,
	Michael Ellerman, Namhyung Kim, Nayna Jain, Paul Mackerras,
	Peter Zijlstra, Rob Herring, Thomas Gleixner, Valentin Schneider,
	Alexander Viro



On 03/04/2020 at 09:25, Nicholas Piggin wrote:
> Michal Suchanek's on March 19, 2020 10:19 pm:
>> Less code means less bugs so add a knob to skip the compat stuff.
>>
>> Changes in v2: saner CONFIG_COMPAT ifdefs
>> Changes in v3:
>>   - change llseek to 32bit instead of builing it unconditionally in fs
>>   - clanup the makefile conditionals
>>   - remove some ifdefs or convert to IS_DEFINED where possible
>> Changes in v4:
>>   - cleanup is_32bit_task and current_is_64bit
>>   - more makefile cleanup
>> Changes in v5:
>>   - more current_is_64bit cleanup
>>   - split off callchain.c 32bit and 64bit parts
>> Changes in v6:
>>   - cleanup makefile after split
>>   - consolidate read_user_stack_32
>>   - fix some checkpatch warnings
>> Changes in v7:
>>   - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
>>   - remove leftover hunk
>>   - add review tags
>> Changes in v8:
>>   - consolidate valid_user_sp to fix it in the split callchain.c
>>   - fix build errors/warnings with PPC64 !COMPAT and PPC32
>> Changes in v9:
>>   - remove current_is_64bit()
>> Chanegs in v10:
>>   - rebase, sent together with the syscall cleanup
>> Changes in v11:
>>   - rebase
>>   - add MAINTAINERS pattern for ppc perf
> 
> These all look good to me. I had some minor comment about one patch but
> not really a big deal and there were more cleanups on top of it, so I
> don't mind if it's merged as is.
> 
> Actually I think we have a bit of stack reading fixes for 64s radix now
> (not a bug fix as such, but we don't need the hash fault logic in radix),
> so if I get around to that I can propose the changes in that series.
> 

As far as I can see, there is a v12

Christophe

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-04-03  7:26     ` Christophe Leroy
@ 2020-04-03  9:43       ` Nicholas Piggin
  2020-04-05  0:40         ` Michael Ellerman
  0 siblings, 1 reply; 53+ messages in thread
From: Nicholas Piggin @ 2020-04-03  9:43 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, David S. Miller, Eric W. Biederman,
	Eric Richter, Greg Kroah-Hartman, Gustavo Luiz Duarte,
	Hari Bathini, Jordan Niethe, Jiri Olsa, Jonathan Cameron,
	linux-fsdevel, linux-kernel, Mark Rutland, Masahiro Yamada,
	Mauro Carvalho Chehab, Michael Neuling, Ingo Molnar,
	Michael Ellerman, Namhyung Kim, Nayna Jain, Paul Mackerras,
	Peter Zijlstra, Rob Herring, Thomas Gleixner, Valentin Schneider,
	Alexander Viro

Christophe Leroy's on April 3, 2020 5:26 pm:
> 
> 
> On 03/04/2020 at 09:25, Nicholas Piggin wrote:
>> Michal Suchanek's on March 19, 2020 10:19 pm:
>>> Less code means less bugs so add a knob to skip the compat stuff.
>>>
>>> Changes in v2: saner CONFIG_COMPAT ifdefs
>>> Changes in v3:
>>>   - change llseek to 32bit instead of builing it unconditionally in fs
>>>   - clanup the makefile conditionals
>>>   - remove some ifdefs or convert to IS_DEFINED where possible
>>> Changes in v4:
>>>   - cleanup is_32bit_task and current_is_64bit
>>>   - more makefile cleanup
>>> Changes in v5:
>>>   - more current_is_64bit cleanup
>>>   - split off callchain.c 32bit and 64bit parts
>>> Changes in v6:
>>>   - cleanup makefile after split
>>>   - consolidate read_user_stack_32
>>>   - fix some checkpatch warnings
>>> Changes in v7:
>>>   - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
>>>   - remove leftover hunk
>>>   - add review tags
>>> Changes in v8:
>>>   - consolidate valid_user_sp to fix it in the split callchain.c
>>>   - fix build errors/warnings with PPC64 !COMPAT and PPC32
>>> Changes in v9:
>>>   - remove current_is_64bit()
>>> Chanegs in v10:
>>>   - rebase, sent together with the syscall cleanup
>>> Changes in v11:
>>>   - rebase
>>>   - add MAINTAINERS pattern for ppc perf
>> 
>> These all look good to me. I had some minor comment about one patch but
>> not really a big deal and there were more cleanups on top of it, so I
>> don't mind if it's merged as is.
>> 
>> Actually I think we have a bit of stack reading fixes for 64s radix now
>> (not a bug fix as such, but we don't need the hash fault logic in radix),
>> so if I get around to that I can propose the changes in that series.
>> 
> 
> As far as I can see, there is a v12

For the most part I was looking at the patches in mpe's next-test
tree on github, if that's the v12 series, same comment applies but
it's a pretty small nitpick.

Thanks,
Nick

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-04-03  7:13         ` Nicholas Piggin
@ 2020-04-03 10:52           ` Michal Suchánek
  2020-04-03 11:26             ` Nicholas Piggin
  2020-04-06 20:52           ` Michal Suchánek
  2020-04-06 21:00           ` [PATCH] powerpcs: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32 Michal Suchanek
  2 siblings, 1 reply; 53+ messages in thread
From: Michal Suchánek @ 2020-04-03 10:52 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Michael Ellerman, Namhyung Kim,
	Nayna Jain, Paul Mackerras, Peter Zijlstra, Rob Herring,
	Thomas Gleixner, Valentin Schneider, Alexander Viro

Hello,

there are 3 variants of the function

read_user_stack_64

32bit read_user_stack_32
64bit read_user_stack_32

On Fri, Apr 03, 2020 at 05:13:25PM +1000, Nicholas Piggin wrote:
> Michal Suchánek's on March 25, 2020 5:38 am:
> > On Tue, Mar 24, 2020 at 06:48:20PM +1000, Nicholas Piggin wrote:
> >> Michal Suchanek's on March 19, 2020 10:19 pm:
> >> > There are two almost identical copies for 32bit and 64bit.
> >> > 
> >> > The function is used only in 32bit code which will be split out in next
> >> > patch so consolidate to one function.
> >> > 
> >> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> >> > Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> >> > ---
> >> > v6:  new patch
> >> > v8:  move the consolidated function out of the ifdef block.
> >> > v11: rebase on top of def0bfdbd603
> >> > ---
> >> >  arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
> >> >  1 file changed, 24 insertions(+), 24 deletions(-)
> >> > 
> >> > diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> >> > index cbc251981209..c9a78c6e4361 100644
> >> > --- a/arch/powerpc/perf/callchain.c
> >> > +++ b/arch/powerpc/perf/callchain.c
> >> > @@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> >> >  	return read_user_stack_slow(ptr, ret, 8);
> >> >  }
> >> >  
> >> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> >> > -{
> >> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> >> > -	    ((unsigned long)ptr & 3))
> >> > -		return -EFAULT;
> >> > -
> >> > -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> >> > -		return 0;
> >> > -
> >> > -	return read_user_stack_slow(ptr, ret, 4);
> >> > -}
> >> > -
> >> >  static inline int valid_user_sp(unsigned long sp, int is_64)
> >> >  {
> >> >  	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
> >> > @@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >> >  }
> >> >  
> >> >  #else  /* CONFIG_PPC64 */
> >> > -/*
> >> > - * On 32-bit we just access the address and let hash_page create a
> >> > - * HPTE if necessary, so there is no need to fall back to reading
> >> > - * the page tables.  Since this is called at interrupt level,
> >> > - * do_page_fault() won't treat a DSI as a page fault.
> >> > - */
> >> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> >> > +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> >> >  {
> >> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> >> > -	    ((unsigned long)ptr & 3))
> >> > -		return -EFAULT;
> >> > -
> >> > -	return probe_user_read(ret, ptr, sizeof(*ret));
> >> > +	return 0;
> >> >  }
> >> >  
> >> >  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >> > @@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
> >> >  
> >> >  #endif /* CONFIG_PPC64 */
> >> >  
> >> > +/*
> >> > + * On 32-bit we just access the address and let hash_page create a
> >> > + * HPTE if necessary, so there is no need to fall back to reading
> >> > + * the page tables.  Since this is called at interrupt level,
> >> > + * do_page_fault() won't treat a DSI as a page fault.
> >> > + */
> >> 
> >> The comment is actually probably better to stay in the 32-bit
> >> read_user_stack_slow implementation. Is that function defined
> >> on 32-bit purely so that you can use IS_ENABLED()? In that case
> > It documents the IS_ENABLED() and that's where it is. The 32bit
> > definition is only a technical detail.
> 
> Sorry for the late reply, busy trying to fix bugs in the C rewrite
> series. I don't think it is the right place, it should be in the
> ppc32 implementation detail. ppc64 has an equivalent comment at the
> top of its read_user_stack functions.
> 
> >> I would prefer to put a BUG() there which makes it self documenting.
> > Which will cause checkpatch complaints about introducing new BUG() which
> > is frowned on.
> 
> It's fine in this case, that warning is about not introducing
> runtime bugs, but this wouldn't be.
> 
> But... I actually don't like adding read_user_stack_slow on 32-bit
> and especially not just to make IS_ENABLED work.
> 
> IMO this would be better if you really want to consolidate it
> 
> ---
> 
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index cbc251981209..ca3a599b3f54 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -108,7 +108,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
>   * interrupt context, so if the access faults, we read the page tables
>   * to find which page (if any) is mapped and access it directly.
>   */
> -static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> +static int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
>  {
>  	int ret = -EFAULT;
>  	pgd_t *pgdir;
> @@ -149,28 +149,21 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
>  	return ret;
>  }
>  
> -static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> +static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
> -	    ((unsigned long)ptr & 7))
> +	if ((unsigned long)ptr > TASK_SIZE - size ||
> +	    ((unsigned long)ptr & (size - 1)))
>  		return -EFAULT;
>  
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> +	if (!probe_user_read(ret, ptr, size))
>  		return 0;
>  
> -	return read_user_stack_slow(ptr, ret, 8);
> +	return read_user_stack_slow(ptr, ret, size);
>  }
>  
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> -		return -EFAULT;
> -
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> -		return 0;
> -
> -	return read_user_stack_slow(ptr, ret, 4);
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
>  }
>  
>  static inline int valid_user_sp(unsigned long sp, int is_64)
> @@ -283,13 +276,13 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>   * the page tables.  Since this is called at interrupt level,
>   * do_page_fault() won't treat a DSI as a page fault.
>   */
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> +	if ((unsigned long)ptr > TASK_SIZE - size ||
> +	    ((unsigned long)ptr & (size - 1)))
>  		return -EFAULT;
>  
> -	return probe_user_read(ret, ptr, sizeof(*ret));
> +	return probe_user_read(ret, ptr, size);
>  }
>  
>  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> @@ -312,6 +305,11 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
>  
>  #endif /* CONFIG_PPC64 */
>  
> +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +{
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
Does not work for 64bit read_user_stack_32 ^ this should be 4.

Other than that it should preserve the existing logic just fine.

Thanks

Michal

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-04-03 10:52           ` Michal Suchánek
@ 2020-04-03 11:26             ` Nicholas Piggin
  2020-04-03 11:51               ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Nicholas Piggin @ 2020-04-03 11:26 UTC (permalink / raw)
  To: Michal Suchánek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Michael Ellerman, Namhyung Kim,
	Nayna Jain, Paul Mackerras, Peter Zijlstra, Rob Herring,
	Thomas Gleixner, Valentin Schneider, Alexander Viro

Michal Suchánek's on April 3, 2020 8:52 pm:
> Hello,
> 
> there are 3 variants of the function
> 
> read_user_stack_64
> 
> 32bit read_user_stack_32
> 64bit read_user_stack_32

Right.

> On Fri, Apr 03, 2020 at 05:13:25PM +1000, Nicholas Piggin wrote:
[...]
>>  #endif /* CONFIG_PPC64 */
>>  
>> +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
>> +{
>> +	return __read_user_stack(ptr, ret, sizeof(*ret));
> Does not work for 64bit read_user_stack_32 ^ this should be 4.
> 
> Other than that it should preserve the existing logic just fine.

sizeof(int) == 4 on 64bit so it should work.
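
If you want that assumption spelled out, a sketch (the patch does not
strictly need it; BUILD_BUG_ON comes from <linux/build_bug.h>):

	static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
	{
		BUILD_BUG_ON(sizeof(*ret) != 4);	/* LP64: int stays 32-bit */
		return __read_user_stack(ptr, ret, sizeof(*ret));
	}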

Thanks,
Nick

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-04-03 11:26             ` Nicholas Piggin
@ 2020-04-03 11:51               ` Michal Suchánek
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-04-03 11:51 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Michael Ellerman, Namhyung Kim,
	Nayna Jain, Paul Mackerras, Peter Zijlstra, Rob Herring,
	Thomas Gleixner, Valentin Schneider, Alexander Viro

On Fri, Apr 03, 2020 at 09:26:27PM +1000, Nicholas Piggin wrote:
> Michal Suchánek's on April 3, 2020 8:52 pm:
> > Hello,
> > 
> > there are 3 variants of the function
> > 
> > read_user_stack_64
> > 
> > 32bit read_user_stack_32
> > 64bit read_user_stack_32
> 
> Right.
> 
> > On Fri, Apr 03, 2020 at 05:13:25PM +1000, Nicholas Piggin wrote:
> [...]
> >>  #endif /* CONFIG_PPC64 */
> >>  
> >> +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> >> +{
> >> +	return __read_user_stack(ptr, ret, sizeof(*ret));
> > Does not work for 64bit read_user_stack_32 ^ this should be 4.
> > 
> > Other than that it should preserve the existing logic just fine.
> 
> sizeof(int) == 4 on 64bit so it should work.
> 
Right, the type is different for the 32bit and 64bit versions.
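
For reference, the wrapper pointer types are what pin the access width:

	/* sizeof(*ret) == 4 on both ppc32 and ppc64 (LP64 keeps int 32-bit) */
	static int read_user_stack_32(const unsigned int __user *ptr, unsigned int *ret);

	/* sizeof(*ret) == 8, ppc64 only */
	static int read_user_stack_64(const unsigned long __user *ptr, unsigned long *ret);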

Thanks

Michal

* Re: [PATCH v11 0/8] Disable compat cruft on ppc64le v11
  2020-04-03  9:43       ` Nicholas Piggin
@ 2020-04-05  0:40         ` Michael Ellerman
  0 siblings, 0 replies; 53+ messages in thread
From: Michael Ellerman @ 2020-04-05  0:40 UTC (permalink / raw)
  To: Nicholas Piggin, Christophe Leroy, linuxppc-dev, Michal Suchanek
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, David S. Miller, Eric W. Biederman,
	Eric Richter, Greg Kroah-Hartman, Gustavo Luiz Duarte,
	Hari Bathini, Jordan Niethe, Jiri Olsa, Jonathan Cameron,
	linux-fsdevel, linux-kernel, Mark Rutland, Masahiro Yamada,
	Mauro Carvalho Chehab, Michael Neuling, Ingo Molnar,
	Namhyung Kim, Nayna Jain, Paul Mackerras, Peter Zijlstra,
	Rob Herring, Thomas Gleixner, Valentin Schneider, Alexander Viro

Nicholas Piggin <npiggin@gmail.com> writes:
> Christophe Leroy's on April 3, 2020 5:26 pm:
>> On 03/04/2020 at 09:25, Nicholas Piggin wrote:
>>> Michal Suchanek's on March 19, 2020 10:19 pm:
>>>> Less code means less bugs so add a knob to skip the compat stuff.
>>>>
>>>> Changes in v2: saner CONFIG_COMPAT ifdefs
>>>> Changes in v3:
>>>>   - change llseek to 32bit instead of builing it unconditionally in fs
>>>>   - clanup the makefile conditionals
>>>>   - remove some ifdefs or convert to IS_DEFINED where possible
>>>> Changes in v4:
>>>>   - cleanup is_32bit_task and current_is_64bit
>>>>   - more makefile cleanup
>>>> Changes in v5:
>>>>   - more current_is_64bit cleanup
>>>>   - split off callchain.c 32bit and 64bit parts
>>>> Changes in v6:
>>>>   - cleanup makefile after split
>>>>   - consolidate read_user_stack_32
>>>>   - fix some checkpatch warnings
>>>> Changes in v7:
>>>>   - add back __ARCH_WANT_SYS_LLSEEK to fix build with llseek
>>>>   - remove leftover hunk
>>>>   - add review tags
>>>> Changes in v8:
>>>>   - consolidate valid_user_sp to fix it in the split callchain.c
>>>>   - fix build errors/warnings with PPC64 !COMPAT and PPC32
>>>> Changes in v9:
>>>>   - remove current_is_64bit()
>>>> Chanegs in v10:
>>>>   - rebase, sent together with the syscall cleanup
>>>> Changes in v11:
>>>>   - rebase
>>>>   - add MAINTAINERS pattern for ppc perf
>>> 
>>> These all look good to me. I had some minor comment about one patch but
>>> not really a big deal and there were more cleanups on top of it, so I
>>> don't mind if it's merged as is.
>>> 
>>> Actually I think we have a bit of stack reading fixes for 64s radix now
>>> (not a bug fix as such, but we don't need the hash fault logic in radix),
>>> so if I get around to that I can propose the changes in that series.
>>> 
>> 
>> As far as I can see, there is a v12
>
> For the most part I was looking at the patches in mpe's next-test
> tree on github, if that's the v12 series, same comment applies but
> it's a pretty small nitpick.

Yeah I have v12 in my tree.

This has floated around long enough (our fault), so I'm going to take it
and we can fix anything up later.

cheers

* Re: [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-04-03  7:13         ` Nicholas Piggin
  2020-04-03 10:52           ` Michal Suchánek
@ 2020-04-06 20:52           ` Michal Suchánek
  2020-04-06 21:00           ` [PATCH] powerpcs: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32 Michal Suchanek
  2 siblings, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-04-06 20:52 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: Arnaldo Carvalho de Melo, Alexander Shishkin, Allison Randal,
	Andy Shevchenko, Arnd Bergmann, Thiago Jung Bauermann,
	Benjamin Herrenschmidt, Sebastian Andrzej Siewior,
	Claudio Carvalho, Christophe Leroy, David S. Miller,
	Eric W. Biederman, Eric Richter, Greg Kroah-Hartman,
	Gustavo Luiz Duarte, Hari Bathini, Jordan Niethe, Jiri Olsa,
	Jonathan Cameron, linux-fsdevel, linux-kernel, linuxppc-dev,
	Mark Rutland, Masahiro Yamada, Mauro Carvalho Chehab,
	Michael Neuling, Ingo Molnar, Michael Ellerman, Namhyung Kim,
	Nayna Jain, Paul Mackerras, Peter Zijlstra, Rob Herring,
	Thomas Gleixner, Valentin Schneider, Alexander Viro

On Fri, Apr 03, 2020 at 05:13:25PM +1000, Nicholas Piggin wrote:
> Michal Suchánek's on March 25, 2020 5:38 am:
> > On Tue, Mar 24, 2020 at 06:48:20PM +1000, Nicholas Piggin wrote:
> >> Michal Suchanek's on March 19, 2020 10:19 pm:
> >> > There are two almost identical copies for 32bit and 64bit.
> >> > 
> >> > The function is used only in 32bit code which will be split out in next
> >> > patch so consolidate to one function.
> >> > 
> >> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> >> > Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> >> > ---
> >> > v6:  new patch
> >> > v8:  move the consolidated function out of the ifdef block.
> >> > v11: rebase on top of def0bfdbd603
> >> > ---
> >> >  arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
> >> >  1 file changed, 24 insertions(+), 24 deletions(-)
> >> > 
> >> > diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> >> > index cbc251981209..c9a78c6e4361 100644
> >> > --- a/arch/powerpc/perf/callchain.c
> >> > +++ b/arch/powerpc/perf/callchain.c
> >> > @@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> >> >  	return read_user_stack_slow(ptr, ret, 8);
> >> >  }
> >> >  
> >> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> >> > -{
> >> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> >> > -	    ((unsigned long)ptr & 3))
> >> > -		return -EFAULT;
> >> > -
> >> > -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> >> > -		return 0;
> >> > -
> >> > -	return read_user_stack_slow(ptr, ret, 4);
> >> > -}
> >> > -
> >> >  static inline int valid_user_sp(unsigned long sp, int is_64)
> >> >  {
> >> >  	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
> >> > @@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >> >  }
> >> >  
> >> >  #else  /* CONFIG_PPC64 */
> >> > -/*
> >> > - * On 32-bit we just access the address and let hash_page create a
> >> > - * HPTE if necessary, so there is no need to fall back to reading
> >> > - * the page tables.  Since this is called at interrupt level,
> >> > - * do_page_fault() won't treat a DSI as a page fault.
> >> > - */
> >> > -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> >> > +static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> >> >  {
> >> > -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> >> > -	    ((unsigned long)ptr & 3))
> >> > -		return -EFAULT;
> >> > -
> >> > -	return probe_user_read(ret, ptr, sizeof(*ret));
> >> > +	return 0;
> >> >  }
> >> >  
> >> >  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >> > @@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
> >> >  
> >> >  #endif /* CONFIG_PPC64 */
> >> >  
> >> > +/*
> >> > + * On 32-bit we just access the address and let hash_page create a
> >> > + * HPTE if necessary, so there is no need to fall back to reading
> >> > + * the page tables.  Since this is called at interrupt level,
> >> > + * do_page_fault() won't treat a DSI as a page fault.
> >> > + */
> >> 
> >> The comment is actually probably better to stay in the 32-bit
> >> read_user_stack_slow implementation. Is that function defined
> >> on 32-bit purely so that you can use IS_ENABLED()? In that case
> > It documents the IS_ENABLED() and that's where it is. The 32bit
> > definition is only a technical detail.
> 
> Sorry for the late reply, busy trying to fix bugs in the C rewrite
> series. I don't think it is the right place, it should be in the
> ppc32 implementation detail.
Which does not exist anymore after the 32bit and 64bit parts are split.
> ppc64 has an equivalent comment at the top of its read_user_stack functions.
> 
> >> I would prefer to put a BUG() there which makes it self documenting.
> > Which will cause checkpatch complaints about introducing new BUG() which
> > is frowned on.
> 
> It's fine in this case, that warning is about not introducing
> runtime bugs, but this wouldn't be.
> 
> But... I actually don't like adding read_user_stack_slow on 32-bit
> and especially not just to make IS_ENABLED work.
That's to avoid breaking the build at this point. The function is removed later.
> 
> IMO this would be better if you really want to consolidate it
> 
> ---
> 
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index cbc251981209..ca3a599b3f54 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -108,7 +108,7 @@ perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *re
>   * interrupt context, so if the access faults, we read the page tables
>   * to find which page (if any) is mapped and access it directly.
>   */
> -static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> +static int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
>  {
>  	int ret = -EFAULT;
>  	pgd_t *pgdir;
> @@ -149,28 +149,21 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
>  	return ret;
>  }
>  
> -static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> +static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
> -	    ((unsigned long)ptr & 7))
> +	if ((unsigned long)ptr > TASK_SIZE - size ||
> +	    ((unsigned long)ptr & (size - 1)))
>  		return -EFAULT;
>  
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> +	if (!probe_user_read(ret, ptr, size))
>  		return 0;
>  
> -	return read_user_stack_slow(ptr, ret, 8);
> +	return read_user_stack_slow(ptr, ret, size);
>  }
>  
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> -		return -EFAULT;
> -
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> -		return 0;
> -
> -	return read_user_stack_slow(ptr, ret, 4);
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
>  }
>  
>  static inline int valid_user_sp(unsigned long sp, int is_64)
> @@ -283,13 +276,13 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>   * the page tables.  Since this is called at interrupt level,
>   * do_page_fault() won't treat a DSI as a page fault.
>   */
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int __read_user_stack(const void __user *ptr, void *ret, size_t size)
>  {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> +	if ((unsigned long)ptr > TASK_SIZE - size ||
> +	    ((unsigned long)ptr & (size - 1)))
>  		return -EFAULT;
>  
> -	return probe_user_read(ret, ptr, sizeof(*ret));
> +	return probe_user_read(ret, ptr, size);
>  }
>  
>  static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> @@ -312,6 +305,11 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
>  
>  #endif /* CONFIG_PPC64 */
>  
> +static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +{
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
> +}
> +
>  /*
>   * Layout for non-RT signal frames
>   */

* [PATCH] powerpcs: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32
  2020-04-03  7:13         ` Nicholas Piggin
  2020-04-03 10:52           ` Michal Suchánek
  2020-04-06 20:52           ` Michal Suchánek
@ 2020-04-06 21:00           ` Michal Suchanek
  2020-04-07  5:21             ` Christophe Leroy
  2 siblings, 1 reply; 53+ messages in thread
From: Michal Suchanek @ 2020-04-06 21:00 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin
  Cc: Michal Suchanek, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, linux-kernel,
	Christophe Leroy

perf_callchain_user_64 and perf_callchain_user_32 are nearly identical.
Consolidate into one function with thin wrappers.

Suggested-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/powerpc/perf/callchain.h    | 24 +++++++++++++++++++++++-
 arch/powerpc/perf/callchain_32.c | 21 ++-------------------
 arch/powerpc/perf/callchain_64.c | 14 ++++----------
 3 files changed, 29 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
index 7a2cb9e1181a..7540bb71cb60 100644
--- a/arch/powerpc/perf/callchain.h
+++ b/arch/powerpc/perf/callchain.h
@@ -2,7 +2,7 @@
 #ifndef _POWERPC_PERF_CALLCHAIN_H
 #define _POWERPC_PERF_CALLCHAIN_H
 
-int read_user_stack_slow(void __user *ptr, void *buf, int nb);
+int read_user_stack_slow(const void __user *ptr, void *buf, int nb);
 void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 			    struct pt_regs *regs);
 void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
@@ -16,4 +16,26 @@ static inline bool invalid_user_sp(unsigned long sp)
 	return (!sp || (sp & mask) || (sp > top));
 }
 
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static inline int __read_user_stack(const void __user *ptr, void *ret,
+				    size_t size)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - size ||
+			((unsigned long)ptr & (size - 1)))
+		return -EFAULT;
+	rc = probe_user_read(ret, ptr, size);
+
+	if (rc && IS_ENABLED(CONFIG_PPC64))
+		return read_user_stack_slow(ptr, ret, size);
+
+	return rc;
+}
+
 #endif /* _POWERPC_PERF_CALLCHAIN_H */
diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c
index 8aa951003141..1b4621f177e8 100644
--- a/arch/powerpc/perf/callchain_32.c
+++ b/arch/powerpc/perf/callchain_32.c
@@ -31,26 +31,9 @@
 
 #endif /* CONFIG_PPC64 */
 
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_32(const unsigned int __user *ptr, unsigned int *ret)
 {
-	int rc;
-
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	rc = probe_user_read(ret, ptr, sizeof(*ret));
-
-	if (IS_ENABLED(CONFIG_PPC64) && rc)
-		return read_user_stack_slow(ptr, ret, 4);
-
-	return rc;
+	return __read_user_stack(ptr, ret, sizeof(*ret));
 }
 
 /*
diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
index df1ffd8b20f2..55bbc25a54ed 100644
--- a/arch/powerpc/perf/callchain_64.c
+++ b/arch/powerpc/perf/callchain_64.c
@@ -24,7 +24,7 @@
  * interrupt context, so if the access faults, we read the page tables
  * to find which page (if any) is mapped and access it directly.
  */
-int read_user_stack_slow(void __user *ptr, void *buf, int nb)
+int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
 {
 	int ret = -EFAULT;
 	pgd_t *pgdir;
@@ -65,16 +65,10 @@ int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 	return ret;
 }
 
-static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
+static int read_user_stack_64(const unsigned long __user *ptr,
+			      unsigned long *ret)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
-	    ((unsigned long)ptr & 7))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 8);
+	return __read_user_stack(ptr, ret, sizeof(*ret));
 }
 
 /*
-- 
2.23.0


* Re: [PATCH] powerpcs: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32
  2020-04-06 21:00           ` [PATCH] powerpcs: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32 Michal Suchanek
@ 2020-04-07  5:21             ` Christophe Leroy
  2020-04-09 11:22               ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Christophe Leroy @ 2020-04-07  5:21 UTC (permalink / raw)
  To: Michal Suchanek, linuxppc-dev, Nicholas Piggin
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
	linux-kernel



On 06/04/2020 at 23:00, Michal Suchanek wrote:
> perf_callchain_user_64 and perf_callchain_user_32 are nearly identical.
> Consolidate into one function with thin wrappers.
> 
> Suggested-by: Nicholas Piggin <npiggin@gmail.com>
> Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> ---
>   arch/powerpc/perf/callchain.h    | 24 +++++++++++++++++++++++-
>   arch/powerpc/perf/callchain_32.c | 21 ++-------------------
>   arch/powerpc/perf/callchain_64.c | 14 ++++----------
>   3 files changed, 29 insertions(+), 30 deletions(-)
> 
> diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
> index 7a2cb9e1181a..7540bb71cb60 100644
> --- a/arch/powerpc/perf/callchain.h
> +++ b/arch/powerpc/perf/callchain.h
> @@ -2,7 +2,7 @@
>   #ifndef _POWERPC_PERF_CALLCHAIN_H
>   #define _POWERPC_PERF_CALLCHAIN_H
>   
> -int read_user_stack_slow(void __user *ptr, void *buf, int nb);
> +int read_user_stack_slow(const void __user *ptr, void *buf, int nb);

Does the constification of ptr have to be in this patch?
Wouldn't it be better to have it as a separate patch?

>   void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
>   			    struct pt_regs *regs);
>   void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
> @@ -16,4 +16,26 @@ static inline bool invalid_user_sp(unsigned long sp)
>   	return (!sp || (sp & mask) || (sp > top));
>   }
>   
> +/*
> + * On 32-bit we just access the address and let hash_page create a
> + * HPTE if necessary, so there is no need to fall back to reading
> + * the page tables.  Since this is called at interrupt level,
> + * do_page_fault() won't treat a DSI as a page fault.
> + */
> +static inline int __read_user_stack(const void __user *ptr, void *ret,
> +				    size_t size)
> +{
> +	int rc;
> +
> +	if ((unsigned long)ptr > TASK_SIZE - size ||
> +			((unsigned long)ptr & (size - 1)))
> +		return -EFAULT;
> +	rc = probe_user_read(ret, ptr, size);
> +
> +	if (rc && IS_ENABLED(CONFIG_PPC64))

gcc is probably smart enough to deal with it efficiently, but it would
be more correct to test rc after checking CONFIG_PPC64.
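
Something like this (untested) would make that explicit; it is the same
form patch 3/8 of the series already uses:

	if (IS_ENABLED(CONFIG_PPC64) && rc)
		return read_user_stack_slow(ptr, ret, size);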

> +		return read_user_stack_slow(ptr, ret, size);
> +
> +	return rc;
> +}
> +
>   #endif /* _POWERPC_PERF_CALLCHAIN_H */
> diff --git a/arch/powerpc/perf/callchain_32.c b/arch/powerpc/perf/callchain_32.c
> index 8aa951003141..1b4621f177e8 100644
> --- a/arch/powerpc/perf/callchain_32.c
> +++ b/arch/powerpc/perf/callchain_32.c
> @@ -31,26 +31,9 @@
>   
>   #endif /* CONFIG_PPC64 */
>   
> -/*
> - * On 32-bit we just access the address and let hash_page create a
> - * HPTE if necessary, so there is no need to fall back to reading
> - * the page tables.  Since this is called at interrupt level,
> - * do_page_fault() won't treat a DSI as a page fault.
> - */
> -static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
> +static int read_user_stack_32(const unsigned int __user *ptr, unsigned int *ret)
>   {
> -	int rc;
> -
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
> -	    ((unsigned long)ptr & 3))
> -		return -EFAULT;
> -
> -	rc = probe_user_read(ret, ptr, sizeof(*ret));
> -
> -	if (IS_ENABLED(CONFIG_PPC64) && rc)
> -		return read_user_stack_slow(ptr, ret, 4);
> -
> -	return rc;
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
>   }
>   
>   /*
> diff --git a/arch/powerpc/perf/callchain_64.c b/arch/powerpc/perf/callchain_64.c
> index df1ffd8b20f2..55bbc25a54ed 100644
> --- a/arch/powerpc/perf/callchain_64.c
> +++ b/arch/powerpc/perf/callchain_64.c
> @@ -24,7 +24,7 @@
>    * interrupt context, so if the access faults, we read the page tables
>    * to find which page (if any) is mapped and access it directly.
>    */
> -int read_user_stack_slow(void __user *ptr, void *buf, int nb)
> +int read_user_stack_slow(const void __user *ptr, void *buf, int nb)
>   {
>   	int ret = -EFAULT;
>   	pgd_t *pgdir;
> @@ -65,16 +65,10 @@ int read_user_stack_slow(void __user *ptr, void *buf, int nb)
>   	return ret;
>   }
>   
> -static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
> +static int read_user_stack_64(const unsigned long __user *ptr,
> +			      unsigned long *ret)
>   {
> -	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned long) ||
> -	    ((unsigned long)ptr & 7))
> -		return -EFAULT;
> -
> -	if (!probe_user_read(ret, ptr, sizeof(*ret)))
> -		return 0;
> -
> -	return read_user_stack_slow(ptr, ret, 8);
> +	return __read_user_stack(ptr, ret, sizeof(*ret));
>   }
>   
>   /*
> 

Christophe

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-03-20 10:20   ` [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
@ 2020-04-07  5:50     ` Christophe Leroy
  2020-04-07  9:57       ` Michal Suchánek
  0 siblings, 1 reply; 53+ messages in thread
From: Christophe Leroy @ 2020-04-07  5:50 UTC (permalink / raw)
  To: Michal Suchanek, linuxppc-dev
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Alexander Viro, Mauro Carvalho Chehab, David S. Miller,
	Rob Herring, Greg Kroah-Hartman, Jonathan Cameron,
	Andy Shevchenko, Thomas Gleixner, Arnd Bergmann, Nayna Jain,
	Eric Richter, Claudio Carvalho, Nicholas Piggin, Hari Bathini,
	Masahiro Yamada, Thiago Jung Bauermann,
	Sebastian Andrzej Siewior, Valentin Schneider, Jordan Niethe,
	Michael Neuling, Gustavo Luiz Duarte, Allison Randal,
	Eric W. Biederman, linux-kernel, linux-fsdevel



On 20/03/2020 at 11:20, Michal Suchanek wrote:
> There are numerous references to 32bit functions in generic and 64bit
> code so ifdef them out.
> 
> Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> ---
> v2:
> - fix 32bit ifdef condition in signal.c
> - simplify the compat ifdef condition in vdso.c - 64bit is redundant
> - simplify the compat ifdef condition in callchain.c - 64bit is redundant
> v3:
> - use IS_ENABLED and maybe_unused where possible
> - do not ifdef declarations
> - clean up Makefile
> v4:
> - further makefile cleanup
> - simplify is_32bit_task conditions
> - avoid ifdef in condition by using return
> v5:
> - avoid unreachable code on 32bit
> - make is_current_64bit constant on !COMPAT
> - add stub perf_callchain_user_32 to avoid some ifdefs
> v6:
> - consolidate current_is_64bit
> v7:
> - remove leftover perf_callchain_user_32 stub from previous series version
> v8:
> - fix build again - too trigger-happy with stub removal
> - remove a vdso.c hunk that causes warning according to kbuild test robot
> v9:
> - removed current_is_64bit in previous patch
> v10:
> - rebase on top of 70ed86f4de5bd
> ---
>   arch/powerpc/include/asm/thread_info.h | 4 ++--
>   arch/powerpc/kernel/Makefile           | 6 +++---
>   arch/powerpc/kernel/entry_64.S         | 2 ++
>   arch/powerpc/kernel/signal.c           | 3 +--
>   arch/powerpc/kernel/syscall_64.c       | 6 ++----
>   arch/powerpc/kernel/vdso.c             | 3 ++-
>   arch/powerpc/perf/callchain.c          | 8 +++++++-
>   7 files changed, 19 insertions(+), 13 deletions(-)
> 

[...]

> diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
> index 87d95b455b83..2dcbfe38f5ac 100644
> --- a/arch/powerpc/kernel/syscall_64.c
> +++ b/arch/powerpc/kernel/syscall_64.c
> @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
>   				   long r6, long r7, long r8,
>   				   unsigned long r0, struct pt_regs *regs)
>   {
> -	unsigned long ti_flags;
>   	syscall_fn f;
>   
>   	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>   
>   	local_irq_enable();
>   
> -	ti_flags = current_thread_info()->flags;
> -	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
> +	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
>   		/*
>   		 * We use the return value of do_syscall_trace_enter() as the
>   		 * syscall number. If the syscall was rejected for any reason
> @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
>   	/* May be faster to do array_index_nospec? */
>   	barrier_nospec();
>   
> -	if (unlikely(ti_flags & _TIF_32BIT)) {
> +	if (unlikely(is_32bit_task())) {

is_compat() should be used here instead, because we don't want to use
compat_sys_call_table() on PPC32.

>   		f = (void *)compat_sys_call_table[r0];
>   
>   		r3 &= 0x00000000ffffffffULL;
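
I.e. something like this (untested; assuming is_compat_task(), which as
far as I can see linux/compat.h defines to constant 0 when CONFIG_COMPAT
is not set, so the whole block should compile out on PPC32 too):

	if (unlikely(is_compat_task())) {
		f = (void *)compat_sys_call_table[r0];

		r3 &= 0x00000000ffffffffULL;
	}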

Christophe

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT
  2020-04-07  5:50     ` Christophe Leroy
@ 2020-04-07  9:57       ` Michal Suchánek
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-04-07  9:57 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Thomas Gleixner,
	Arnd Bergmann, Nayna Jain, Eric Richter, Claudio Carvalho,
	Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

On Tue, Apr 07, 2020 at 07:50:30AM +0200, Christophe Leroy wrote:
> 
> 
> On 20/03/2020 at 11:20, Michal Suchanek wrote:
> > There are numerous references to 32bit functions in generic and 64bit
> > code so ifdef them out.
> > 
> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> > ---
> > v2:
> > - fix 32bit ifdef condition in signal.c
> > - simplify the compat ifdef condition in vdso.c - 64bit is redundant
> > - simplify the compat ifdef condition in callchain.c - 64bit is redundant
> > v3:
> > - use IS_ENABLED and maybe_unused where possible
> > - do not ifdef declarations
> > - clean up Makefile
> > v4:
> > - further makefile cleanup
> > - simplify is_32bit_task conditions
> > - avoid ifdef in condition by using return
> > v5:
> > - avoid unreachable code on 32bit
> > - make is_current_64bit constant on !COMPAT
> > - add stub perf_callchain_user_32 to avoid some ifdefs
> > v6:
> > - consolidate current_is_64bit
> > v7:
> > - remove leftover perf_callchain_user_32 stub from previous series version
> > v8:
> > - fix build again - too trigger-happy with stub removal
> > - remove a vdso.c hunk that causes warning according to kbuild test robot
> > v9:
> > - removed current_is_64bit in previous patch
> > v10:
> > - rebase on top of 70ed86f4de5bd
> > ---
> >   arch/powerpc/include/asm/thread_info.h | 4 ++--
> >   arch/powerpc/kernel/Makefile           | 6 +++---
> >   arch/powerpc/kernel/entry_64.S         | 2 ++
> >   arch/powerpc/kernel/signal.c           | 3 +--
> >   arch/powerpc/kernel/syscall_64.c       | 6 ++----
> >   arch/powerpc/kernel/vdso.c             | 3 ++-
> >   arch/powerpc/perf/callchain.c          | 8 +++++++-
> >   7 files changed, 19 insertions(+), 13 deletions(-)
> > 
> 
> [...]
> 
> > diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
> > index 87d95b455b83..2dcbfe38f5ac 100644
> > --- a/arch/powerpc/kernel/syscall_64.c
> > +++ b/arch/powerpc/kernel/syscall_64.c
> > @@ -24,7 +24,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >   				   long r6, long r7, long r8,
> >   				   unsigned long r0, struct pt_regs *regs)
> >   {
> > -	unsigned long ti_flags;
> >   	syscall_fn f;
> >   	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
> > @@ -68,8 +67,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >   	local_irq_enable();
> > -	ti_flags = current_thread_info()->flags;
> > -	if (unlikely(ti_flags & _TIF_SYSCALL_DOTRACE)) {
> > +	if (unlikely(current_thread_info()->flags & _TIF_SYSCALL_DOTRACE)) {
> >   		/*
> >   		 * We use the return value of do_syscall_trace_enter() as the
> >   		 * syscall number. If the syscall was rejected for any reason
> > @@ -94,7 +92,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
> >   	/* May be faster to do array_index_nospec? */
> >   	barrier_nospec();
> > -	if (unlikely(ti_flags & _TIF_32BIT)) {
> > +	if (unlikely(is_32bit_task())) {
> 
> is_compat() should be used here instead, because we don't want to use
is_compat_task()
> compat_sys_call_table() on PPC32.
> 
> >   		f = (void *)compat_sys_call_table[r0];
> >   		r3 &= 0x00000000ffffffffULL;
> 
That only applies once you use this for 32bit as well. Right now it's
64bit only so the two are the same.
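
FWIW, if I remember the powerpc headers right, is_compat_task() is just
a thin wrapper over the same flag test, roughly:

	static inline int is_compat_task(void)
	{
		return is_32bit_task();
	}

so on 64bit the generated code should be identical either way.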

Thanks

Michal

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH] powerpc: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32
  2020-04-07  5:21             ` Christophe Leroy
@ 2020-04-09 11:22               ` Michal Suchánek
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Suchánek @ 2020-04-09 11:22 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: linuxppc-dev, Nicholas Piggin, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, linux-kernel

On Tue, Apr 07, 2020 at 07:21:06AM +0200, Christophe Leroy wrote:
> 
> 
> On 06/04/2020 at 23:00, Michal Suchanek wrote:
> > perf_callchain_user_64 and perf_callchain_user_32 are nearly identical.
> > Consolidate into one function with thin wrappers.
> > 
> > Suggested-by: Nicholas Piggin <npiggin@gmail.com>
> > Signed-off-by: Michal Suchanek <msuchanek@suse.de>
> > ---
> >   arch/powerpc/perf/callchain.h    | 24 +++++++++++++++++++++++-
> >   arch/powerpc/perf/callchain_32.c | 21 ++-------------------
> >   arch/powerpc/perf/callchain_64.c | 14 ++++----------
> >   3 files changed, 29 insertions(+), 30 deletions(-)
> > 
> > diff --git a/arch/powerpc/perf/callchain.h b/arch/powerpc/perf/callchain.h
> > index 7a2cb9e1181a..7540bb71cb60 100644
> > --- a/arch/powerpc/perf/callchain.h
> > +++ b/arch/powerpc/perf/callchain.h
> > @@ -2,7 +2,7 @@
> >   #ifndef _POWERPC_PERF_CALLCHAIN_H
> >   #define _POWERPC_PERF_CALLCHAIN_H
> > -int read_user_stack_slow(void __user *ptr, void *buf, int nb);
> > +int read_user_stack_slow(const void __user *ptr, void *buf, int nb);
> 
> Does the constification of ptr have to be in this patch?
It was in the original patch. The code is touched anyway.
> Wouldn't it be better to have it as a separate patch?
Don't care much either way. Can resend it as separate patches.
> 
> >   void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
> >   			    struct pt_regs *regs);
> >   void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
> > @@ -16,4 +16,26 @@ static inline bool invalid_user_sp(unsigned long sp)
> >   	return (!sp || (sp & mask) || (sp > top));
> >   }
> > +/*
> > + * On 32-bit we just access the address and let hash_page create a
> > + * HPTE if necessary, so there is no need to fall back to reading
> > + * the page tables.  Since this is called at interrupt level,
> > + * do_page_fault() won't treat a DSI as a page fault.
> > + */
> > +static inline int __read_user_stack(const void __user *ptr, void *ret,
> > +				    size_t size)
> > +{
> > +	int rc;
> > +
> > +	if ((unsigned long)ptr > TASK_SIZE - size ||
> > +			((unsigned long)ptr & (size - 1)))
> > +		return -EFAULT;
> > +	rc = probe_user_read(ret, ptr, size);
> > +
> > +	if (rc && IS_ENABLED(CONFIG_PPC64))
> 
> gcc is probably smart enough to deal with it efficiently, but it would
> be more correct to test rc after checking CONFIG_PPC64.
IS_ENABLED(CONFIG_PPC64) is constant so that part of the check should be
compiled out in any case.
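
With CONFIG_PPC64=n the compiler effectively sees

	if (rc && 0)	/* IS_ENABLED(CONFIG_PPC64) folds to 0 */
		return read_user_stack_slow(ptr, ret, size);

so the branch and the call are dead code either way, whatever the
operand order.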

Thanks

Michal

^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32
  2020-03-19 11:52 [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
@ 2020-03-19 11:52 ` Michal Suchanek
  0 siblings, 0 replies; 53+ messages in thread
From: Michal Suchanek @ 2020-03-19 11:52 UTC (permalink / raw)
  To: linuxppc-dev
  Cc: Michal Suchanek, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Alexander Viro, Mauro Carvalho Chehab,
	David S. Miller, Rob Herring, Greg Kroah-Hartman,
	Jonathan Cameron, Andy Shevchenko, Christophe Leroy,
	Thomas Gleixner, Arnd Bergmann, Nayna Jain, Eric Richter,
	Claudio Carvalho, Nicholas Piggin, Hari Bathini, Masahiro Yamada,
	Thiago Jung Bauermann, Sebastian Andrzej Siewior,
	Valentin Schneider, Jordan Niethe, Michael Neuling,
	Gustavo Luiz Duarte, Allison Randal, Eric W. Biederman,
	linux-kernel, linux-fsdevel

There are two almost identical copies for 32bit and 64bit.

The function is used only in 32bit code, which will be split out in the
next patch, so consolidate the two copies into one function.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
v6:  new patch
v8:  move the consolidated function out of the ifdef block.
v11: rebase on top of def0bfdbd603
---
 arch/powerpc/perf/callchain.c | 48 +++++++++++++++++------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index cbc251981209..c9a78c6e4361 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -161,18 +161,6 @@ static int read_user_stack_64(unsigned long __user *ptr, unsigned long *ret)
 	return read_user_stack_slow(ptr, ret, 8);
 }
 
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
-{
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	if (!probe_user_read(ret, ptr, sizeof(*ret)))
-		return 0;
-
-	return read_user_stack_slow(ptr, ret, 4);
-}
-
 static inline int valid_user_sp(unsigned long sp, int is_64)
 {
 	if (!sp || (sp & 7) || sp > (is_64 ? TASK_SIZE : 0x100000000UL) - 32)
@@ -277,19 +265,9 @@ static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
 }
 
 #else  /* CONFIG_PPC64 */
-/*
- * On 32-bit we just access the address and let hash_page create a
- * HPTE if necessary, so there is no need to fall back to reading
- * the page tables.  Since this is called at interrupt level,
- * do_page_fault() won't treat a DSI as a page fault.
- */
-static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
 {
-	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
-	    ((unsigned long)ptr & 3))
-		return -EFAULT;
-
-	return probe_user_read(ret, ptr, sizeof(*ret));
+	return 0;
 }
 
 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
@@ -312,6 +290,28 @@ static inline int valid_user_sp(unsigned long sp, int is_64)
 
 #endif /* CONFIG_PPC64 */
 
+/*
+ * On 32-bit we just access the address and let hash_page create a
+ * HPTE if necessary, so there is no need to fall back to reading
+ * the page tables.  Since this is called at interrupt level,
+ * do_page_fault() won't treat a DSI as a page fault.
+ */
+static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
+{
+	int rc;
+
+	if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) ||
+	    ((unsigned long)ptr & 3))
+		return -EFAULT;
+
+	rc = probe_user_read(ret, ptr, sizeof(*ret));
+
+	if (IS_ENABLED(CONFIG_PPC64) && rc)
+		return read_user_stack_slow(ptr, ret, 4);
+
+	return rc;
+}
+
 /*
  * Layout for non-RT signal frames
  */
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2020-04-09 11:22 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20200225173541.1549955-1-npiggin@gmail.com>
2020-03-19 12:19 ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 2/8] powerpc: move common register copy functions from signal_32.c to signal.c Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
2020-03-24  8:48     ` Nicholas Piggin
2020-03-24 19:38       ` Michal Suchánek
2020-04-03  7:13         ` Nicholas Piggin
2020-04-03 10:52           ` Michal Suchánek
2020-04-03 11:26             ` Nicholas Piggin
2020-04-03 11:51               ` Michal Suchánek
2020-04-06 20:52           ` Michal Suchánek
2020-04-06 21:00           ` [PATCH] powerpc: perf: consolidate perf_callchain_user_64 and perf_callchain_user_32 Michal Suchanek
2020-04-07  5:21             ` Christophe Leroy
2020-04-09 11:22               ` Michal Suchánek
2020-03-19 12:19   ` [PATCH v11 4/8] powerpc/perf: consolidate valid_user_sp Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
2020-03-24  8:54     ` Nicholas Piggin
2020-03-24 19:30       ` Michal Suchánek
2020-04-03  7:16         ` Nicholas Piggin
2020-03-19 12:19   ` [PATCH v11 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 7/8] powerpc/perf: split callchain.c by bitness Michal Suchanek
2020-03-19 12:19   ` [PATCH v11 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
2020-03-19 13:37     ` Andy Shevchenko
2020-03-19 14:00       ` Michal Suchánek
2020-03-19 14:26         ` Andy Shevchenko
2020-03-19 17:03     ` Joe Perches
2020-03-19 12:36   ` [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Christophe Leroy
2020-03-19 14:01     ` Michal Suchánek
2020-04-03  7:25   ` Nicholas Piggin
2020-04-03  7:26     ` Christophe Leroy
2020-04-03  9:43       ` Nicholas Piggin
2020-04-05  0:40         ` Michael Ellerman
2020-03-20 10:20 ` [PATCH v12 0/8] Disable compat cruft on ppc64le v12 Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 1/8] powerpc: Add back __ARCH_WANT_SYS_LLSEEK macro Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 2/8] powerpc: move common register copy functions from signal_32.c to signal.c Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 4/8] powerpc/perf: consolidate valid_user_sp -> invalid_user_sp Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 5/8] powerpc/64: make buildable without CONFIG_COMPAT Michal Suchanek
2020-04-07  5:50     ` Christophe Leroy
2020-04-07  9:57       ` Michal Suchánek
2020-03-20 10:20   ` [PATCH v12 6/8] powerpc/64: Make COMPAT user-selectable disabled on littleendian by default Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 7/8] powerpc/perf: split callchain.c by bitness Michal Suchanek
2020-03-20 10:20   ` [PATCH v12 8/8] MAINTAINERS: perf: Add pattern that matches ppc perf to the perf entry Michal Suchanek
2020-03-20 10:33     ` Andy Shevchenko
2020-03-20 11:23       ` Michal Suchánek
2020-03-20 12:42         ` Andy Shevchenko
2020-03-20 14:42           ` Joe Perches
2020-03-20 16:28             ` Michal Suchánek
2020-03-20 16:31             ` Andy Shevchenko
2020-03-20 16:42               ` Michal Suchánek
2020-03-20 16:47                 ` Andy Shevchenko
2020-03-20 21:36               ` Joe Perches
2020-03-19 11:52 [PATCH v11 0/8] Disable compat cruft on ppc64le v11 Michal Suchanek
2020-03-19 11:52 ` [PATCH v11 3/8] powerpc/perf: consolidate read_user_stack_32 Michal Suchanek
