linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
@ 2019-01-22 15:57 Joe Lawrence
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: Joe Lawrence @ 2019-01-22 15:57 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, live-patching
  Cc: Nicolai Stange, Jiri Kosina, Josh Poimboeuf, Torsten Duwe

This patchset fixes a false negative report (ie, a trace wrongly flagged
as unreliable) from the ppc64 reliable stack unwinder, discussed in [1],
which may inadvertently trip over a stale exception marker left on the
stack.

The first two patches fix this bug.  Nicolai's change clears the marker
from the stack when an exception is finished.  The next patch modifies
the unwinder to look for such on-stack elements only when the ABI
guarantees that they will actually be initialized.

The final two patches consist of code cleanups that Nicolai and I
spotted during the development of the fixes.

Testing included re-running the original test scenario (loading a
livepatch module on ppc64le) on a 5.0.0-rc2 kernel as well as a RHEL-7
backport.  I ran internal tests on the RHEL-7 backport and no new test
failures were introduced.  I believe that Nicolai has done the same
with respect to the first patch.

[1] https://lore.kernel.org/lkml/7f468285-b149-37e2-e782-c9e538b997a9@redhat.com/

Joe Lawrence (3):
  powerpc/livepatch: relax reliable stack tracer checks for first-frame
  powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
  powerpc/livepatch: return -ERRNO values in
    save_stack_trace_tsk_reliable()

Nicolai Stange (1):
  powerpc/64s: Clear on-stack exception marker upon exception return

 arch/powerpc/Kconfig             |  2 +-
 arch/powerpc/kernel/entry_64.S   |  7 +++
 arch/powerpc/kernel/stacktrace.c | 74 +++++++++++++++++---------------
 3 files changed, 47 insertions(+), 36 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
@ 2019-01-22 15:57 ` Joe Lawrence
  2019-01-30 12:27   ` Michael Ellerman
                     ` (2 more replies)
  2019-01-22 15:57 ` [PATCH 2/4] powerpc/livepatch: relax reliable stack tracer checks for first-frame Joe Lawrence
                   ` (4 subsequent siblings)
  5 siblings, 3 replies; 21+ messages in thread
From: Joe Lawrence @ 2019-01-22 15:57 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, live-patching
  Cc: Nicolai Stange, Jiri Kosina, Josh Poimboeuf, Torsten Duwe

From: Nicolai Stange <nstange@suse.de>

The ppc64 specific implementation of the reliable stacktracer,
save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
trace" whenever it finds an exception frame on the stack. Stack frames
are classified as exception frames if the STACK_FRAME_REGS_MARKER magic,
as written by exception prologues, is found at a particular location.

However, as observed by Joe Lawrence, it is possible in practice that
non-exception stack frames alias with prior exception frames and thus
that the reliable stacktracer can find a stale STACK_FRAME_REGS_MARKER on
the stack. This in turn falsely reports an unreliable stacktrace and blocks
any live patching transition from finishing. Said condition lasts until the
stack frame is overwritten/initialized by a function call or other means.

In principle, we could mitigate this by making the exception frame
classification condition in save_stack_trace_tsk_reliable() stronger:
in addition to testing for STACK_FRAME_REGS_MARKER, we could also take into
account that for all exceptions executing on the kernel stack
- their stack frames' backlink pointers always match what is saved
  in their pt_regs instance's ->gpr[1] slot and that
- their exception frame size equals STACK_INT_FRAME_SIZE, a value
  uncommonly large for non-exception frames.

However, while these are currently true, relying on them would make the
reliable stacktrace implementation more sensitive towards future changes in
the exception entry code. Note that false negatives, i.e. not detecting
exception frames, would silently break the live patching consistency model.

Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
rely on STACK_FRAME_REGS_MARKER as well.

Make the exception exit code clear the on-stack STACK_FRAME_REGS_MARKER
for those exceptions running on the "normal" kernel stack and returning
to kernelspace: because the topmost frame is ignored by the reliable stack
tracer anyway, returns to userspace don't need to take care of clearing
the marker.

Furthermore, as I don't have the ability to test this on Book 3E or
32 bits, limit the change to Book 3S and 64 bits.

Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
PPC_BOOK3S_64, there's no functional change here.

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 arch/powerpc/Kconfig           | 2 +-
 arch/powerpc/kernel/entry_64.S | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2890d36eb531..73bf87b1d274 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,7 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
-	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
+	select HAVE_RELIABLE_STACKTRACE		if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 435927f549c4..a2c168b395d2 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	ld	r2,_NIP(r1)
 	mtspr	SPRN_SRR0,r2
 
+	/*
+	 * Leaving a stale exception_marker on the stack can confuse
+	 * the reliable stack unwinder later on. Clear it.
+	 */
+	li	r2,0
+	std	r2,STACK_FRAME_OVERHEAD-16(r1)
+
 	ld	r0,GPR0(r1)
 	ld	r2,GPR2(r1)
 	ld	r3,GPR3(r1)
-- 
2.20.1



* [PATCH 2/4] powerpc/livepatch: relax reliable stack tracer checks for first-frame
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
@ 2019-01-22 15:57 ` Joe Lawrence
  2019-01-22 15:57 ` [PATCH 3/4] powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable() Joe Lawrence
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 21+ messages in thread
From: Joe Lawrence @ 2019-01-22 15:57 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, live-patching
  Cc: Nicolai Stange, Jiri Kosina, Josh Poimboeuf, Torsten Duwe

The bottom-most stack frame (the first to be unwound) may be largely
uninitialized, as the "Power Architecture 64-Bit ELF V2 ABI" only
requires its backchain pointer to be set.

The reliable stack tracer should be careful when verifying this frame:
skip checks on STACK_FRAME_LR_SAVE and STACK_FRAME_MARKER offsets that
may contain uninitialized residual data.

Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 arch/powerpc/kernel/stacktrace.c | 32 ++++++++++++++++++++++++--------
 1 file changed, 24 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index e2c50b55138f..06688f4d557b 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -84,6 +84,12 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
 
 #ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/*
+ * This function returns an error if it detects any unreliable features of the
+ * stack.  Otherwise it guarantees that the stack trace is reliable.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
 int
 save_stack_trace_tsk_reliable(struct task_struct *tsk,
 				struct stack_trace *trace)
@@ -142,12 +148,6 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		if (sp & 0xF)
 			return 1;
 
-		/* Mark stacktraces with exception frames as unreliable. */
-		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
-		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
-			return 1;
-		}
-
 		newsp = stack[0];
 		/* Stack grows downwards; unwinder may only go up. */
 		if (newsp <= sp)
@@ -158,11 +158,26 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 			return 1; /* invalid backlink, too far up. */
 		}
 
+		/*
+		 * We can only trust the bottom frame's backlink, the
+		 * rest of the frame may be uninitialized, continue to
+		 * the next.
+		 */
+		if (firstframe) {
+			firstframe = 0;
+			goto next;
+		}
+
+		/* Mark stacktraces with exception frames as unreliable. */
+		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
+		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+			return 1;
+		}
+
 		/* Examine the saved LR: it must point into kernel code. */
 		ip = stack[STACK_FRAME_LR_SAVE];
-		if (!firstframe && !__kernel_text_address(ip))
+		if (!__kernel_text_address(ip))
 			return 1;
-		firstframe = 0;
 
 		/*
 		 * FIXME: IMHO these tests do not belong in
@@ -183,6 +198,7 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		else
 			trace->skip--;
 
+next:
 		if (newsp == stack_end)
 			break;
 
-- 
2.20.1



* [PATCH 3/4] powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
  2019-01-22 15:57 ` [PATCH 2/4] powerpc/livepatch: relax reliable stack tracer checks for first-frame Joe Lawrence
@ 2019-01-22 15:57 ` Joe Lawrence
  2019-01-22 15:57 ` [PATCH 4/4] powerpc/livepatch: return -ERRNO values " Joe Lawrence
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 21+ messages in thread
From: Joe Lawrence @ 2019-01-22 15:57 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, live-patching
  Cc: Nicolai Stange, Jiri Kosina, Josh Poimboeuf, Torsten Duwe

Mostly cosmetic changes:

- Group common stack pointer code at the top
- Simplify the first frame logic
- Convert stack frame iteration into a for-loop construct
- Check for trace->nr_entries overflow before adding any into the array

Suggested-by: Nicolai Stange <nstange@suse.de>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 arch/powerpc/kernel/stacktrace.c | 40 +++++++++++---------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index 06688f4d557b..28c3c25755d7 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -95,20 +95,11 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 				struct stack_trace *trace)
 {
 	unsigned long sp;
+	unsigned long newsp;
 	unsigned long stack_page = (unsigned long)task_stack_page(tsk);
 	unsigned long stack_end;
 	int graph_idx = 0;
-
-	/*
-	 * The last frame (unwinding first) may not yet have saved
-	 * its LR onto the stack.
-	 */
-	int firstframe = 1;
-
-	if (tsk == current)
-		sp = current_stack_pointer();
-	else
-		sp = tsk->thread.ksp;
+	bool firstframe;
 
 	stack_end = stack_page + THREAD_SIZE;
 	if (!is_idle_task(tsk)) {
@@ -135,14 +126,20 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		stack_end -= STACK_FRAME_OVERHEAD;
 	}
 
+	if (tsk == current)
+		sp = current_stack_pointer();
+	else
+		sp = tsk->thread.ksp;
+
 	if (sp < stack_page + sizeof(struct thread_struct) ||
 	    sp > stack_end - STACK_FRAME_MIN_SIZE) {
 		return 1;
 	}
 
-	for (;;) {
+	for (firstframe = true; sp != stack_end;
+	     firstframe = false, sp = newsp) {
 		unsigned long *stack = (unsigned long *) sp;
-		unsigned long newsp, ip;
+		unsigned long ip;
 
 		/* sanity check: ABI requires SP to be aligned 16 bytes. */
 		if (sp & 0xF)
@@ -163,10 +160,8 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		 * rest of the frame may be uninitialized, continue to
 		 * the next.
 		 */
-		if (firstframe) {
-			firstframe = 0;
-			goto next;
-		}
+		if (firstframe)
+			continue;
 
 		/* Mark stacktraces with exception frames as unreliable. */
 		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
@@ -193,19 +188,12 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 			return 1;
 #endif
 
+		if (trace->nr_entries >= trace->max_entries)
+			return -E2BIG;
 		if (!trace->skip)
 			trace->entries[trace->nr_entries++] = ip;
 		else
 			trace->skip--;
-
-next:
-		if (newsp == stack_end)
-			break;
-
-		if (trace->nr_entries >= trace->max_entries)
-			return -E2BIG;
-
-		sp = newsp;
 	}
 	return 0;
 }
-- 
2.20.1



* [PATCH 4/4] powerpc/livepatch: return -ERRNO values in save_stack_trace_tsk_reliable()
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
                   ` (2 preceding siblings ...)
  2019-01-22 15:57 ` [PATCH 3/4] powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable() Joe Lawrence
@ 2019-01-22 15:57 ` Joe Lawrence
  2019-02-02  0:59   ` Balbir Singh
  2019-01-29 21:10 ` [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Josh Poimboeuf
  2019-01-29 23:50 ` Jiri Kosina
  5 siblings, 1 reply; 21+ messages in thread
From: Joe Lawrence @ 2019-01-22 15:57 UTC (permalink / raw)
  To: linux-kernel, linuxppc-dev, live-patching
  Cc: Nicolai Stange, Jiri Kosina, Josh Poimboeuf, Torsten Duwe

To match its x86 counterpart, save_stack_trace_tsk_reliable() should
return -EINVAL in the cases where it currently returns 1.  No caller
currently differentiates between non-zero error codes, but let's keep the
arch-specific implementations consistent.

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 arch/powerpc/kernel/stacktrace.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index 28c3c25755d7..cf31ce6c1f53 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -133,7 +133,7 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 
 	if (sp < stack_page + sizeof(struct thread_struct) ||
 	    sp > stack_end - STACK_FRAME_MIN_SIZE) {
-		return 1;
+		return -EINVAL;
 	}
 
 	for (firstframe = true; sp != stack_end;
@@ -143,16 +143,16 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 
 		/* sanity check: ABI requires SP to be aligned 16 bytes. */
 		if (sp & 0xF)
-			return 1;
+			return -EINVAL;
 
 		newsp = stack[0];
 		/* Stack grows downwards; unwinder may only go up. */
 		if (newsp <= sp)
-			return 1;
+			return -EINVAL;
 
 		if (newsp != stack_end &&
 		    newsp > stack_end - STACK_FRAME_MIN_SIZE) {
-			return 1; /* invalid backlink, too far up. */
+			return -EINVAL; /* invalid backlink, too far up. */
 		}
 
 		/*
@@ -166,13 +166,13 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		/* Mark stacktraces with exception frames as unreliable. */
 		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
 		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
-			return 1;
+			return -EINVAL;
 		}
 
 		/* Examine the saved LR: it must point into kernel code. */
 		ip = stack[STACK_FRAME_LR_SAVE];
 		if (!__kernel_text_address(ip))
-			return 1;
+			return -EINVAL;
 
 		/*
 		 * FIXME: IMHO these tests do not belong in
@@ -185,7 +185,7 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
 		 * as unreliable.
 		 */
 		if (ip == (unsigned long)kretprobe_trampoline)
-			return 1;
+			return -EINVAL;
 #endif
 
 		if (trace->nr_entries >= trace->max_entries)
-- 
2.20.1



* Re: [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
                   ` (3 preceding siblings ...)
  2019-01-22 15:57 ` [PATCH 4/4] powerpc/livepatch: return -ERRNO values " Joe Lawrence
@ 2019-01-29 21:10 ` Josh Poimboeuf
  2019-01-29 23:50 ` Jiri Kosina
  5 siblings, 0 replies; 21+ messages in thread
From: Josh Poimboeuf @ 2019-01-29 21:10 UTC (permalink / raw)
  To: Joe Lawrence
  Cc: Nicolai Stange, linux-kernel, Torsten Duwe, Jiri Kosina,
	live-patching, linuxppc-dev

On Tue, Jan 22, 2019 at 10:57:20AM -0500, Joe Lawrence wrote:
> This patchset fixes a false negative report (ie, unreliable) from the
> ppc64 reliable stack unwinder, discussed here [1] when it may
> inadvertently trip over a stale exception marker left on the stack.
> 
> The first two patches fix this bug.  Nicolai's change clears the marker
> from the stack when an exception is finished.  The next patch modifies
> the unwinder to only look for such on stack elements when the ABI
> guarantees that they will actually be initialized.
> 
> The final two patches consist of code cleanups that Nicolai and I
> spotted during the development of the fixes.
> 
> Testing included re-running the original test scenario (loading a
> livepatch module on ppc64le) on a 5.0.0-rc2 kernel as well as a RHEL-7
> backport.  I ran internal tests on the RHEL-7 backport and no new test
> failures were introduced.  I believe that Nicolai has done the same
> with respect to the first patch.
> 
> [1] https://lore.kernel.org/lkml/7f468285-b149-37e2-e782-c9e538b997a9@redhat.com/
> 
> Joe Lawrence (3):
>   powerpc/livepatch: relax reliable stack tracer checks for first-frame
>   powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
>   powerpc/livepatch: return -ERRNO values in
>     save_stack_trace_tsk_reliable()
> 
> Nicolai Stange (1):
>   powerpc/64s: Clear on-stack exception marker upon exception return
> 
>  arch/powerpc/Kconfig             |  2 +-
>  arch/powerpc/kernel/entry_64.S   |  7 +++
>  arch/powerpc/kernel/stacktrace.c | 74 +++++++++++++++++---------------
>  3 files changed, 47 insertions(+), 36 deletions(-)

Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh


* Re: [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
  2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
                   ` (4 preceding siblings ...)
  2019-01-29 21:10 ` [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Josh Poimboeuf
@ 2019-01-29 23:50 ` Jiri Kosina
  2019-01-30 12:22   ` Michael Ellerman
  5 siblings, 1 reply; 21+ messages in thread
From: Jiri Kosina @ 2019-01-29 23:50 UTC (permalink / raw)
  To: Joe Lawrence
  Cc: Nicolai Stange, linux-kernel, Torsten Duwe, Josh Poimboeuf,
	live-patching, linuxppc-dev

On Tue, 22 Jan 2019, Joe Lawrence wrote:

> This patchset fixes a false negative report (ie, unreliable) from the
> ppc64 reliable stack unwinder, discussed here [1] when it may
> inadvertently trip over a stale exception marker left on the stack.
> 
> The first two patches fix this bug.  Nicolai's change clears the marker
> from the stack when an exception is finished.  The next patch modifies
> the unwinder to only look for such on stack elements when the ABI
> guarantees that they will actually be initialized.
> 
> The final two patches consist of code cleanups that Nicolai and I
> spotted during the development of the fixes.
> 
> Testing included re-running the original test scenario (loading a
> livepatch module on ppc64le) on a 5.0.0-rc2 kernel as well as a RHEL-7
> backport.  I ran internal tests on the RHEL-7 backport and no new test
> failures were introduced.  I believe that Nicolai has done the same
> with respect to the first patch.
> 
> [1] https://lore.kernel.org/lkml/7f468285-b149-37e2-e782-c9e538b997a9@redhat.com/
> 
> Joe Lawrence (3):
>   powerpc/livepatch: relax reliable stack tracer checks for first-frame
>   powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
>   powerpc/livepatch: return -ERRNO values in
>     save_stack_trace_tsk_reliable()
> 
> Nicolai Stange (1):
>   powerpc/64s: Clear on-stack exception marker upon exception return

Michael, are you fine with this going through LP tree, or do you plan to 
take it through yours?

Thanks,

-- 
Jiri Kosina
SUSE Labs



* Re: [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
  2019-01-29 23:50 ` Jiri Kosina
@ 2019-01-30 12:22   ` Michael Ellerman
  2019-01-30 13:28     ` Jiri Kosina
  0 siblings, 1 reply; 21+ messages in thread
From: Michael Ellerman @ 2019-01-30 12:22 UTC (permalink / raw)
  To: Jiri Kosina, Joe Lawrence
  Cc: Nicolai Stange, linux-kernel, Torsten Duwe, Josh Poimboeuf,
	live-patching, linuxppc-dev

Jiri Kosina <jikos@kernel.org> writes:
> On Tue, 22 Jan 2019, Joe Lawrence wrote:
>
>> This patchset fixes a false negative report (ie, unreliable) from the
>> ppc64 reliable stack unwinder, discussed here [1] when it may
>> inadvertently trip over a stale exception marker left on the stack.
>> 
>> The first two patches fix this bug.  Nicolai's change clears the marker
>> from the stack when an exception is finished.  The next patch modifies
>> the unwinder to only look for such on stack elements when the ABI
>> guarantees that they will actually be initialized.
>> 
>> The final two patches consist of code cleanups that Nicolai and I
>> spotted during the development of the fixes.
>> 
>> Testing included re-running the original test scenario (loading a
>> livepatch module on ppc64le) on a 5.0.0-rc2 kernel as well as a RHEL-7
>> backport.  I ran internal tests on the RHEL-7 backport and no new test
>> failures were introduced.  I believe that Nicolai has done the same
>> with respect to the first patch.
>> 
>> [1] https://lore.kernel.org/lkml/7f468285-b149-37e2-e782-c9e538b997a9@redhat.com/
>> 
>> Joe Lawrence (3):
>>   powerpc/livepatch: relax reliable stack tracer checks for first-frame
>>   powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable()
>>   powerpc/livepatch: return -ERRNO values in
>>     save_stack_trace_tsk_reliable()
>> 
>> Nicolai Stange (1):
>>   powerpc/64s: Clear on-stack exception marker upon exception return
>
> Michael, are you fine with this going through LP tree, or do you plan to 
> take it through yours?

I'm happy to take it, unless there's some reason you'd rather it go via
the LP tree?

I don't have any automated live patch tests, but I assume if it's in
linux-next someone can test it? :)

cheers


* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
@ 2019-01-30 12:27   ` Michael Ellerman
  2019-01-30 17:18     ` Nicolai Stange
  2019-02-02  1:14   ` Balbir Singh
  2019-02-08 13:02   ` [1/4] " Michael Ellerman
  2 siblings, 1 reply; 21+ messages in thread
From: Michael Ellerman @ 2019-01-30 12:27 UTC (permalink / raw)
  To: Joe Lawrence, linux-kernel, linuxppc-dev, live-patching
  Cc: Jiri Kosina, Torsten Duwe, Nicolai Stange, Josh Poimboeuf

Joe Lawrence <joe.lawrence@redhat.com> writes:
> From: Nicolai Stange <nstange@suse.de>
>
> The ppc64 specific implementation of the reliable stacktracer,
> save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
> trace" whenever it finds an exception frame on the stack. Stack frames
> are classified as exception frames if the STACK_FRAME_REGS_MARKER magic,
> as written by exception prologues, is found at a particular location.
>
> However, as observed by Joe Lawrence, it is possible in practice that
> non-exception stack frames can alias with prior exception frames and thus,
> that the reliable stacktracer can find a stale STACK_FRAME_REGS_MARKER on
> the stack. It in turn falsely reports an unreliable stacktrace and blocks
> any live patching transition to finish. Said condition lasts until the
> stack frame is overwritten/initialized by function call or other means.
>
> In principle, we could mitigate this by making the exception frame
> classification condition in save_stack_trace_tsk_reliable() stronger:
> in addition to testing for STACK_FRAME_REGS_MARKER, we could also take into
> account that for all exceptions executing on the kernel stack
> - their stack frames's backlink pointers always match what is saved
>   in their pt_regs instance's ->gpr[1] slot and that
> - their exception frame size equals STACK_INT_FRAME_SIZE, a value
>   uncommonly large for non-exception frames.
>
> However, while these are currently true, relying on them would make the
> reliable stacktrace implementation more sensitive towards future changes in
> the exception entry code. Note that false negatives, i.e. not detecting
> exception frames, would silently break the live patching consistency model.
>
> Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
> rely on STACK_FRAME_REGS_MARKER as well.
>
> Make the exception exit code clear the on-stack STACK_FRAME_REGS_MARKER
> for those exceptions running on the "normal" kernel stack and returning
> to kernelspace: because the topmost frame is ignored by the reliable stack
> tracer anyway, returns to userspace don't need to take care of clearing
> the marker.
>
> Furthermore, as I don't have the ability to test this on Book 3E or
> 32 bits, limit the change to Book 3S and 64 bits.
>
> Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
> PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
> on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
> PPC_BOOK3S_64, there's no functional change here.

That has nothing to do with the fix and should really be in a separate
patch.

I can split it when applying.

cheers

> Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
> Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
> Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
> ---
>  arch/powerpc/Kconfig           | 2 +-
>  arch/powerpc/kernel/entry_64.S | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 2890d36eb531..73bf87b1d274 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -220,7 +220,7 @@ config PPC
>  	select HAVE_PERF_USER_STACK_DUMP
>  	select HAVE_RCU_TABLE_FREE		if SMP
>  	select HAVE_REGS_AND_STACK_ACCESS_API
> -	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
> +	select HAVE_RELIABLE_STACKTRACE		if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
>  	select HAVE_SYSCALL_TRACEPOINTS
>  	select HAVE_VIRT_CPU_ACCOUNTING
>  	select HAVE_IRQ_TIME_ACCOUNTING
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 435927f549c4..a2c168b395d2 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>  	ld	r2,_NIP(r1)
>  	mtspr	SPRN_SRR0,r2
>  
> +	/*
> +	 * Leaving a stale exception_marker on the stack can confuse
> +	 * the reliable stack unwinder later on. Clear it.
> +	 */
> +	li	r2,0
> +	std	r2,STACK_FRAME_OVERHEAD-16(r1)
> +
>  	ld	r0,GPR0(r1)
>  	ld	r2,GPR2(r1)
>  	ld	r3,GPR3(r1)
> -- 
> 2.20.1


* Re: [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
  2019-01-30 12:22   ` Michael Ellerman
@ 2019-01-30 13:28     ` Jiri Kosina
  2019-01-31  5:46       ` Michael Ellerman
  0 siblings, 1 reply; 21+ messages in thread
From: Jiri Kosina @ 2019-01-30 13:28 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Joe Lawrence, Nicolai Stange, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching, linuxppc-dev

On Wed, 30 Jan 2019, Michael Ellerman wrote:

> I'm happy to take it, unless there's some reason you'd rather it go via 
> the LP tree?

As this is more about reliable stack unwinding than live patching per se 
(which is only a user of that facility), I'd actually slightly prefer if 
it goes via your tree.

Thanks,

-- 
Jiri Kosina
SUSE Labs



* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-30 12:27   ` Michael Ellerman
@ 2019-01-30 17:18     ` Nicolai Stange
  2019-01-31  5:46       ` Michael Ellerman
  0 siblings, 1 reply; 21+ messages in thread
From: Nicolai Stange @ 2019-01-30 17:18 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Joe Lawrence, Nicolai Stange, Jiri Kosina, linux-kernel,
	Torsten Duwe, Josh Poimboeuf, live-patching, linuxppc-dev

Michael Ellerman <mpe@ellerman.id.au> writes:

> Joe Lawrence <joe.lawrence@redhat.com> writes:
>> From: Nicolai Stange <nstange@suse.de>
>>
>> The ppc64 specific implementation of the reliable stacktracer,
>> save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
>> trace" whenever it finds an exception frame on the stack. Stack frames
>> are classified as exception frames if the STACK_FRAME_REGS_MARKER magic,
>> as written by exception prologues, is found at a particular location.
>>
>> However, as observed by Joe Lawrence, it is possible in practice that
>> non-exception stack frames can alias with prior exception frames and thus,
>> that the reliable stacktracer can find a stale STACK_FRAME_REGS_MARKER on
>> the stack. It in turn falsely reports an unreliable stacktrace and blocks
>> any live patching transition to finish. Said condition lasts until the
>> stack frame is overwritten/initialized by function call or other means.
>>
>> In principle, we could mitigate this by making the exception frame
>> classification condition in save_stack_trace_tsk_reliable() stronger:
>> in addition to testing for STACK_FRAME_REGS_MARKER, we could also take into
>> account that for all exceptions executing on the kernel stack
>> - their stack frames's backlink pointers always match what is saved
>>   in their pt_regs instance's ->gpr[1] slot and that
>> - their exception frame size equals STACK_INT_FRAME_SIZE, a value
>>   uncommonly large for non-exception frames.
>>
>> However, while these are currently true, relying on them would make the
>> reliable stacktrace implementation more sensitive towards future changes in
>> the exception entry code. Note that false negatives, i.e. not detecting
>> exception frames, would silently break the live patching consistency model.
>>
>> Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
>> rely on STACK_FRAME_REGS_MARKER as well.
>>
>> Make the exception exit code clear the on-stack STACK_FRAME_REGS_MARKER
>> for those exceptions running on the "normal" kernel stack and returning
>> to kernelspace: because the topmost frame is ignored by the reliable stack
>> tracer anyway, returns to userspace don't need to take care of clearing
>> the marker.
>>
>> Also, as I don't have the ability to test this on Book 3E or 32-bit
>> configurations, limit the change to 64-bit Book 3S.
>>
>> Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
>> PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
>> on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
>> PPC_BOOK3S_64, there's no functional change here.
>
> That has nothing to do with the fix and should really be in a separate
> patch.
>
> I can split it when applying.

If you don't mind, that would be nice! Or simply drop that
chunk... Otherwise, let me know if I shall send a split v2 for this
patch [1/4] only.

Thanks,

Nicolai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-30 17:18     ` Nicolai Stange
@ 2019-01-31  5:46       ` Michael Ellerman
  0 siblings, 0 replies; 21+ messages in thread
From: Michael Ellerman @ 2019-01-31  5:46 UTC (permalink / raw)
  To: Nicolai Stange
  Cc: Joe Lawrence, Nicolai Stange, Jiri Kosina, linux-kernel,
	Torsten Duwe, Josh Poimboeuf, live-patching, linuxppc-dev

Nicolai Stange <nstange@suse.de> writes:

> Michael Ellerman <mpe@ellerman.id.au> writes:
>
>> Joe Lawrence <joe.lawrence@redhat.com> writes:
>>> From: Nicolai Stange <nstange@suse.de>
>>>
>>> [ ... commit message snipped; quoted in full earlier in the thread ... ]
>>> Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
>>> PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
>>> on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
>>> PPC_BOOK3S_64, there's no functional change here.
>>
>> That has nothing to do with the fix and should really be in a separate
>> patch.
>>
>> I can split it when applying.
>
> If you don't mind, that would be nice! Or simply drop that
> chunk... Otherwise, let me know if I shall send a split v2 for this
> patch [1/4] only.

No worries, I split it out:

commit a50d3250d7ae34c561177a1f9cfb79816fcbcff1
Author:     Nicolai Stange <nstange@suse.de>
AuthorDate: Thu Jan 31 16:41:50 2019 +1100
Commit:     Michael Ellerman <mpe@ellerman.id.au>
CommitDate: Thu Jan 31 16:43:29 2019 +1100

    powerpc/64s: Make reliable stacktrace dependency clearer
    
    Make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
    PPC_BOOK3S_64 for documentation purposes. Before this patch, it
    depended on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN
    implies PPC_BOOK3S_64, there's no functional change here.
    
    Signed-off-by: Nicolai Stange <nstange@suse.de>
    Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
    [mpe: Split out of larger patch]
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 2890d36eb531..73bf87b1d274 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,7 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
-	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
+	select HAVE_RELIABLE_STACKTRACE		if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING


cheers

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes
  2019-01-30 13:28     ` Jiri Kosina
@ 2019-01-31  5:46       ` Michael Ellerman
  0 siblings, 0 replies; 21+ messages in thread
From: Michael Ellerman @ 2019-01-31  5:46 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Joe Lawrence, Nicolai Stange, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching, linuxppc-dev

Jiri Kosina <jikos@kernel.org> writes:

> On Wed, 30 Jan 2019, Michael Ellerman wrote:
>
>> I'm happy to take it, unless there's some reason you'd rather it go via 
>> the LP tree?
>
> As this is more about reliable stack unwinding than live patching per se 
> (which is only a user of that facility), I'd actually slightly prefer if 
> it goes via your tree.

Sure thing, I'll take it.

cheers

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 4/4] powerpc/livepatch: return -ERRNO values in save_stack_trace_tsk_reliable()
  2019-01-22 15:57 ` [PATCH 4/4] powerpc/livepatch: return -ERRNO values " Joe Lawrence
@ 2019-02-02  0:59   ` Balbir Singh
  0 siblings, 0 replies; 21+ messages in thread
From: Balbir Singh @ 2019-02-02  0:59 UTC (permalink / raw)
  To: Joe Lawrence
  Cc: Nicolai Stange, Jiri Kosina, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching, linuxppc-dev

On Tue, Jan 22, 2019 at 10:57:24AM -0500, Joe Lawrence wrote:
> To match its x86 counterpart, save_stack_trace_tsk_reliable() should
> return -EINVAL in cases that it is currently returning 1.  No caller is
> currently differentiating non-zero error codes, but let's keep the
> arch-specific implementations consistent.
> 
> Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>

Seems straightforward

Acked-by: Balbir Singh <bsingharora@gmail.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
  2019-01-30 12:27   ` Michael Ellerman
@ 2019-02-02  1:14   ` Balbir Singh
  2019-02-02  3:42     ` Balbir Singh
  2019-02-08 13:02   ` [1/4] " Michael Ellerman
  2 siblings, 1 reply; 21+ messages in thread
From: Balbir Singh @ 2019-02-02  1:14 UTC (permalink / raw)
  To: Joe Lawrence
  Cc: Nicolai Stange, Jiri Kosina, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching, linuxppc-dev

On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> From: Nicolai Stange <nstange@suse.de>
> 
> [ ... commit message snipped; quoted in full earlier in the thread ... ]
> 
> Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
> Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
> Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
> ---
>  arch/powerpc/Kconfig           | 2 +-
>  arch/powerpc/kernel/entry_64.S | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> [ ... Kconfig hunk snipped; quoted earlier in the thread ... ]
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 435927f549c4..a2c168b395d2 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>  	ld	r2,_NIP(r1)
>  	mtspr	SPRN_SRR0,r2
>  
> +	/*
> +	 * Leaving a stale exception_marker on the stack can confuse
> +	 * the reliable stack unwinder later on. Clear it.
> +	 */
> +	li	r2,0
> +	std	r2,STACK_FRAME_OVERHEAD-16(r1)
> +

Could you please double check, r4 is already 0 at this point
IIUC. So the change might be a simple

std r4,STACK_FRAME_OVERHEAD-16(r1)

Balbir

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-02-02  1:14   ` Balbir Singh
@ 2019-02-02  3:42     ` Balbir Singh
  2019-02-05 11:24       ` Michael Ellerman
  0 siblings, 1 reply; 21+ messages in thread
From: Balbir Singh @ 2019-02-02  3:42 UTC (permalink / raw)
  To: Joe Lawrence
  Cc: Nicolai Stange, Jiri Kosina, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh <bsingharora@gmail.com> wrote:
>
> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> > From: Nicolai Stange <nstange@suse.de>
> >
> > [ ... commit message and Kconfig hunk snipped; quoted in full earlier in the thread ... ]
> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> > index 435927f549c4..a2c168b395d2 100644
> > --- a/arch/powerpc/kernel/entry_64.S
> > +++ b/arch/powerpc/kernel/entry_64.S
> > @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
> >       ld      r2,_NIP(r1)
> >       mtspr   SPRN_SRR0,r2
> >
> > +     /*
> > +      * Leaving a stale exception_marker on the stack can confuse
> > +      * the reliable stack unwinder later on. Clear it.
> > +      */
> > +     li      r2,0
> > +     std     r2,STACK_FRAME_OVERHEAD-16(r1)
> > +
>
> Could you please double check, r4 is already 0 at this point
> IIUC. So the change might be a simple
>
> std r4,STACK_FRAME_OVERHEAD-16(r1)
>

r4 is not 0, sorry for the noise

Balbir

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-02-02  3:42     ` Balbir Singh
@ 2019-02-05 11:24       ` Michael Ellerman
  2019-02-06  2:48         ` Balbir Singh
  0 siblings, 1 reply; 21+ messages in thread
From: Michael Ellerman @ 2019-02-05 11:24 UTC (permalink / raw)
  To: Balbir Singh, Joe Lawrence
  Cc: Nicolai Stange, Jiri Kosina, linux-kernel, Torsten Duwe,
	Josh Poimboeuf, live-patching,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

Balbir Singh <bsingharora@gmail.com> writes:
> On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh <bsingharora@gmail.com> wrote:
>>
>> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
>> > From: Nicolai Stange <nstange@suse.de>
>> >
>> > [ ... commit message and Kconfig hunk snipped; quoted in full earlier in the thread ... ]
>> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> > index 435927f549c4..a2c168b395d2 100644
>> > --- a/arch/powerpc/kernel/entry_64.S
>> > +++ b/arch/powerpc/kernel/entry_64.S
>> > @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>> >       ld      r2,_NIP(r1)
>> >       mtspr   SPRN_SRR0,r2
>> >
>> > +     /*
>> > +      * Leaving a stale exception_marker on the stack can confuse
>> > +      * the reliable stack unwinder later on. Clear it.
>> > +      */
>> > +     li      r2,0
>> > +     std     r2,STACK_FRAME_OVERHEAD-16(r1)
>> > +
>>
>> Could you please double check, r4 is already 0 at this point
>> IIUC. So the change might be a simple
>>
>> std r4,STACK_FRAME_OVERHEAD-16(r1)
>>
>
> r4 is not 0, sorry for the noise

Isn't it?

cheers

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-02-05 11:24       ` Michael Ellerman
@ 2019-02-06  2:48         ` Balbir Singh
  2019-02-06  4:44           ` Michael Ellerman
  0 siblings, 1 reply; 21+ messages in thread
From: Balbir Singh @ 2019-02-06  2:48 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Joe Lawrence, Nicolai Stange, Jiri Kosina, linux-kernel,
	Torsten Duwe, Josh Poimboeuf, live-patching,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Balbir Singh <bsingharora@gmail.com> writes:
> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh <bsingharora@gmail.com> wrote:
> >>
> >> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> >> > From: Nicolai Stange <nstange@suse.de>
> >> >
> >> > [ ... commit message and Kconfig hunk snipped; quoted in full earlier in the thread ... ]
> >> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> >> > index 435927f549c4..a2c168b395d2 100644
> >> > --- a/arch/powerpc/kernel/entry_64.S
> >> > +++ b/arch/powerpc/kernel/entry_64.S
> >> > @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
> >> >       ld      r2,_NIP(r1)
> >> >       mtspr   SPRN_SRR0,r2
> >> >
> >> > +     /*
> >> > +      * Leaving a stale exception_marker on the stack can confuse
> >> > +      * the reliable stack unwinder later on. Clear it.
> >> > +      */
> >> > +     li      r2,0
> >> > +     std     r2,STACK_FRAME_OVERHEAD-16(r1)
> >> > +
> >>
> >> Could you please double check, r4 is already 0 at this point
> >> IIUC. So the change might be a simple
> >>
> >> std r4,STACK_FRAME_OVERHEAD-16(r1)
> >>
> >
> > r4 is not 0, sorry for the noise
>
> Isn't it?

It is. I was reading the wrong bits and confused myself; I had to
re-read mtmsrd to confirm that it does not modify RS, only the MSR. So
I guess we could reuse r4. Should I send a patch on top of this? I have
limited testing infrastructure at the moment; I could use qemu.
Balbir Singh.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-02-06  2:48         ` Balbir Singh
@ 2019-02-06  4:44           ` Michael Ellerman
  2019-02-06  8:45             ` Balbir Singh
  0 siblings, 1 reply; 21+ messages in thread
From: Michael Ellerman @ 2019-02-06  4:44 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Joe Lawrence, Nicolai Stange, Jiri Kosina, linux-kernel,
	Torsten Duwe, Josh Poimboeuf, live-patching,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

Balbir Singh <bsingharora@gmail.com> writes:
> On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>> Balbir Singh <bsingharora@gmail.com> writes:
>> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh <bsingharora@gmail.com> wrote:
>> >> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
>> >> > From: Nicolai Stange <nstange@suse.de>
>> >> >
>> >> > [ ... commit message and Kconfig hunk snipped; quoted in full earlier in the thread ... ]
>> >> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
>> >> > index 435927f549c4..a2c168b395d2 100644
>> >> > --- a/arch/powerpc/kernel/entry_64.S
>> >> > +++ b/arch/powerpc/kernel/entry_64.S
>> >> > @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>> >> >       ld      r2,_NIP(r1)
>> >> >       mtspr   SPRN_SRR0,r2
>> >> >
>> >> > +     /*
>> >> > +      * Leaving a stale exception_marker on the stack can confuse
>> >> > +      * the reliable stack unwinder later on. Clear it.
>> >> > +      */
>> >> > +     li      r2,0
>> >> > +     std     r2,STACK_FRAME_OVERHEAD-16(r1)
>> >> > +
>> >>
>> >> Could you please double check, r4 is already 0 at this point
>> >> IIUC. So the change might be a simple
>> >>
>> >> std r4,STACK_FRAME_OVERHEAD-16(r1)
>> >>
>> >
>> > r4 is not 0, sorry for the noise
>>
>> Isn't it?
>
> It is; I was reading the wrong bits and confused myself. I had to
> re-read mtmsrd to ensure it does not modify RS, just the MSR. So I guess
> we could reuse r4.

Yeah it's a bit hard to follow now that we have the split exit paths for
user vs kernel. r4 does get used on the return to userspace case, by
ACCOUNT_CPU_USER_EXIT(), but for the return to kernel it's still got
zero in it.

> Should I send a patch on top of this? I have limited testing
> infrastructure at the moment; I could use qemu.

I'm not sure. It's a bit fragile relying on the r4 value being zero; it
would be easy to accidentally reuse r4. Though it actually wouldn't
matter as long as r4 never has "regshere" in it.

In fact we could store any random value there; it just needs to not be
the exception marker. E.g. we could just stick the SRR0 value in there;
that should never alias with "regshere".

But I think maybe we're overthinking it; the cost of the li is pretty
minimal compared to everything else going on here, and this is only on
the return to kernel case, which is arguably not a super hot path.

cheers

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-02-06  4:44           ` Michael Ellerman
@ 2019-02-06  8:45             ` Balbir Singh
  0 siblings, 0 replies; 21+ messages in thread
From: Balbir Singh @ 2019-02-06  8:45 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Joe Lawrence, Nicolai Stange, Jiri Kosina, linux-kernel,
	Torsten Duwe, Josh Poimboeuf, live-patching,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

On Wed, Feb 6, 2019 at 3:44 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
>
> Balbir Singh <bsingharora@gmail.com> writes:
> > On Tue, Feb 5, 2019 at 10:24 PM Michael Ellerman <mpe@ellerman.id.au> wrote:
> >> Balbir Singh <bsingharora@gmail.com> writes:
> >> > On Sat, Feb 2, 2019 at 12:14 PM Balbir Singh <bsingharora@gmail.com> wrote:
> >> >> On Tue, Jan 22, 2019 at 10:57:21AM -0500, Joe Lawrence wrote:
> >> >> > From: Nicolai Stange <nstange@suse.de>
> >> >> >
> >> >> > The ppc64 specific implementation of the reliable stacktracer,
> >> >> > save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
> >> >> > trace" whenever it finds an exception frame on the stack. Stack frames
> >> >> > are classified as exception frames if the STACK_FRAME_REGS_MARKER magic,
> >> >> > as written by exception prologues, is found at a particular location.
> >> >> >
> >> >> > However, as observed by Joe Lawrence, it is possible in practice that
> >> >> > non-exception stack frames can alias with prior exception frames and thus,
> >> >> > that the reliable stacktracer can find a stale STACK_FRAME_REGS_MARKER on
> >> >> > the stack. This in turn falsely reports an unreliable stacktrace and
> >> >> > blocks any live patching transition from finishing. Said condition lasts
> >> >> > until the stack frame is overwritten/initialized by a function call or
> >> >> > other means.
> >> >> >
> >> >> > In principle, we could mitigate this by making the exception frame
> >> >> > classification condition in save_stack_trace_tsk_reliable() stronger:
> >> >> > in addition to testing for STACK_FRAME_REGS_MARKER, we could also take into
> >> >> > account that for all exceptions executing on the kernel stack
> >> >> > - their stack frames' backlink pointers always match what is saved
> >> >> >   in their pt_regs instance's ->gpr[1] slot and that
> >> >> > - their exception frame size equals STACK_INT_FRAME_SIZE, a value
> >> >> >   uncommonly large for non-exception frames.
> >> >> >
> >> >> > However, while these are currently true, relying on them would make the
> >> >> > reliable stacktrace implementation more sensitive towards future changes in
> >> >> > the exception entry code. Note that false negatives, i.e. not detecting
> >> >> > exception frames, would silently break the live patching consistency model.
> >> >> >
> >> >> > Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
> >> >> > rely on STACK_FRAME_REGS_MARKER as well.
> >> >> >
> >> >> > Make the exception exit code clear the on-stack STACK_FRAME_REGS_MARKER
> >> >> > for those exceptions running on the "normal" kernel stack and returning
> >> >> > to kernelspace: because the topmost frame is ignored by the reliable stack
> >> >> > tracer anyway, returns to userspace don't need to take care of clearing
> >> >> > the marker.
> >> >> >
> >> >> > Furthermore, as I don't have the ability to test this on Book 3E or
> >> >> > 32 bits, limit the change to Book 3S and 64 bits.
> >> >> >
> >> >> > Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
> >> >> > PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
> >> >> > on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
> >> >> > PPC_BOOK3S_64, there's no functional change here.
> >> >> >
> >> >> > Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
> >> >> > Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
> >> >> > Signed-off-by: Nicolai Stange <nstange@suse.de>
> >> >> > Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
> >> >> > ---
> >> >> >  arch/powerpc/Kconfig           | 2 +-
> >> >> >  arch/powerpc/kernel/entry_64.S | 7 +++++++
> >> >> >  2 files changed, 8 insertions(+), 1 deletion(-)
> >> >> >
> >> >> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> >> >> > index 2890d36eb531..73bf87b1d274 100644
> >> >> > --- a/arch/powerpc/Kconfig
> >> >> > +++ b/arch/powerpc/Kconfig
> >> >> > @@ -220,7 +220,7 @@ config PPC
> >> >> >       select HAVE_PERF_USER_STACK_DUMP
> >> >> >       select HAVE_RCU_TABLE_FREE              if SMP
> >> >> >       select HAVE_REGS_AND_STACK_ACCESS_API
> >> >> > -     select HAVE_RELIABLE_STACKTRACE         if PPC64 && CPU_LITTLE_ENDIAN
> >> >> > +     select HAVE_RELIABLE_STACKTRACE         if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
> >> >> >       select HAVE_SYSCALL_TRACEPOINTS
> >> >> >       select HAVE_VIRT_CPU_ACCOUNTING
> >> >> >       select HAVE_IRQ_TIME_ACCOUNTING
> >> >> > diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> >> >> > index 435927f549c4..a2c168b395d2 100644
> >> >> > --- a/arch/powerpc/kernel/entry_64.S
> >> >> > +++ b/arch/powerpc/kernel/entry_64.S
> >> >> > @@ -1002,6 +1002,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
> >> >> >       ld      r2,_NIP(r1)
> >> >> >       mtspr   SPRN_SRR0,r2
> >> >> >
> >> >> > +     /*
> >> >> > +      * Leaving a stale exception_marker on the stack can confuse
> >> >> > +      * the reliable stack unwinder later on. Clear it.
> >> >> > +      */
> >> >> > +     li      r2,0
> >> >> > +     std     r2,STACK_FRAME_OVERHEAD-16(r1)
> >> >> > +
> >> >>
> >> >> Could you please double check, r4 is already 0 at this point
> >> >> IIUC. So the change might be a simple
> >> >>
> >> >> std r4,STACK_FRAME_OVERHEAD-16(r1)
> >> >>
> >> >
> >> > r4 is not 0, sorry for the noise
> >>
> >> Isn't it?
> >
> > It is; I was reading the wrong bits and confused myself. I had to
> > re-read mtmsrd to ensure it does not modify RS, just the MSR. So I guess
> > we could reuse r4.
>
> Yeah it's a bit hard to follow now that we have the split exit paths for
> user vs kernel. r4 does get used on the return to userspace case, by
> ACCOUNT_CPU_USER_EXIT(), but for the return to kernel it's still got
> zero in it.
>
> > Should I send a patch on top of this? I have limited testing
> > infrastructure at the moment; I could use qemu.
>
> I'm not sure. It's a bit fragile relying on the r4 value being zero; it
> would be easy to accidentally reuse r4. Though it actually wouldn't
> matter as long as r4 never has "regshere" in it.
>

Yep, r4 will eventually get reloaded right below, so unless something
reuses it as a scratch register, it shouldn't matter.

> In fact we could store any random value there; it just needs to not be
> the exception marker. E.g. we could just stick the SRR0 value in there;
> that should never alias with "regshere".
>
> But I think maybe we're overthinking it; the cost of the li is pretty
> minimal compared to everything else going on here, and this is only on
> the return to kernel case, which is arguably not a super hot path.

Agreed

Cheers
Balbir Singh.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [1/4] powerpc/64s: Clear on-stack exception marker upon exception return
  2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
  2019-01-30 12:27   ` Michael Ellerman
  2019-02-02  1:14   ` Balbir Singh
@ 2019-02-08 13:02   ` Michael Ellerman
  2 siblings, 0 replies; 21+ messages in thread
From: Michael Ellerman @ 2019-02-08 13:02 UTC (permalink / raw)
  To: Joe Lawrence, linux-kernel, linuxppc-dev, live-patching
  Cc: Jiri Kosina, Torsten Duwe, Nicolai Stange, Josh Poimboeuf

On Tue, 2019-01-22 at 15:57:21 UTC, Joe Lawrence wrote:
> From: Nicolai Stange <nstange@suse.de>
> 
> The ppc64 specific implementation of the reliable stacktracer,
> save_stack_trace_tsk_reliable(), bails out and reports an "unreliable
> trace" whenever it finds an exception frame on the stack. Stack frames
> are classified as exception frames if the STACK_FRAME_REGS_MARKER magic,
> as written by exception prologues, is found at a particular location.
> 
> However, as observed by Joe Lawrence, it is possible in practice that
> non-exception stack frames can alias with prior exception frames and thus,
> that the reliable stacktracer can find a stale STACK_FRAME_REGS_MARKER on
> the stack. This in turn falsely reports an unreliable stacktrace and
> blocks any live patching transition from finishing. Said condition lasts
> until the stack frame is overwritten/initialized by a function call or
> other means.
> 
> In principle, we could mitigate this by making the exception frame
> classification condition in save_stack_trace_tsk_reliable() stronger:
> in addition to testing for STACK_FRAME_REGS_MARKER, we could also take into
> account that for all exceptions executing on the kernel stack
> - their stack frames' backlink pointers always match what is saved
>   in their pt_regs instance's ->gpr[1] slot and that
> - their exception frame size equals STACK_INT_FRAME_SIZE, a value
>   uncommonly large for non-exception frames.
> 
> However, while these are currently true, relying on them would make the
> reliable stacktrace implementation more sensitive towards future changes in
> the exception entry code. Note that false negatives, i.e. not detecting
> exception frames, would silently break the live patching consistency model.
> 
> Furthermore, certain other places (diagnostic stacktraces, perf, xmon)
> rely on STACK_FRAME_REGS_MARKER as well.
> 
> Make the exception exit code clear the on-stack STACK_FRAME_REGS_MARKER
> for those exceptions running on the "normal" kernel stack and returning
> to kernelspace: because the topmost frame is ignored by the reliable stack
> tracer anyway, returns to userspace don't need to take care of clearing
> the marker.
> 
> Furthermore, as I don't have the ability to test this on Book 3E or
> 32 bits, limit the change to Book 3S and 64 bits.
> 
> Finally, make the HAVE_RELIABLE_STACKTRACE Kconfig option depend on
> PPC_BOOK3S_64 for documentation purposes. Before this patch, it depended
> on PPC64 && CPU_LITTLE_ENDIAN and because CPU_LITTLE_ENDIAN implies
> PPC_BOOK3S_64, there's no functional change here.
> 
> Fixes: df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")
> Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
> Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/eddd0b332304d554ad6243942f87c2fc

cheers

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2019-02-08 14:04 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-22 15:57 [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Joe Lawrence
2019-01-22 15:57 ` [PATCH 1/4] powerpc/64s: Clear on-stack exception marker upon exception return Joe Lawrence
2019-01-30 12:27   ` Michael Ellerman
2019-01-30 17:18     ` Nicolai Stange
2019-01-31  5:46       ` Michael Ellerman
2019-02-02  1:14   ` Balbir Singh
2019-02-02  3:42     ` Balbir Singh
2019-02-05 11:24       ` Michael Ellerman
2019-02-06  2:48         ` Balbir Singh
2019-02-06  4:44           ` Michael Ellerman
2019-02-06  8:45             ` Balbir Singh
2019-02-08 13:02   ` [1/4] " Michael Ellerman
2019-01-22 15:57 ` [PATCH 2/4] powerpc/livepatch: relax reliable stack tracer checks for first-frame Joe Lawrence
2019-01-22 15:57 ` [PATCH 3/4] powerpc/livepatch: small cleanups in save_stack_trace_tsk_reliable() Joe Lawrence
2019-01-22 15:57 ` [PATCH 4/4] powerpc/livepatch: return -ERRNO values " Joe Lawrence
2019-02-02  0:59   ` Balbir Singh
2019-01-29 21:10 ` [PATCH 0/4] powerpc/livepatch: reliable stack unwinder fixes Josh Poimboeuf
2019-01-29 23:50 ` Jiri Kosina
2019-01-30 12:22   ` Michael Ellerman
2019-01-30 13:28     ` Jiri Kosina
2019-01-31  5:46       ` Michael Ellerman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).