* [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Hanjun Guo @ 2016-02-02 4:06 UTC (permalink / raw)
To: gregkh, stable; +Cc: will.deacon
From: Hanjun Guo <hanjun.guo@linaro.org>
Hi Greg,
Here are 3 candidate patches for stable linux-4.1.y. Patch 2
already Cc'd stable in its changelog, but it needs patch 1 (which
is also a bugfix) in order to apply cleanly; the last one is worth
taking as a bugfix too, I think. Please consider merging them :)
Thanks
Hanjun
Josh Stone (1):
arm64: fix missing syscall trace exit
Will Deacon (2):
arm64: entry: always restore x0 from the stack on syscall return
arm64: mm: ensure patched kernel text is fetched from PoU
arch/arm64/kernel/entry.S | 22 +++++++++++-----------
arch/arm64/kernel/head.S | 8 ++++++++
arch/arm64/kernel/sleep.S | 8 ++++++++
arch/arm64/mm/proc.S | 1 -
4 files changed, 27 insertions(+), 12 deletions(-)
--
1.9.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/3] arm64: fix missing syscall trace exit
From: Hanjun Guo @ 2016-02-02 4:06 UTC (permalink / raw)
To: gregkh, stable; +Cc: will.deacon
From: Josh Stone <jistone@redhat.com>
commit 04d7e098f541769721d7511d56aea4b976fd29fd upstream.
If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
the fast path. It's then possible to have TIF_SYSCALL_TRACE added in
the middle of the syscall, but ret_fast_syscall doesn't check this flag
again. This causes a ptrace syscall-exit-stop to be missed.
For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
Now the completion of the fork should have a syscall-exit-stop.
Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
fast exit path. Do the same on arm64.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Josh Stone <jistone@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
---
arch/arm64/kernel/entry.S | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index bddd04d..6657a09 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -609,11 +609,16 @@ ENDPROC(cpu_switch_to)
*/
ret_fast_syscall:
disable_irq // disable interrupts
- ldr x1, [tsk, #TI_FLAGS]
+ ldr x1, [tsk, #TI_FLAGS] // re-check for syscall tracing
+ and x2, x1, #_TIF_SYSCALL_WORK
+ cbnz x2, ret_fast_syscall_trace
and x2, x1, #_TIF_WORK_MASK
cbnz x2, fast_work_pending
enable_step_tsk x1, x2
kernel_exit 0, ret = 1
+ret_fast_syscall_trace:
+ enable_irq // enable interrupts
+ b __sys_trace_return
/*
* Ok, we need to do extra processing, enter the slow path.
--
1.9.1
* [PATCH 2/3] arm64: entry: always restore x0 from the stack on syscall return
From: Hanjun Guo @ 2016-02-02 4:06 UTC (permalink / raw)
To: gregkh, stable; +Cc: will.deacon
From: Will Deacon <will.deacon@arm.com>
commit 412fcb6cebd758d080cacd5a41a0cbc656ea5fce upstream.
We have a micro-optimisation on the fast syscall return path where we
take care to keep x0 live with the return value from the syscall so that
we can avoid restoring it from the stack. The benefit of doing this is
fairly suspect, since we will be restoring x1 from the stack anyway
(which lives adjacent in the pt_regs structure) and the only additional
cost is saving x0 back to pt_regs after the syscall handler, which could
be seen as a poor man's prefetch.
More importantly, this causes issues with the context tracking code.
The ct_user_enter macro ends up branching into C code, which is free to
use x0 as a scratch register and consequently leads to us returning junk
back to userspace as the syscall return value. Rather than special case
the context-tracking code, this patch removes the questionable
optimisation entirely.
Cc: <stable@vger.kernel.org>
Cc: Larry Bassel <larry.bassel@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
---
arch/arm64/kernel/entry.S | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 6657a09..3236b3e 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -116,7 +116,7 @@
*/
.endm
- .macro kernel_exit, el, ret = 0
+ .macro kernel_exit, el
ldp x21, x22, [sp, #S_PC] // load ELR, SPSR
.if \el == 0
ct_user_enter
@@ -143,11 +143,7 @@
.endif
msr elr_el1, x21 // set up the return data
msr spsr_el1, x22
- .if \ret
- ldr x1, [sp, #S_X1] // preserve x0 (syscall return)
- .else
ldp x0, x1, [sp, #16 * 0]
- .endif
ldp x2, x3, [sp, #16 * 1]
ldp x4, x5, [sp, #16 * 2]
ldp x6, x7, [sp, #16 * 3]
@@ -609,22 +605,21 @@ ENDPROC(cpu_switch_to)
*/
ret_fast_syscall:
disable_irq // disable interrupts
+ str x0, [sp, #S_X0] // returned x0
ldr x1, [tsk, #TI_FLAGS] // re-check for syscall tracing
and x2, x1, #_TIF_SYSCALL_WORK
cbnz x2, ret_fast_syscall_trace
and x2, x1, #_TIF_WORK_MASK
- cbnz x2, fast_work_pending
+ cbnz x2, work_pending
enable_step_tsk x1, x2
- kernel_exit 0, ret = 1
+ kernel_exit 0
ret_fast_syscall_trace:
enable_irq // enable interrupts
- b __sys_trace_return
+ b __sys_trace_return_skipped // we already saved x0
/*
* Ok, we need to do extra processing, enter the slow path.
*/
-fast_work_pending:
- str x0, [sp, #S_X0] // returned x0
work_pending:
tbnz x1, #TIF_NEED_RESCHED, work_resched
/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
@@ -648,7 +643,7 @@ ret_to_user:
cbnz x2, work_pending
enable_step_tsk x1, x2
no_work_pending:
- kernel_exit 0, ret = 0
+ kernel_exit 0
ENDPROC(ret_to_user)
/*
--
1.9.1
* [PATCH 3/3] arm64: mm: ensure patched kernel text is fetched from PoU
From: Hanjun Guo @ 2016-02-02 4:06 UTC (permalink / raw)
To: gregkh, stable; +Cc: will.deacon
From: Will Deacon <will.deacon@arm.com>
The arm64 booting document requires that the bootloader has cleaned the
kernel image to the PoC. However, when a CPU re-enters the kernel due to
either a CPU hotplug "on" event or resuming from a low-power state (e.g.
cpuidle), the kernel text may in-fact be dirty at the PoU due to things
like alternative patching or even module loading.
Thanks to I-cache speculation with the MMU off, stale instructions could
be fetched prior to enabling the MMU, potentially leading to crashes
when executing regions of code that have been modified at runtime.
This patch addresses the issue by ensuring that the local I-cache is
invalidated immediately after a CPU has enabled its MMU but before
jumping out of the identity mapping. Any stale instructions fetched from
the PoC will then be discarded and refetched correctly from the PoU.
Patching kernel text executed prior to the MMU being enabled is
prohibited, so the early entry code will always be clean.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
---
arch/arm64/kernel/head.S | 8 ++++++++
arch/arm64/kernel/sleep.S | 8 ++++++++
arch/arm64/mm/proc.S | 1 -
3 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 36aa31f..af6e4e8 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -682,5 +682,13 @@ __enable_mmu:
isb
msr sctlr_el1, x0
isb
+ /*
+ * Invalidate the local I-cache so that any instructions fetched
+ * speculatively from the PoC are discarded, since they may have
+ * been dynamically patched at the PoU.
+ */
+ ic iallu
+ dsb nsh
+ isb
br x27
ENDPROC(__enable_mmu)
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index ede186c..1c6969b 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -134,6 +134,14 @@ ENTRY(cpu_resume_mmu)
ldr x3, =cpu_resume_after_mmu
msr sctlr_el1, x0 // restore sctlr_el1
isb
+ /*
+ * Invalidate the local I-cache so that any instructions fetched
+ * speculatively from the PoC are discarded, since they may have
+ * been dynamically patched at the PoU.
+ */
+ ic iallu
+ dsb nsh
+ isb
br x3 // global jump to virtual address
ENDPROC(cpu_resume_mmu)
cpu_resume_after_mmu:
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index cdd754e..ee18bbc 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -196,7 +196,6 @@ ENDPROC(cpu_do_switch_mm)
* value of the SCTLR_EL1 register.
*/
ENTRY(__cpu_setup)
- ic iallu // I+BTB cache invalidate
tlbi vmalle1is // invalidate I + D TLBs
dsb ish
--
1.9.1
* Re: [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Greg KH @ 2016-02-14 21:00 UTC (permalink / raw)
To: Hanjun Guo; +Cc: stable, will.deacon
On Tue, Feb 02, 2016 at 12:06:44PM +0800, Hanjun Guo wrote:
> From: Hanjun Guo <hanjun.guo@linaro.org>
>
> Hi Greg,
I'm no longer dealing with 4.1 stable things, sorry.
* Re: [PATCH 3/3] arm64: mm: ensure patched kernel text is fetched from PoU
From: Greg KH @ 2016-02-14 21:00 UTC (permalink / raw)
To: Hanjun Guo; +Cc: stable, will.deacon
On Tue, Feb 02, 2016 at 12:06:47PM +0800, Hanjun Guo wrote:
> From: Will Deacon <will.deacon@arm.com>
>
> The arm64 booting document requires that the bootloader has cleaned the
> kernel image to the PoC. However, when a CPU re-enters the kernel due to
> either a CPU hotplug "on" event or resuming from a low-power state (e.g.
> cpuidle), the kernel text may in-fact be dirty at the PoU due to things
> like alternative patching or even module loading.
>
> Thanks to I-cache speculation with the MMU off, stale instructions could
> be fetched prior to enabling the MMU, potentially leading to crashes
> when executing regions of code that have been modified at runtime.
>
> This patch addresses the issue by ensuring that the local I-cache is
> invalidated immediately after a CPU has enabled its MMU but before
> jumping out of the identity mapping. Any stale instructions fetched from
> the PoC will then be discarded and refetched correctly from the PoU.
> Patching kernel text executed prior to the MMU being enabled is
> prohibited, so the early entry code will always be clean.
>
> Reviewed-by: Mark Rutland <mark.rutland@arm.com>
> Tested-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
> ---
> arch/arm64/kernel/head.S | 8 ++++++++
> arch/arm64/kernel/sleep.S | 8 ++++++++
> arch/arm64/mm/proc.S | 1 -
> 3 files changed, 16 insertions(+), 1 deletion(-)
You forgot to say what the upstream git commit id is for this :(
* Re: [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Hanjun Guo @ 2016-02-15 1:35 UTC (permalink / raw)
To: Greg KH; +Cc: stable, will.deacon, sasha.levin
On 2016/2/15 5:00, Greg KH wrote:
> On Tue, Feb 02, 2016 at 12:06:44PM +0800, Hanjun Guo wrote:
>> From: Hanjun Guo <hanjun.guo@linaro.org>
>>
>> Hi Greg,
> I'm no longer dealing with 4.1 stable things, sorry.
>
Sorry, I didn't notice this when I was sending out this patch set,
should I resend this to Sasha (with Sasha in the cc list)?
Thanks
Hanjun
* Re: [PATCH 3/3] arm64: mm: ensure patched kernel text is fetched from PoU
From: Hanjun Guo @ 2016-02-15 1:43 UTC (permalink / raw)
To: Greg KH; +Cc: stable, will.deacon, sasha.levin
On 2016/2/15 5:00, Greg KH wrote:
> On Tue, Feb 02, 2016 at 12:06:47PM +0800, Hanjun Guo wrote:
>> From: Will Deacon <will.deacon@arm.com>
>>
>> The arm64 booting document requires that the bootloader has cleaned the
>> kernel image to the PoC. However, when a CPU re-enters the kernel due to
>> either a CPU hotplug "on" event or resuming from a low-power state (e.g.
>> cpuidle), the kernel text may in-fact be dirty at the PoU due to things
>> like alternative patching or even module loading.
>>
>> Thanks to I-cache speculation with the MMU off, stale instructions could
>> be fetched prior to enabling the MMU, potentially leading to crashes
>> when executing regions of code that have been modified at runtime.
>>
>> This patch addresses the issue by ensuring that the local I-cache is
>> invalidated immediately after a CPU has enabled its MMU but before
>> jumping out of the identity mapping. Any stale instructions fetched from
>> the PoC will then be discarded and refetched correctly from the PoU.
>> Patching kernel text executed prior to the MMU being enabled is
>> prohibited, so the early entry code will always be clean.
>>
>> Reviewed-by: Mark Rutland <mark.rutland@arm.com>
>> Tested-by: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Will Deacon <will.deacon@arm.com>
>> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
>> ---
>> arch/arm64/kernel/head.S | 8 ++++++++
>> arch/arm64/kernel/sleep.S | 8 ++++++++
>> arch/arm64/mm/proc.S | 1 -
>> 3 files changed, 16 insertions(+), 1 deletion(-)
> You forgot to say what the upstream git commit id is for this :(
>
Sorry, it's 8ec41987436d566f7c4559c6871738b869f7ef07.
Thanks
Hanjun
* Re: [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Greg KH @ 2016-02-15 1:44 UTC (permalink / raw)
To: Hanjun Guo; +Cc: stable, will.deacon, sasha.levin
On Mon, Feb 15, 2016 at 09:35:40AM +0800, Hanjun Guo wrote:
> On 2016/2/15 5:00, Greg KH wrote:
> > On Tue, Feb 02, 2016 at 12:06:44PM +0800, Hanjun Guo wrote:
> >> From: Hanjun Guo <hanjun.guo@linaro.org>
> >>
> >> Hi Greg,
> > I'm no longer dealing with 4.1 stable things, sorry.
> >
>
> Sorry, I didn't notice this when I was sending out this patch set,
> should I resend this to Sasha (with Sasha in the cc list)?
Don't know, that's up to Sasha :)
* Re: [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Hanjun Guo @ 2016-02-16 5:59 UTC (permalink / raw)
To: Hanjun Guo, sasha.levin; +Cc: Greg KH, stable, will.deacon
Hi Sasha,
On 2016/2/15 9:44, Greg KH wrote:
> On Mon, Feb 15, 2016 at 09:35:40AM +0800, Hanjun Guo wrote:
>> On 2016/2/15 5:00, Greg KH wrote:
>>> On Tue, Feb 02, 2016 at 12:06:44PM +0800, Hanjun Guo wrote:
>>>> From: Hanjun Guo <hanjun.guo@linaro.org>
>>>>
>>>> Hi Greg,
>>> I'm no longer dealing with 4.1 stable things, sorry.
>>>
>>
>> Sorry, I didn't notice this when I was sending out this patch set,
>> should I resend this to Sasha (with Sasha in the cc list)?
>
> Don't know, that's up to Sasha :)
What's your suggestion here?
Thanks
Hanjun
* Re: [PATCH 0/3] Candidate ARM64 stable patches for linux-4.1.y
From: Sasha Levin @ 2016-02-16 6:09 UTC (permalink / raw)
To: Hanjun Guo, Hanjun Guo; +Cc: Greg KH, stable, will.deacon
On 02/16/2016 12:59 AM, Hanjun Guo wrote:
> Hi Sasha,
>
> On 2016/2/15 9:44, Greg KH wrote:
>> On Mon, Feb 15, 2016 at 09:35:40AM +0800, Hanjun Guo wrote:
>>> On 2016/2/15 5:00, Greg KH wrote:
>>>> On Tue, Feb 02, 2016 at 12:06:44PM +0800, Hanjun Guo wrote:
>>>>> From: Hanjun Guo <hanjun.guo@linaro.org>
>>>>>
>>>>> Hi Greg,
>>>> I'm no longer dealing with 4.1 stable things, sorry.
>>>>
>>>
>>> Sorry, I didn't notice this when I was sending out this patch set,
>>> should I resend this to Sasha (with Sasha in the cc list)?
>>
>> Don't know, that's up to Sasha :)
>
> What's your suggestion here?
I'll grab them, thanks.
Thanks,
Sasha