[PATCH 0/4] arm64: ensure CPUs are quiescent before patching
From: Mark Rutland @ 2021-12-03 10:47 UTC
  To: linux-arm-kernel
  Cc: andre.przywara, ardb, catalin.marinas, james.morse, joey.gouly,
	mark.rutland, suzuki.poulose, will

On arm64, certain instructions cannot be patched while they are being
concurrently executed, and in these cases we use stop_machine() to
ensure that while one CPU is patching instructions all other CPUs are in
a quiescent state. We have two distinct sequences for this: one used for
boot-time patching of alternatives, and one used for runtime patching
(e.g. kprobes).
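
For reference, both sequences follow roughly the same shape. Simplified
(with the patching itself elided), the boot-time sequence in
arch/arm64/kernel/alternative.c looks something like:

static int __apply_alternatives_multi_stop(void *unused)
{
	/* We always have a CPU 0 at this point (__init) */
	if (smp_processor_id()) {
		/* Secondary CPUs wait for patching to complete ... */
		while (!all_alternatives_applied)
			cpu_relax();
		isb();
	} else {
		/* ... but CPU 0 starts patching immediately ... */

		/* <apply the alternatives> */

		/* ... and then releases the secondaries. */
		all_alternatives_applied = 1;
	}

	return 0;
}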

Both sequences wait for patching to be complete before CPUs exit the
quiescent state, but we don't wait for CPUs to be quiescent *before* we
start patching, and so we may patch code which is still being executed
(e.g. portions of stop_machine() itself).
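
The shape of the fix is to have each CPU announce its arrival, and to
make the patching CPU wait for all the others before it writes
anything. As a rough sketch (names illustrative, not the exact code in
these patches):

static atomic_t cpu_count = ATOMIC_INIT(0);

static int patch_fn(void *arg)
{
	/* The *last* CPU to arrive does the patching ... */
	if (atomic_inc_return(&cpu_count) == num_online_cpus()) {
		/* ... so all other CPUs are spinning below by now. */

		/* <patch the text> */

		/* Release the other CPUs with an additional increment. */
		atomic_inc(&cpu_count);
	} else {
		while (atomic_read(&cpu_count) <= num_online_cpus())
			cpu_relax();
		isb();
	}

	return 0;
}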

These patches fix this problem by updating the sequences to wait for
CPUs to become quiescent before starting patches. The first two patches
are potentially backportable fixes for the individual sequences, and the
third patch unifies them behind an arm64-specific patch_machine() helper.
The last patch prevents CPUs from taking asynchronous exceptions (and
hence leaving the quiescent state) while patching is in progress; this
masks DAIF for now, and I'm not sure exactly how to handle SDEI.
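
For illustration, the unified helper has roughly the following shape
(the names here are placeholders rather than the exact interface from
patch 3):

typedef int (*patch_machine_fn_t)(void *arg);

/*
 * Run @fn on one CPU under stop_machine(), with the guarantee that all
 * other online CPUs have entered a quiescent spin (with asynchronous
 * exceptions masked) before @fn is called, and remain there until @fn
 * returns.
 */
int patch_machine(patch_machine_fn_t fn, void *arg);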

The architecture documentation is a little vague on how to ensure
completion of prior execution (i.e. the point after which patching from
another CPU can no longer affect that execution and cause UNPREDICTABLE
behaviour). For the
moment I'm assuming that an atomic store cannot become visible until all
prior execution has completed, but I suspect that we *might* need to add
barriers into patch_machine() prior to signalling quiescence.
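
In the sketch above, that would mean something like the following
before each CPU increments the counter (whether these barriers are
actually required is exactly the open question):

	/*
	 * Ensure all prior execution on this CPU has completed before
	 * the increment below can become visible to the patching CPU.
	 * It's not clear to me whether the architecture requires this.
	 */
	dsb(ish);
	isb();

	if (atomic_inc_return(&cpu_count) == num_online_cpus()) {
		...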

This series does not attempt to address the more general problem that
our patching sequences may themselves use directly-patchable or
instrumentable code; I intend to address that with subsequent patches,
as it will require a more substantial rework (e.g. of the insn code).

Thanks,
Mark.

Mark Rutland (4):
  arm64: alternative: wait for other CPUs before patching
  arm64: insn: wait for other CPUs before patching
  arm64: patching: unify stop_machine() patch synchronization
  arm64: patching: mask exceptions in patch_machine()

 arch/arm64/include/asm/patching.h |  4 ++
 arch/arm64/kernel/alternative.c   | 33 +++--------
 arch/arm64/kernel/patching.c      | 94 +++++++++++++++++++++++++------
 3 files changed, 89 insertions(+), 42 deletions(-)

-- 
2.30.2


