* [PATCH v4 0/3] arm64 live patching
@ 2018-10-26 14:20 Torsten Duwe
From: Torsten Duwe @ 2018-10-26 14:20 UTC (permalink / raw)
  To: Will Deacon, Catalin Marinas, Julien Thierry, Steven Rostedt,
	Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel, Arnd Bergmann,
	AKASHI Takahiro
  Cc: linux-arm-kernel, linux-kernel, live-patching

Hi again!

V4 should include all your requested changes. Since only Julien
commented "OK" on the reliable stacktrace part, I finished it on my
own. This set now passes the relevant tests in Libor's test suite, so
livepatching the kernel proper does work.

Remember to apply Jessica's addendum in order to livepatch functions
that live in modules.

[Changes from v3]:

* Compiler support for -fpatchable-function-entry now automagically
  selects _WITH_REGS when DYNAMIC_FTRACE is switched on. Consequently,
  CONFIG_DYNAMIC_FTRACE_WITH_REGS is the only preprocessor symbol
  set by this feature (as asked for by Takahiro in v2).

* The dynamic ftrace caller creates 2 stack frames, as suggested by Ard:
  first a "preliminary" for the callee, and another for ftrace_caller
  itself. This gives the stack layout a really clean look.

* Because the ftrace-clobbered x9 is now saved immediately in the
  "callee" frame, it can be used to base pt_regs access. Much prettier now.

* The dynamic replacement insn "mov x9, lr" is generated using the common
  insn framework; a (hopefully meaningful) macro name abbreviates it.

* The use_ftrace_trampoline() helper introduced in v3 got renamed
  and streamlined with a reference variable, both as pointed out by Mark.

* Superfluous barriers during trace application removed.

* #ifdef replaced by IS_ENABLED() where possible.
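
  For reference, a minimal sketch of the pattern meant here; foo() is a
  made-up stand-in for whatever code the #ifdef used to guard:

	#include <linux/kconfig.h>	/* IS_ENABLED() */

	static void foo(void) { }	/* hypothetical placeholder */

	static void example(void)
	{
	#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS		/* before */
		foo();
	#endif
		if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS))	/* after */
			foo();
	}

  The IS_ENABLED() form keeps the disabled branch visible to the compiler
  (it is still parsed and type-checked) while still being optimised away.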

* Made stuff compile with gcc7 or older, too ;-)

* Fix my misguided .text.ftrace_regs_trampoline section assumption;
  the second trampoline goes into .text.ftrace_trampoline as well.

* Properly detect the bottom of kthread stacks, by setting a global
  symbol at the address their LR points to and comparing against it.
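
  A rough C sketch of that idea (the symbol name here is hypothetical,
  not the one used in the patch):

	#include <linux/types.h>
	#include <asm/stacktrace.h>	/* struct stackframe */

	/* Hypothetical label placed in the kthread startup path, at the
	 * address the initial frame's LR points to. */
	extern char kthread_stack_bottom_marker[];

	static bool at_kthread_stack_bottom(struct stackframe *frame)
	{
		return frame->pc == (unsigned long)kthread_stack_bottom_marker;
	}

  The unwinder can then treat reaching this marker as the legitimate end
  of a kthread's stack when judging a stacktrace reliable.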

* Rewrote many comments to hopefully clear things up.

[Changes from v2]:

* ifeq($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y) instead of ifdef

* "fix" commit 06aeaaeabf69da4. (new patch 1)
  Made DYNAMIC_FTRACE_WITH_REGS a real choice. The current situation
  would be that a linux-4.20 kernel on arm64 should be built with
  gcc >= 8; as in this case, as well as all other archs, the "default y"
  works. Only kernels >= 4.20, arm64, gcc < 8, must change this to "n"
  in order to not be stopped by the Makefile $(error) from patch 2/4.
  You'll then fall back to the DYNAMIC_FTRACE, if selected, like before.

* use some S_X* constants to refer to offsets into pt_regs in assembly.
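
  For reference, these constants are generated by asm-offsets; roughly
  (an abridged illustration, not a quote from the tree):

	/* arch/arm64/kernel/asm-offsets.c (excerpt, illustrative) */
	#include <linux/kbuild.h>	/* DEFINE() */
	#include <linux/stddef.h>	/* offsetof() */
	#include <asm/ptrace.h>		/* struct pt_regs */

	int main(void)
	{
		DEFINE(S_X0,		offsetof(struct pt_regs, regs[0]));
		DEFINE(S_LR,		offsetof(struct pt_regs, regs[30]));
		DEFINE(S_PC,		offsetof(struct pt_regs, pc));
		DEFINE(S_FRAME_SIZE,	sizeof(struct pt_regs));
		return 0;
	}

  so the assembly can say "#S_X0" and friends instead of hard-coding
  numeric pt_regs offsets.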

* have the compiler/assembler generate the mov x9,x30 instruction that
  saves LR at compile time, rather than generating it repeatedly at runtime.

* flip the ftrace_regs_caller stack frame so that it is no longer
  upside down, as Ard remarked. This change broke the graph caller somehow.

* extend handling of the module arch-dependent ftrace trampoline with
  a companion "regs" version.
  
* clear the _TIF_PATCH_PENDING on do_notify_resume()

* took care of arch/arm64/kernel/time.c when changing stack unwinder
  semantics

[Changes from v1]:

* Missing compiler support is now a Makefile error, instead
  of a warning. This keeps the compile log shorter and thus makes
  the problem easier to spot.

* A separate ftrace_regs_caller. Only that one will write out
  a complete pt_regs, for efficiency.

* Replace the use of X19 with X28 to remember the old PC during
  live patch detection, as only that register is now saved and restored
  for non-regs ftrace.

* CONFIG_DYNAMIC_FTRACE_WITH_REGS and CONFIG_DYNAMIC_FTRACE_WITH_REGS
  are currently synonymous on arm64, but differentiate better for
  the future when this is no longer the case.

* Clean up "old"/"new" insn value setting vs. #ifdefs.

* #define an INSN_MOV_X9_X30 with the suggested aarch64_insn_gen call
  and use that instead of an immediate hex value.
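
  A sketch of what that macro can look like: MOV Xd, Xm is an alias of
  ORR Xd, XZR, Xm, so the existing insn framework can encode it (the
  macro name and exact arguments here are illustrative, not quoted from
  the patch):

	#include <asm/insn.h>

	/* "mov x9, x30", encoded via the insn framework rather than as a
	 * hard-coded hex constant; MOV (register) == ORR Xd, XZR, Xm. */
	#define INSN_MOV_X9_X30						   \
		aarch64_insn_gen_logical_shifted_reg(AARCH64_INSN_REG_9,  \
						     AARCH64_INSN_REG_ZR, \
						     AARCH64_INSN_REG_LR, \
						     0,			   \
						     AARCH64_INSN_VARIANT_64BIT, \
						     AARCH64_INSN_LOGIC_ORR)

  The resulting u32 can then be handed to the usual text patching helpers
  when tracing is switched on or off for a function.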

	Torsten



Thread overview: 17+ messages
2018-10-26 14:20 [PATCH v4 0/3] arm64 live patching Torsten Duwe
2018-10-26 14:21 ` [PATCH v4 1/3] arm64: implement ftrace with regs Torsten Duwe
2018-10-31 12:10   ` Mark Rutland
2018-10-31 13:19     ` Jiri Kosina
2018-10-31 14:18       ` Mark Rutland
2018-10-31 17:58         ` Torsten Duwe
2018-11-08 12:12   ` Ard Biesheuvel
2018-11-12 11:51     ` Torsten Duwe
2018-10-26 14:21 ` [PATCH v4 2/3] arm64: implement live patching Torsten Duwe
2018-11-06 16:49   ` Miroslav Benes
2018-11-08 12:42   ` Ard Biesheuvel
2018-11-12 11:01     ` Torsten Duwe
2018-11-12 11:06       ` Ard Biesheuvel
2018-10-26 14:21 ` [PATCH v4 3/3] arm64: reliable stacktraces Torsten Duwe
2018-10-26 15:37   ` Josh Poimboeuf
2018-10-29  9:28     ` Mark Rutland
2018-10-29 15:42       ` Josh Poimboeuf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).