* [PATCH 0/8] arm64/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS
@ 2023-01-09 13:58 Mark Rutland
From: Mark Rutland @ 2023-01-09 13:58 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, lenb, linux-acpi, linux-kernel, mark.rutland,
	mhiramat, ndesaulniers, ojeda, peterz, rafael.j.wysocki, revest,
	robert.moore, rostedt, will

This series adds a new DYNAMIC_FTRACE_WITH_CALL_OPS mechanism, and
enables support for this on arm64. This significantly reduces the
overhead of tracing when a callsite/tracee has a single associated
tracer, avoids a number of issues that make it undesirable and
infeasible to use dynamically-allocated trampolines (e.g. branch range
limitations), and makes it possible to implement support for
DYNAMIC_FTRACE_WITH_DIRECT_CALLS in future.

The main idea is to give each ftrace callsite an associated pointer to
an ftrace_ops. The architecture's ftrace_caller trampoline can recover
the ops pointer and invoke ops->func from this without needing to use
ftrace_ops_list_func, which has to iterate through all registered ops.
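
To illustrate the difference in dispatch (a minimal standalone sketch
with made-up names such as ops_example, list_func_caller, and
call_ops_caller -- not the actual kernel code), the per-callsite ops
pointer lets the trampoline make one indirect call rather than walking
every registered ops:

  /* Illustrative sketch only; names and types are not the kernel's. */
  struct ops_example {
          void (*func)(unsigned long ip, unsigned long parent_ip);
          struct ops_example *next;
  };

  /* List-based dispatch: every registered ops is visited on each call. */
  static void list_func_caller(unsigned long ip, unsigned long parent_ip,
                               struct ops_example *list)
  {
          for (struct ops_example *ops = list; ops; ops = ops->next)
                  ops->func(ip, parent_ip);
  }

  /*
   * CALL_OPS-style dispatch: the trampoline recovers the ops pointer
   * stored next to the callsite and calls its func directly.
   */
  static void call_ops_caller(unsigned long ip, unsigned long parent_ip,
                              struct ops_example *callsite_ops)
  {
          callsite_ops->func(ip, parent_ip);
  }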

To do this, we use -fpatchable-function-entry=M,N, where N NOPs are
placed before the function entry point. On arm64 NOPs are always 4
bytes, so by allocating 2 per-function NOPs, we have enough space to
place a 64-bit value. So that we can manipulate the pointer atomically,
we need to align instrumented functions to at least 8 bytes.
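
As a purely illustrative example of what this flag means at the
function level (the function name traced_example is made up, and the
4,2 values and aligned(8) attribute are assumptions for the sketch, not
necessarily the exact build options used by this series), the
equivalent GCC/Clang function attribute reserves the same NOPs:

  /*
   * Sketch only: patchable_function_entry(4, 2) is the per-function
   * equivalent of building with -fpatchable-function-entry=4,2, i.e.
   * 4 NOPs in total, 2 of which are placed before the entry point.
   * On arm64 those 2 pre-entry NOPs are 8 bytes -- enough room for a
   * 64-bit ftrace_ops pointer -- and aligned(8) stands in for the
   * 8-byte function alignment needed to patch that pointer atomically.
   */
  __attribute__((patchable_function_entry(4, 2), aligned(8)))
  void traced_example(void)
  {
          /* function body */
  }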

The first three patches enable this function alignment, requiring
changes to the ACPICA Makefile, and working around cases where GCC drops
alignment.

The final four patches implement support for arm64. As noted in the
final patch, this results in a significant reduction in overhead:

  Before this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant || (ns)        | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,583 |         0.95 |           -
         0 |          1 ||      93,709 |         0.94 |           -
         0 |          2 ||      93,666 |         0.94 |           -
         0 |         10 ||      93,709 |         0.94 |           -
         0 |        100 ||      93,792 |         0.94 |           -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||   6,467,833 |        64.68 |       63.73
         1 |          2 ||   7,509,708 |        75.10 |       74.15
         1 |         10 ||  23,786,792 |       237.87 |      236.92
         1 |        100 || 106,432,500 |     1,064.43 |     1063.38
  ---------+------------++-------------+--------------+------------
         1 |          0 ||   1,431,875 |        14.32 |       13.37
         2 |          0 ||   6,456,334 |        64.56 |       63.62
        10 |          0 ||  22,717,000 |       227.17 |      226.22
       100 |          0 || 103,293,667 |      1032.94 |     1031.99
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case
  with 0 relevant tracers and 0 irrelevant tracers. For example, with 1
  relevant tracer and 0 irrelevant tracers the per-call overhead is
  14.32 - 0.95 = 13.37 ns.

  After this patch:

  Number of tracers     || Total time  | Per-call average time (ns)
  Relevant | Irrelevant || (ns)        | Total        | Overhead
  =========+============++=============+==============+============
         0 |          0 ||      94,541 |         0.95 |           -
         0 |          1 ||      93,666 |         0.94 |           -
         0 |          2 ||      93,709 |         0.94 |           -
         0 |         10 ||      93,667 |         0.94 |           -
         0 |        100 ||      93,792 |         0.94 |           -
  ---------+------------++-------------+--------------+------------
         1 |          1 ||     281,000 |         2.81 |        1.86
         1 |          2 ||     281,042 |         2.81 |        1.87
         1 |         10 ||     280,958 |         2.81 |        1.86
         1 |        100 ||     281,250 |         2.81 |        1.87
  ---------+------------++-------------+--------------+------------
         1 |          0 ||     280,959 |         2.81 |        1.86
         2 |          0 ||   6,502,708 |        65.03 |       64.08
        10 |          0 ||  18,681,209 |       186.81 |      185.87
       100 |          0 || 103,550,458 |     1,035.50 |     1034.56
  ---------+------------++-------------+--------------+------------

  Note: per-call overhead is estimated relative to the baseline case
  with 0 relevant tracers and 0 irrelevant tracers.

Thanks,
Mark.

Mark Rutland (8):
  Compiler attributes: GCC function alignment workarounds
  ACPI: Don't build ACPICA with '-Os'
  arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
  ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS
  arm64: insn: Add helpers for BTI
  arm64: patching: Add aarch64_insn_write_literal_u64()
  arm64: ftrace: Update stale comment
  arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS

 arch/arm64/Kconfig                  |   3 +
 arch/arm64/Makefile                 |   5 +-
 arch/arm64/include/asm/ftrace.h     |  15 +--
 arch/arm64/include/asm/insn.h       |   1 +
 arch/arm64/include/asm/linkage.h    |  10 +-
 arch/arm64/include/asm/patching.h   |   2 +
 arch/arm64/kernel/asm-offsets.c     |   4 +
 arch/arm64/kernel/entry-ftrace.S    |  32 +++++-
 arch/arm64/kernel/ftrace.c          | 158 +++++++++++++++++++++++++++-
 arch/arm64/kernel/patching.c        |  17 +++
 drivers/acpi/acpica/Makefile        |   2 +-
 include/linux/compiler_attributes.h |  23 +++-
 include/linux/ftrace.h              |  15 ++-
 kernel/trace/Kconfig                |   7 ++
 kernel/trace/ftrace.c               | 109 ++++++++++++++++++-
 15 files changed, 371 insertions(+), 32 deletions(-)

-- 
2.30.2



Thread overview: 26+ messages
2023-01-09 13:58 [PATCH 0/8] arm64/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS Mark Rutland
2023-01-09 13:58 ` [PATCH 1/8] Compiler attributes: GCC function alignment workarounds Mark Rutland
2023-01-09 14:43   ` Miguel Ojeda
2023-01-09 17:06     ` Mark Rutland
2023-01-09 22:35       ` Miguel Ojeda
2023-01-11 18:27     ` Mark Rutland
2023-01-12 11:38       ` Mark Rutland
2023-01-13 12:49         ` Mark Rutland
2023-01-15 21:32           ` Miguel Ojeda
2023-01-09 13:58 ` [PATCH 2/8] ACPI: Don't build ACPICA with '-Os' Mark Rutland
2023-01-10 13:45   ` Rafael J. Wysocki
2023-01-09 13:58 ` [PATCH 3/8] arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT Mark Rutland
2023-01-10 20:35   ` Peter Zijlstra
2023-01-10 20:43     ` Will Deacon
2023-01-11 11:39       ` Mark Rutland
2023-01-11 11:36     ` Mark Rutland
2023-01-09 13:58 ` [PATCH 4/8] ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS Mark Rutland
2023-01-12  6:48   ` Li Huafei
2023-01-12 11:00     ` Mark Rutland
2023-01-13  1:15       ` Li Huafei
2023-01-09 13:58 ` [PATCH 5/8] arm64: insn: Add helpers for BTI Mark Rutland
2023-01-09 13:58 ` [PATCH 6/8] arm64: patching: Add aarch64_insn_write_literal_u64() Mark Rutland
2023-01-09 13:58 ` [PATCH 7/8] arm64: ftrace: Update stale comment Mark Rutland
2023-01-09 13:58 ` [PATCH 8/8] arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS Mark Rutland
2023-01-10  8:55 ` [PATCH 0/8] arm64/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS David Laight
2023-01-10 10:31   ` Mark Rutland
