* [PATCH v3 0/2] arm64: ftrace: fix interop issues with module PLTs
@ 2017-06-06 17:00 Ard Biesheuvel
  2017-06-06 17:00 ` [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop() Ard Biesheuvel
  2017-06-06 17:00 ` [PATCH v3 2/2] arm64: ftrace: add support for far branches to dynamic ftrace Ard Biesheuvel
  0 siblings, 2 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2017-06-06 17:00 UTC (permalink / raw)
  To: linux-arm-kernel

The arm64 ftrace code assumes that all core kernel symbols are within the
range of an ordinary branch instruction from anywhere in the kernel, even
in modules. While this is usually the case, it may not be true when running
with KASLR enabled.

This series fixes two distinct but related issues involving ftrace under
KASLR:

- the initial sweep over all _mcount() calls in newly loaded modules
  validates those calls by comparing each against a 'bl _mcount' opcode;
  this comparison fails when _mcount() [which lives in the core kernel] is
  out of range, producing false negatives: the call sites do contain a
  correct branch to _mcount(), but one that goes via a PLT entry (patch #1)

- patching up those NOPs into calls to ftrace_caller() [which is the only
  ftrace branch target that we ever use on arm64, IIUC] will fail in the
  same way, and needs to be redirected via a trampoline as well (patch #2)

v3:
- perform partial validation of 'bl _mcount' call sites: full validation is
  done unless the call site is in a module and genuinely too far away from
  the branch target, in which case we only check that the instruction is a
  'bl' pointing somewhere inside the module itself (#1). Note that this will
  correctly identify the trampolines added in #2 as valid branch targets,
  with the caveat that the 'mod' parameter is only supplied by the ftrace
  framework at module load time.
- disable preemption around __module_text_address(): this is not really needed
  given that ftrace_lock will prevent the module from being unloaded while we
  are using it, but the code in module.c contains an assert for it. (#1, #2)
- add barrier between updating the trampoline target field and patching the
  instruction that invokes it (#2)
- refactor the code so it is obviously identical to the old code when
  CONFIG_ARM64_MODULE_PLTS is disabled. (#1, #2)

v2:
- no changes to #1
- don't hardcode references to ftrace_caller() (#2)
- avoid a stack frame for the trampoline, given that one would confuse the
  ftrace code (#2)

Ard Biesheuvel (2):
  arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()
  arm64: ftrace: add support for far branches to dynamic ftrace

 arch/arm64/Kconfig              |  2 +-
 arch/arm64/Makefile             |  3 +
 arch/arm64/include/asm/module.h |  3 +
 arch/arm64/kernel/Makefile      |  3 +
 arch/arm64/kernel/ftrace-mod.S  | 18 ++++
 arch/arm64/kernel/ftrace.c      | 97 +++++++++++++++++++-
 arch/arm64/kernel/module.c      |  6 +-
 7 files changed, 127 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kernel/ftrace-mod.S

-- 
2.9.3


* [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()
  2017-06-06 17:00 [PATCH v3 0/2] arm64: ftrace: fix interop issues with module PLTs Ard Biesheuvel
@ 2017-06-06 17:00 ` Ard Biesheuvel
  2017-06-07 10:42   ` Will Deacon
  2017-06-06 17:00 ` [PATCH v3 2/2] arm64: ftrace: add support for far branches to dynamic ftrace Ard Biesheuvel
  1 sibling, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2017-06-06 17:00 UTC (permalink / raw)
  To: linux-arm-kernel

When turning branch instructions into NOPs, we attempt to validate the
action by comparing the old value at the call site with the opcode of
a direct relative branch instruction pointing at the old target.

However, these call sites are statically initialized to call _mcount(),
and may be redirected via a PLT entry if the module is loaded far away
from the kernel text, leading to false negatives and spurious errors.

So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/kernel/ftrace.c | 46 ++++++++++++++++++--
 1 file changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 40ad08ac569a..4cb576374b82 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -84,12 +84,52 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 		    unsigned long addr)
 {
 	unsigned long pc = rec->ip;
-	u32 old, new;
+	long offset = (long)pc - (long)addr;
+	bool validate = true;
+	u32 old = 0, new;
+
+	if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
+	    (offset < -SZ_128M || offset >= SZ_128M)) {
+		u32 replaced;
+
+		/*
+		 * 'mod' is only set at module load time, but if we end up
+		 * dealing with an out-of-range condition, we can assume it
+		 * is due to a module being loaded far away from the kernel.
+		 */
+		if (!mod) {
+			preempt_disable();
+			mod = __module_text_address(pc);
+			preempt_enable();
+
+			if (WARN_ON(!mod))
+				return -EINVAL;
+		}
+
+		/*
+		 * The instruction we are about to patch may be a branch and
+		 * link instruction that was redirected via a PLT entry. In
+		 * this case, the normal validation will fail, but we can at
+		 * least check that we are dealing with a branch and link
+		 * instruction that points into the right module.
+		 */
+		if (aarch64_insn_read((void *)pc, &replaced))
+			return -EFAULT;
+
+		if (!aarch64_insn_is_bl(replaced) ||
+		    !within_module(pc + aarch64_get_branch_offset(replaced),
+				   mod))
+			return -EINVAL;
+
+		validate = false;
+	} else {
+		old = aarch64_insn_gen_branch_imm(pc, addr,
+						  AARCH64_INSN_BRANCH_LINK);
+	}
 
-	old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 	new = aarch64_insn_gen_nop();
 
-	return ftrace_modify_code(pc, old, new, true);
+	return ftrace_modify_code(pc, old, new, validate);
 }
 
 void arch_ftrace_update_code(int command)
-- 
2.9.3


* [PATCH v3 2/2] arm64: ftrace: add support for far branches to dynamic ftrace
  2017-06-06 17:00 [PATCH v3 0/2] arm64: ftrace: fix interop issues with module PLTs Ard Biesheuvel
  2017-06-06 17:00 ` [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop() Ard Biesheuvel
@ 2017-06-06 17:00 ` Ard Biesheuvel
  1 sibling, 0 replies; 6+ messages in thread
From: Ard Biesheuvel @ 2017-06-06 17:00 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, dynamic ftrace support in the arm64 kernel assumes that all
core kernel code is within range of ordinary branch instructions that
occur in module code, which is usually the case, but is no longer
guaranteed now that we have support for module PLTs and address space
randomization.

Since on arm64 all patched branch instructions call the same entry point
[ftrace_caller()], we can emit each module with a single trampoline that
has unlimited range, and patch both the trampoline itself and the branch
instruction to redirect the call via the trampoline.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/Kconfig              |  2 +-
 arch/arm64/Makefile             |  3 ++
 arch/arm64/include/asm/module.h |  3 ++
 arch/arm64/kernel/Makefile      |  3 ++
 arch/arm64/kernel/ftrace-mod.S  | 18 +++++++
 arch/arm64/kernel/ftrace.c      | 51 ++++++++++++++++++++
 arch/arm64/kernel/module.c      |  6 ++-
 7 files changed, 84 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3dcd7ec69bca..22f769b254b4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -982,7 +982,7 @@ config RANDOMIZE_BASE
 
 config RANDOMIZE_MODULE_REGION_FULL
 	bool "Randomize the module region independently from the core kernel"
-	depends on RANDOMIZE_BASE && !DYNAMIC_FTRACE
+	depends on RANDOMIZE_BASE
 	default y
 	help
 	  Randomizes the location of the module region without considering the
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index f839ecd919f9..1ce57b42f390 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -70,6 +70,9 @@ endif
 
 ifeq ($(CONFIG_ARM64_MODULE_PLTS),y)
 KBUILD_LDFLAGS_MODULE	+= -T $(srctree)/arch/arm64/kernel/module.lds
+ifeq ($(CONFIG_DYNAMIC_FTRACE),y)
+KBUILD_LDFLAGS_MODULE	+= $(objtree)/arch/arm64/kernel/ftrace-mod.o
+endif
 endif
 
 # Default value
diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index d57693f5d4ec..19bd97671bb8 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -30,6 +30,9 @@ struct mod_plt_sec {
 struct mod_arch_specific {
 	struct mod_plt_sec	core;
 	struct mod_plt_sec	init;
+
+	/* for CONFIG_DYNAMIC_FTRACE */
+	void			*ftrace_trampoline;
 };
 #endif
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 1dcb69d3d0e5..f2b4e816b6de 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -62,3 +62,6 @@ extra-y					+= $(head-y) vmlinux.lds
 ifeq ($(CONFIG_DEBUG_EFI),y)
 AFLAGS_head.o += -DVMLINUX_PATH="\"$(realpath $(objtree)/vmlinux)\""
 endif
+
+# will be included by each individual module but not by the core kernel itself
+extra-$(CONFIG_DYNAMIC_FTRACE) += ftrace-mod.o
diff --git a/arch/arm64/kernel/ftrace-mod.S b/arch/arm64/kernel/ftrace-mod.S
new file mode 100644
index 000000000000..00c4025be4ff
--- /dev/null
+++ b/arch/arm64/kernel/ftrace-mod.S
@@ -0,0 +1,18 @@
+/*
+ * Copyright (C) 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/linkage.h>
+#include <asm/assembler.h>
+
+	.section	".text.ftrace_trampoline", "ax"
+	.align		3
+0:	.quad		0
+__ftrace_trampoline:
+	ldr		x16, 0b
+	br		x16
+ENDPROC(__ftrace_trampoline)
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index 4cb576374b82..c23fa29883f8 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -10,10 +10,12 @@
  */
 
 #include <linux/ftrace.h>
+#include <linux/module.h>
 #include <linux/swab.h>
 #include <linux/uaccess.h>
 
 #include <asm/cacheflush.h>
+#include <asm/debug-monitors.h>
 #include <asm/ftrace.h>
 #include <asm/insn.h>
 
@@ -69,8 +71,57 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
 	unsigned long pc = rec->ip;
+	long offset = (long)pc - (long)addr;
 	u32 old, new;
 
+	if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
+	    (offset < -SZ_128M || offset >= SZ_128M)) {
+		unsigned long *trampoline;
+		struct module *mod;
+
+		/*
+		 * On kernels that support module PLTs, the offset between the
+		 * branch instruction and its target may legally exceed the
+		 * range of an ordinary relative 'bl' opcode. In this case, we
+		 * need to branch via a trampoline in the module.
+		 *
+		 * NOTE: __module_text_address() must be called with preemption
+		 * disabled, but we can rely on ftrace_lock to ensure that 'mod'
+		 * retains its validity throughout the remainder of this code.
+		 */
+		preempt_disable();
+		mod = __module_text_address(pc);
+		preempt_enable();
+
+		if (WARN_ON(!mod))
+			return -EINVAL;
+
+		/*
+		 * There is only one ftrace trampoline per module. For now,
+		 * this is not a problem since on arm64, all dynamic ftrace
+		 * invocations are routed via ftrace_caller(). This will need
+		 * to be revisited if support for multiple ftrace entry points
+		 * is added in the future, but for now, the pr_err() below
+		 * deals with a theoretical issue only.
+		 */
+		trampoline = (unsigned long *)mod->arch.ftrace_trampoline;
+		if (trampoline[0] != addr) {
+			if (trampoline[0] != 0) {
+				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
+				return -EINVAL;
+			}
+
+			/* point the trampoline to our ftrace entry point */
+			module_disable_ro(mod);
+			trampoline[0] = addr;
+			module_enable_ro(mod, true);
+
+			/* update the trampoline before taking its address */
+			smp_wmb();
+		}
+		addr = (unsigned long)&trampoline[1];
+	}
+
 	old = aarch64_insn_gen_nop();
 	new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK);
 
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index f035ff6fb223..8c3a7264fb0f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -420,8 +420,12 @@ int module_finalize(const Elf_Ehdr *hdr,
 	for (s = sechdrs, se = sechdrs + hdr->e_shnum; s < se; s++) {
 		if (strcmp(".altinstructions", secstrs + s->sh_name) == 0) {
 			apply_alternatives((void *)s->sh_addr, s->sh_size);
-			return 0;
 		}
+#ifdef CONFIG_ARM64_MODULE_PLTS
+		if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
+		    !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
+			me->arch.ftrace_trampoline = (void *)s->sh_addr;
+#endif
 	}
 
 	return 0;
-- 
2.9.3


* [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()
  2017-06-06 17:00 ` [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop() Ard Biesheuvel
@ 2017-06-07 10:42   ` Will Deacon
  2017-06-07 10:45     ` Ard Biesheuvel
  0 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2017-06-07 10:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jun 06, 2017 at 05:00:21PM +0000, Ard Biesheuvel wrote:
> When turning branch instructions into NOPs, we attempt to validate the
> action by comparing the old value at the call site with the opcode of
> a direct relative branch instruction pointing at the old target.
> 
> However, these call sites are statically initialized to call _mcount(),
> and may be redirected via a PLT entry if the module is loaded far away
> from the kernel text, leading to false negatives and spurious errors.
> 
> So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.
> 
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  arch/arm64/kernel/ftrace.c | 46 ++++++++++++++++++--
>  1 file changed, 43 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
> index 40ad08ac569a..4cb576374b82 100644
> --- a/arch/arm64/kernel/ftrace.c
> +++ b/arch/arm64/kernel/ftrace.c
> @@ -84,12 +84,52 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
>  		    unsigned long addr)
>  {
>  	unsigned long pc = rec->ip;
> -	u32 old, new;
> +	long offset = (long)pc - (long)addr;
> +	bool validate = true;
> +	u32 old = 0, new;
> +
> +	if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
> +	    (offset < -SZ_128M || offset >= SZ_128M)) {
> +		u32 replaced;
> +
> +		/*
> +		 * 'mod' is only set at module load time, but if we end up
> +		 * dealing with an out-of-range condition, we can assume it
> +		 * is due to a module being loaded far away from the kernel.
> +		 */
> +		if (!mod) {
> +			preempt_disable();
> +			mod = __module_text_address(pc);
> +			preempt_enable();

The comment in __module_text_address says that preempt must be disabled so
that the module doesn't get freed under its feet, but if that's a
possibility here then it feels really dangerous to re-enable preemption
before we've done the patching. Shouldn't we take module_mutex or something?

Other than that, this looks fine. Thanks.

Will


* [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()
  2017-06-07 10:42   ` Will Deacon
@ 2017-06-07 10:45     ` Ard Biesheuvel
  2017-06-07 10:47       ` Will Deacon
  0 siblings, 1 reply; 6+ messages in thread
From: Ard Biesheuvel @ 2017-06-07 10:45 UTC (permalink / raw)
  To: linux-arm-kernel

On 7 June 2017 at 10:42, Will Deacon <will.deacon@arm.com> wrote:
> On Tue, Jun 06, 2017 at 05:00:21PM +0000, Ard Biesheuvel wrote:
>> When turning branch instructions into NOPs, we attempt to validate the
>> action by comparing the old value at the call site with the opcode of
>> a direct relative branch instruction pointing at the old target.
>>
>> However, these call sites are statically initialized to call _mcount(),
>> and may be redirected via a PLT entry if the module is loaded far away
>> from the kernel text, leading to false negatives and spurious errors.
>>
>> So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.
>>
>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> ---
>>  arch/arm64/kernel/ftrace.c | 46 ++++++++++++++++++--
>>  1 file changed, 43 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
>> index 40ad08ac569a..4cb576374b82 100644
>> --- a/arch/arm64/kernel/ftrace.c
>> +++ b/arch/arm64/kernel/ftrace.c
>> @@ -84,12 +84,52 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
>>                   unsigned long addr)
>>  {
>>       unsigned long pc = rec->ip;
>> -     u32 old, new;
>> +     long offset = (long)pc - (long)addr;
>> +     bool validate = true;
>> +     u32 old = 0, new;
>> +
>> +     if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
>> +         (offset < -SZ_128M || offset >= SZ_128M)) {
>> +             u32 replaced;
>> +
>> +             /*
>> +              * 'mod' is only set at module load time, but if we end up
>> +              * dealing with an out-of-range condition, we can assume it
>> +              * is due to a module being loaded far away from the kernel.
>> +              */
>> +             if (!mod) {
>> +                     preempt_disable();
>> +                     mod = __module_text_address(pc);
>> +                     preempt_enable();
>
> The comment in __module_text_address says that preempt must be disabled so
> that the module doesn't get freed under its feet, but if that's a
> possibility here then it feels really dangerous to re-enable preemption
> before we've done the patching. Shouldn't we take module_mutex or something?
>

Ah yes. I added a comment only in patch #2, on another instance in
ftrace_make_call(), and I thought it was redundant to duplicate it
here: ftrace_lock is held here, which will prevent the module from
being unloaded in the mean time, so disabling preemption is only
necessary to prevent an assert from firing.

> Other than that, this looks fine. Thanks.
>
> Will


* [PATCH v3 1/2] arm64: ftrace: don't validate branch via PLT in ftrace_make_nop()
  2017-06-07 10:45     ` Ard Biesheuvel
@ 2017-06-07 10:47       ` Will Deacon
  0 siblings, 0 replies; 6+ messages in thread
From: Will Deacon @ 2017-06-07 10:47 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jun 07, 2017 at 10:45:09AM +0000, Ard Biesheuvel wrote:
> On 7 June 2017 at 10:42, Will Deacon <will.deacon@arm.com> wrote:
> > On Tue, Jun 06, 2017 at 05:00:21PM +0000, Ard Biesheuvel wrote:
> >> When turning branch instructions into NOPs, we attempt to validate the
> >> action by comparing the old value at the call site with the opcode of
> >> a direct relative branch instruction pointing at the old target.
> >>
> >> However, these call sites are statically initialized to call _mcount(),
> >> and may be redirected via a PLT entry if the module is loaded far away
> >> from the kernel text, leading to false negatives and spurious errors.
> >>
> >> So skip the validation if CONFIG_ARM64_MODULE_PLTS is configured.
> >>
> >> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >> ---
> >>  arch/arm64/kernel/ftrace.c | 46 ++++++++++++++++++--
> >>  1 file changed, 43 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
> >> index 40ad08ac569a..4cb576374b82 100644
> >> --- a/arch/arm64/kernel/ftrace.c
> >> +++ b/arch/arm64/kernel/ftrace.c
> >> @@ -84,12 +84,52 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
> >>                   unsigned long addr)
> >>  {
> >>       unsigned long pc = rec->ip;
> >> -     u32 old, new;
> >> +     long offset = (long)pc - (long)addr;
> >> +     bool validate = true;
> >> +     u32 old = 0, new;
> >> +
> >> +     if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
> >> +         (offset < -SZ_128M || offset >= SZ_128M)) {
> >> +             u32 replaced;
> >> +
> >> +             /*
> >> +              * 'mod' is only set at module load time, but if we end up
> >> +              * dealing with an out-of-range condition, we can assume it
> >> +              * is due to a module being loaded far away from the kernel.
> >> +              */
> >> +             if (!mod) {
> >> +                     preempt_disable();
> >> +                     mod = __module_text_address(pc);
> >> +                     preempt_enable();
> >
> > The comment in __module_text_address says that preempt must be disabled so
> > that the module doesn't get freed under its feet, but if that's a
> > possibility here then it feels really dangerous to re-enable preemption
> > before we've done the patching. Shouldn't we take module_mutex or something?
> >
> 
> Ah yes. I added a comment only in patch #2, on another instance in
> ftrace_make_call(), and I thought it was redundant to duplicate it
> here: ftrace_lock is held here, which will prevent the module from
> being unloaded in the mean time, so disabling preemption is only
> necessary to prevent an assert from firing.

I suppose !lockdep_is_held(&ftrace_lock) should be added to the WARN_ON_ONCE
in module_assert_mutex_or_preempt, but that's a separate patch so I'll queue
these as-is.

Thanks for the quick explanation!

Will

