* [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
@ 2017-10-17  5:18 Kamalesh Babulal
  2017-10-31 14:19 ` Naveen N . Rao
  0 siblings, 1 reply; 20+ messages in thread
From: Kamalesh Babulal @ 2017-10-17  5:18 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Kamalesh Babulal, Balbir Singh, Naveen N . Rao, Josh Poimboeuf,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	Torsten Duwe, linuxppc-dev, live-patching

Livepatch re-uses the module loader function apply_relocate_add() to
write relocations, instead of managing them with an arch-dependent
klp_write_module_reloc() function.

apply_relocate_add() doesn't understand livepatch symbols (marked with
the SHN_LIVEPATCH symbol section index) and, for the R_PPC64_REL24
relocation type, assumes them to be local symbols by default. It fails
with an error when trying to calculate the offset with
local_entry_offset():

module_64: kpatch_meminfo: REL24 -1152921504897399800 out of range!

Livepatch symbols, however, are essentially SHN_UNDEF and should be
called via the stub used for global calls. This issue can be fixed by
teaching apply_relocate_add() to handle both SHN_UNDEF and
SHN_LIVEPATCH symbols via the same stub. That isn't a complete fix,
though, as it will fail for local calls that become global.

Consider the livepatch sequence[1] below, where function A calls B, B
is the livepatched function, and any call to function B is redirected
to the patched version P. Now, the livepatched function P calls
function C in M2, whereas C was local to function B but is global to
function P.

  +--------+            +--------+    +--------+      +--------+
  |        |   +--------+--------+--->|        |  +-->|        |
  |  A     |   |        |  B     |    |  F     |  |   |  P     |
  |        |   |        |        |    |        +--+   |        |
  |        +---+        |        |    |        |<-+   |        |
  |        |<--+   +----+   C    |    |        |  |   |        |
  |        |   |   | +->|        |    |        |  |   |        |<---+
  | K / M1 |   |   | |  | K / M2 |  +-+ Kernel |  +---+ Mod3   +--+ |
  +--------+   |   | |  +--------+  | +--------+      +--------+  | |
               |   | |              |                             | |
               +---+-+--------------+                             | |
                   | |                                            | |
                   | +--------------------------------------------+ |
                   +------------------------------------------------+

Handling such a call with the existing stub triggers another error:

module_64: kpatch_meminfo: Expect noop after relocate, got 3d220000

The reason is that the ppc64le ELF ABI v2 expects a nop instruction
after every branch to a global call; it gets overwritten by an
instruction to restore the TOC with the r2 value of the callee. Given
that function C was local to function B, it does not store/restore the
TOC, as the TOC is not expected to be clobbered for functions called
via the local entry point.

The current stub could be extended to restore the TOC, giving a single
stub for both SHN_UNDEF and SHN_LIVEPATCH symbols, but such a combined
stub imposes an overhead on non-livepatch calls through the additional
instructions to restore the TOC.

A new stub to call livepatch symbols, with an intermediate stack to
store/restore TOC/LR between the livepatched function and the global
function, would work for most cases but would fail when arguments are
passed via the stack between functions.

This issue has already been solved by the introduction of
livepatch_handler, which runs in _mcount context and uses a livepatch
stack growing upwards from the base of the regular stack, eliminating
the need for an intermediate stack.

The current approach is to set up a klp_stub mimicking the
functionality of entry_64.S::livepatch_handler to store/restore the
TOC/LR values, in addition to the stub setup and branch. This patch
also introduces a new struct ppc64le_klp_stub_entry, along with
helpers to find/allocate a livepatch stub.

[1] ASCII diagram adapted from:
http://mpe.github.io/posts/2016/05/23/kernel-live-patching-for-ppc64le/

Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
Cc: Torsten Duwe <duwe@lst.de>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: live-patching@vger.kernel.org
---
v3:
 - Defined FUNC_DESC_OFFSET to calculate func_desc offset from
   struct ppc64le_klp_stub_entry.
 - Replaced BUG_ON() with WARN_ON() in klp_stub_for_addr().
 - Major commit message re-write.

v2:
 - Changed klp_stub construction to re-use livepatch_handler and
   additional patch code required for klp_stub, instead of duplicating it.
 - Minor comments and commit body edit.

 arch/powerpc/include/asm/module.h              |   4 +
 arch/powerpc/kernel/module_64.c                | 139 ++++++++++++++++++++++++-
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  31 ++++++
 3 files changed, 171 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 6c0132c..de22c4c 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -44,6 +44,10 @@ struct mod_arch_specific {
 	unsigned long toc;
 	unsigned long tramp;
 #endif
+#ifdef CONFIG_LIVEPATCH
+	/* Count of kernel livepatch relocations */
+	unsigned long klp_relocs;
+#endif
 
 #else /* powerpc64 */
 	/* Indices of PLT sections within module. */
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 0b0f896..005aaea 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -140,6 +140,24 @@ static u32 ppc64_stub_insns[] = {
 	0x4e800420			/* bctr */
 };
 
+#ifdef CONFIG_LIVEPATCH
+extern u32 klp_stub_insn[], klp_stub_insn_end[];
+extern u32 livepatch_handler[], livepatch_handler_end[];
+
+struct ppc64le_klp_stub_entry {
+	/*
+	 * In addition to the stub setup, the livepatch stub also needs
+	 * extra instructions to allocate the livepatch stack and to
+	 * store/restore TOC/LR values on/from it.
+	 */
+	u32 jump[31];
+	/* Used by ftrace to identify stubs */
+	u32 magic;
+	/* Data for the above code */
+	func_desc_t funcdata;
+};
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 int module_trampoline_target(struct module *mod, unsigned long addr,
 			     unsigned long *target)
@@ -239,10 +257,19 @@ static void relaswap(void *_x, void *_y, int size)
 
 /* Get size of potential trampolines required. */
 static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
-				    const Elf64_Shdr *sechdrs)
+				    const Elf64_Shdr *sechdrs,
+				    struct module *me)
 {
 	/* One extra reloc so it's always 0-funcaddr terminated */
 	unsigned long relocs = 1;
+	/*
+	 * The size of a livepatch stub is 28 instructions, whereas a
+	 * non-livepatch stub requires 7 instructions. Account for
+	 * different stub sizes and track the livepatch relocation
+	 * count in me->arch.klp_relocs.
+	 */
+	unsigned long sec_relocs = 0;
+	unsigned long klp_relocs = 0;
 	unsigned i;
 
 	/* Every relocated section... */
@@ -262,9 +289,14 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 			     sechdrs[i].sh_size / sizeof(Elf64_Rela),
 			     sizeof(Elf64_Rela), relacmp, relaswap);
 
-			relocs += count_relocs((void *)sechdrs[i].sh_addr,
+			sec_relocs = count_relocs((void *)sechdrs[i].sh_addr,
 					       sechdrs[i].sh_size
 					       / sizeof(Elf64_Rela));
+			relocs += sec_relocs;
+#ifdef CONFIG_LIVEPATCH
+			if (sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH)
+				klp_relocs += sec_relocs;
+#endif
 		}
 	}
 
@@ -273,6 +305,15 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 	relocs++;
 #endif
 
+	relocs -= klp_relocs;
+#ifdef CONFIG_LIVEPATCH
+	me->arch.klp_relocs = klp_relocs;
+
+	pr_debug("Looks like a total of %lu stubs, (%lu) livepatch stubs, max\n",
+				relocs, klp_relocs);
+	return (relocs * sizeof(struct ppc64_stub_entry) +
+		klp_relocs * sizeof(struct ppc64le_klp_stub_entry));
+#endif
 	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
 	return relocs * sizeof(struct ppc64_stub_entry);
 }
@@ -369,7 +410,7 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
 		me->arch.toc_section = me->arch.stubs_section;
 
 	/* Override the stubs size */
-	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs);
+	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs, me);
 	return 0;
 }
 
@@ -415,6 +456,59 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
 	return 1;
 }
 
+#ifdef CONFIG_LIVEPATCH
+
+#define FUNC_DESC_OFFSET offsetof(struct ppc64le_klp_stub_entry, funcdata)
+
+/* Patch livepatch stub to reference function and correct r2 value. */
+static inline int create_klp_stub(const Elf64_Shdr *sechdrs,
+				  struct ppc64le_klp_stub_entry *entry,
+				  unsigned long addr,
+				  struct module *me)
+{
+	long reladdr;
+	unsigned long klp_stub_idx, klp_stub_idx_end;
+
+	klp_stub_idx = (klp_stub_insn - livepatch_handler);
+	klp_stub_idx_end = (livepatch_handler_end - klp_stub_insn_end);
+
+	/* Copy first half of livepatch_handler till klp_stub_insn */
+	memcpy(entry->jump, livepatch_handler, sizeof(u32) * klp_stub_idx);
+
+	/* Stub uses address relative to r2. */
+	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
+	if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
+		pr_err("%s: Address %p of stub out of range of %p.\n",
+				me->name, (void *)reladdr, (void *)my_r2);
+		return 0;
+	}
+	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
+
+	/*
+	 * Patch the code required to load the trampoline address into r11,
+	 * function global entry point into r12, ctr.
+	 */
+	entry->jump[klp_stub_idx++] = (PPC_INST_ADDIS | ___PPC_RT(11) |
+					___PPC_RA(2) | PPC_HA(reladdr));
+
+	entry->jump[klp_stub_idx++] = (PPC_INST_ADDI | ___PPC_RT(11) |
+					 ___PPC_RA(11) | PPC_LO(reladdr));
+
+	entry->jump[klp_stub_idx++] = (PPC_INST_LD | ___PPC_RT(12) |
+					 ___PPC_RA(11) | FUNC_DESC_OFFSET);
+
+	entry->jump[klp_stub_idx++] = PPC_INST_MTCTR | ___PPC_RT(12);
+
+	/* Copy second half of livepatch_handler starting klp_stub_insn_end */
+	memcpy(entry->jump + klp_stub_idx, klp_stub_insn_end,
+				sizeof(u32) * klp_stub_idx_end);
+
+	entry->funcdata = func_desc(addr);
+	entry->magic = STUB_MAGIC;
+	return 1;
+}
+#endif
+
 /* Create stub to jump to function described in this OPD/ptr: we need the
    stub to set up the TOC ptr (r2) for the function. */
 static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
@@ -441,6 +535,39 @@ static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
 	return (unsigned long)&stubs[i];
 }
 
+#ifdef CONFIG_LIVEPATCH
+static unsigned long klp_stub_for_addr(const Elf64_Shdr *sechdrs,
+				       unsigned long addr,
+				       struct module *me)
+{
+	struct ppc64le_klp_stub_entry *klp_stubs;
+	unsigned int num_klp_stubs = me->arch.klp_relocs;
+	unsigned int i, num_stubs;
+
+	num_stubs = (sechdrs[me->arch.stubs_section].sh_size -
+		    (num_klp_stubs * sizeof(*klp_stubs))) /
+				sizeof(struct ppc64_stub_entry);
+
+	/*
+	 * Create livepatch stubs after the regular stubs.
+	 */
+	klp_stubs = (void *)sechdrs[me->arch.stubs_section].sh_addr +
+		    (num_stubs * sizeof(struct ppc64_stub_entry));
+	for (i = 0; stub_func_addr(klp_stubs[i].funcdata); i++) {
+		if (WARN_ON(i >= num_klp_stubs))
+			return 0;
+
+		if (stub_func_addr(klp_stubs[i].funcdata) == func_addr(addr))
+			return (unsigned long)&klp_stubs[i];
+	}
+
+	if (!create_klp_stub(sechdrs, &klp_stubs[i], addr, me))
+		return 0;
+
+	return (unsigned long)&klp_stubs[i];
+}
+#endif
+
 #ifdef CC_USING_MPROFILE_KERNEL
 static bool is_early_mcount_callsite(u32 *instruction)
 {
@@ -622,6 +749,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 					return -ENOEXEC;
 
 				squash_toc_save_inst(strtab + sym->st_name, value);
+#ifdef CONFIG_LIVEPATCH
+			} else if (sym->st_shndx == SHN_LIVEPATCH) {
+				value = klp_stub_for_addr(sechdrs, value, me);
+				if (!value)
+					return -ENOENT;
+#endif
 			} else
 				value += local_entry_offset(sym);
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index b4e2b71..4ef9329 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -183,7 +183,10 @@ _GLOBAL(ftrace_stub)
 	 *  - CTR holds the new NIP in C
 	 *  - r0, r11 & r12 are free
 	 */
+
+	.global livepatch_handler
 livepatch_handler:
+
 	CURRENT_THREAD_INFO(r12, r1)
 
 	/* Allocate 3 x 8 bytes */
@@ -201,8 +204,33 @@ livepatch_handler:
 	ori     r12, r12, STACK_END_MAGIC@l
 	std	r12, -8(r11)
 
+	/*
+	 * klp_stub_insn/klp_stub_insn_end mark the beginning/end of the
+	 * additional instructions, which get patched by create_klp_stub()
+	 * for livepatch symbol relocation stub. The instructions are:
+	 *
+	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
+	 * identifies the available free stub slot and loads the address into
+	 * r11 with two instructions.
+	 *
+	 * addis r11, r2, stub_address@ha
+	 * addi  r11, r11, stub_address@l
+	 *
+	 * Load global entry into r12 from entry->funcdata offset
+	 * ld    r12, FUNC_DESC_OFFSET(r11)
+	 *
+	 * Put r12 into ctr and branch there
+	 * mtctr r12
+	 */
+
+	.global klp_stub_insn
+klp_stub_insn:
+
 	/* Put ctr in r12 for global entry and branch there */
 	mfctr	r12
+
+	.global klp_stub_insn_end
+klp_stub_insn_end:
 	bctrl
 
 	/*
@@ -234,6 +262,9 @@ livepatch_handler:
 
 	/* Return to original caller of live patched function */
 	blr
+
+	.global livepatch_handler_end
+livepatch_handler_end:
 #endif /* CONFIG_LIVEPATCH */
 
 #endif /* CONFIG_DYNAMIC_FTRACE */
-- 
2.7.4


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-17  5:18 [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols Kamalesh Babulal
@ 2017-10-31 14:19 ` Naveen N . Rao
  2017-10-31 15:30   ` Torsten Duwe
  2017-11-02  5:49   ` Kamalesh Babulal
  0 siblings, 2 replies; 20+ messages in thread
From: Naveen N . Rao @ 2017-10-31 14:19 UTC (permalink / raw)
  To: Kamalesh Babulal
  Cc: Michael Ellerman, Balbir Singh, Josh Poimboeuf, Jessica Yu,
	Ananth N Mavinakayanahalli, Aravinda Prasad, Torsten Duwe,
	linuxppc-dev, live-patching

Hi Kamalesh,
Sorry for the late review. Overall, the patch looks good to me. So:
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>

However, I have a few minor comments which can be addressed in a 
subsequent patch.

On 2017/10/17 05:18AM, Kamalesh Babulal wrote:
> Livepatch re-uses the module loader function apply_relocate_add() to
> write relocations, instead of managing them with an arch-dependent
> klp_write_module_reloc() function.
> 
> apply_relocate_add() doesn't understand livepatch symbols (marked with
> the SHN_LIVEPATCH symbol section index) and, for the R_PPC64_REL24
> relocation type, assumes them to be local symbols by default. It fails
> with an error when trying to calculate the offset with
> local_entry_offset():
> 
> module_64: kpatch_meminfo: REL24 -1152921504897399800 out of range!
> 
> Livepatch symbols, however, are essentially SHN_UNDEF and should be
> called via the stub used for global calls. This issue can be fixed by
> teaching apply_relocate_add() to handle both SHN_UNDEF and
> SHN_LIVEPATCH symbols via the same stub. That isn't a complete fix,
> though, as it will fail for local calls that become global.
> 
> Consider the livepatch sequence[1] below, where function A calls B, B
> is the livepatched function, and any call to function B is redirected
> to the patched version P. Now, the livepatched function P calls
> function C in M2, whereas C was local to function B but is global to
> function P.
> 
>   +--------+            +--------+    +--------+      +--------+
>   |        |   +--------+--------+--->|        |  +-->|        |
>   |  A     |   |        |  B     |    |  F     |  |   |  P     |
>   |        |   |        |        |    |        +--+   |        |
>   |        +---+        |        |    |        |<-+   |        |
>   |        |<--+   +----+   C    |    |        |  |   |        |
>   |        |   |   | +->|        |    |        |  |   |        |<---+
>   | K / M1 |   |   | |  | K / M2 |  +-+ Kernel |  +---+ Mod3   +--+ |
>   +--------+   |   | |  +--------+  | +--------+      +--------+  | |
>                |   | |              |                             | |
>                +---+-+--------------+                             | |
>                    | |                                            | |
>                    | +--------------------------------------------+ |
>                    +------------------------------------------------+
> 
> Handling such a call with the existing stub triggers another error:
> 
> module_64: kpatch_meminfo: Expect noop after relocate, got 3d220000
> 
> The reason is that the ppc64le ELF ABI v2 expects a nop instruction
> after every branch to a global call; it gets overwritten by an
> instruction to restore the TOC with the r2 value of the callee. Given
> that function C was local to function B, it does not store/restore the
> TOC, as the TOC is not expected to be clobbered for functions called
> via the local entry point.
> 
> The current stub could be extended to restore the TOC, giving a single
> stub for both SHN_UNDEF and SHN_LIVEPATCH symbols, but such a combined
> stub imposes an overhead on non-livepatch calls through the additional
> instructions to restore the TOC.
> 
> A new stub to call livepatch symbols, with an intermediate stack to
> store/restore TOC/LR between the livepatched function and the global
> function, would work for most cases but would fail when arguments are
> passed via the stack between functions.
> 
> This issue has already been solved by the introduction of
> livepatch_handler, which runs in _mcount context and uses a livepatch
> stack growing upwards from the base of the regular stack, eliminating
> the need for an intermediate stack.
> 
> The current approach is to set up a klp_stub mimicking the
> functionality of entry_64.S::livepatch_handler to store/restore the
> TOC/LR values, in addition to the stub setup and branch. This patch
> also introduces a new struct ppc64le_klp_stub_entry, along with
> helpers to find/allocate a livepatch stub.
> 
> [1] ASCII diagram adapted from:
> http://mpe.github.io/posts/2016/05/23/kernel-live-patching-for-ppc64le/
> 
> Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
> Reviewed-by: Balbir Singh <bsingharora@gmail.com>
> Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Jessica Yu <jeyu@kernel.org>
> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
> Cc: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
> Cc: Torsten Duwe <duwe@lst.de>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: live-patching@vger.kernel.org
> ---
> v3:
>  - Defined FUNC_DESC_OFFSET to calculate func_desc offset from
>    struct ppc64le_klp_stub_entry.
>  - Replaced BUG_ON() with WARN_ON() in klp_stub_for_addr().
>  - Major commit message re-write.
> 
> v2:
>  - Changed klp_stub construction to re-use livepatch_handler and
>    additional patch code required for klp_stub, instead of duplicating it.
>  - Minor comments and commit body edit.
> 
>  arch/powerpc/include/asm/module.h              |   4 +
>  arch/powerpc/kernel/module_64.c                | 139 ++++++++++++++++++++++++-
>  arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  31 ++++++
>  3 files changed, 171 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
> index 6c0132c..de22c4c 100644
> --- a/arch/powerpc/include/asm/module.h
> +++ b/arch/powerpc/include/asm/module.h
> @@ -44,6 +44,10 @@ struct mod_arch_specific {
>  	unsigned long toc;
>  	unsigned long tramp;
>  #endif
> +#ifdef CONFIG_LIVEPATCH
> +	/* Count of kernel livepatch relocations */
> +	unsigned long klp_relocs;
> +#endif
> 
>  #else /* powerpc64 */
>  	/* Indices of PLT sections within module. */
> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
> index 0b0f896..005aaea 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -140,6 +140,24 @@ static u32 ppc64_stub_insns[] = {
>  	0x4e800420			/* bctr */
>  };
> 
> +#ifdef CONFIG_LIVEPATCH
> +extern u32 klp_stub_insn[], klp_stub_insn_end[];
> +extern u32 livepatch_handler[], livepatch_handler_end[];
> +
> +struct ppc64le_klp_stub_entry {
> +	/*
> +	 * In addition to the stub setup, the livepatch stub also needs
> +	 * extra instructions to allocate the livepatch stack and to
> +	 * store/restore TOC/LR values on/from it.
> +	 */
> +	u32 jump[31];
> +	/* Used by ftrace to identify stubs */
> +	u32 magic;
> +	/* Data for the above code */
> +	func_desc_t funcdata;
> +};
> +#endif
> +
>  #ifdef CONFIG_DYNAMIC_FTRACE
>  int module_trampoline_target(struct module *mod, unsigned long addr,
>  			     unsigned long *target)
> @@ -239,10 +257,19 @@ static void relaswap(void *_x, void *_y, int size)
> 
>  /* Get size of potential trampolines required. */
>  static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
> -				    const Elf64_Shdr *sechdrs)
> +				    const Elf64_Shdr *sechdrs,
> +				    struct module *me)
>  {
>  	/* One extra reloc so it's always 0-funcaddr terminated */
>  	unsigned long relocs = 1;

You might want to get rid of 'relocs' above and simply use the two
below.

> +	/*
> +	 * The size of a livepatch stub is 28 instructions, whereas a
> +	 * non-livepatch stub requires 7 instructions. Account for
> +	 * different stub sizes and track the livepatch relocation
> +	 * count in me->arch.klp_relocs.
> +	 */
> +	unsigned long sec_relocs = 0;
> +	unsigned long klp_relocs = 0;

You should also initialize this to 1 (similar to relocs above) for use 
in the iterators below. Or, update the iterators to use 
me->arch.klp_relocs (1)

>  	unsigned i;
> 
>  	/* Every relocated section... */
> @@ -262,9 +289,14 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
>  			     sechdrs[i].sh_size / sizeof(Elf64_Rela),
>  			     sizeof(Elf64_Rela), relacmp, relaswap);
> 
> -			relocs += count_relocs((void *)sechdrs[i].sh_addr,
> +			sec_relocs = count_relocs((void *)sechdrs[i].sh_addr,
>  					       sechdrs[i].sh_size
>  					       / sizeof(Elf64_Rela));

How about also capturing 'sec_relocs' in 'struct mod_arch_specific'? (2)  
That will help simplify a lot of the calculations here and elsewhere.  
Note that there are many other places where the number of stubs is 
derived looking at 'sh_size', which is incorrect since we now have klp 
stubs as well (create_ftrace_stub() for instance).

> +			relocs += sec_relocs;
> +#ifdef CONFIG_LIVEPATCH
> +			if (sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH)
> +				klp_relocs += sec_relocs;
> +#endif

If a module has SHF_RELA_LIVEPATCH, but the kernel is compiled without 
CONFIG_LIVEPATCH, should we refuse to load the kernel module?

You might want to consider removing the above #ifdef and processing some 
of these flags regardless of CONFIG_LIVEPATCH.

>  		}
>  	}
> 
> @@ -273,6 +305,15 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
>  	relocs++;
>  #endif
> 
> +	relocs -= klp_relocs;
> +#ifdef CONFIG_LIVEPATCH
> +	me->arch.klp_relocs = klp_relocs;
> +
> +	pr_debug("Looks like a total of %lu stubs, (%lu) livepatch stubs, max\n",
						   ^^^^^
						   (%lu livepatch stubs)

Just wondering why the brackets are the way they are...

> +				relocs, klp_relocs);
> +	return (relocs * sizeof(struct ppc64_stub_entry) +
> +		klp_relocs * sizeof(struct ppc64le_klp_stub_entry));
> +#endif
>  	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
>  	return relocs * sizeof(struct ppc64_stub_entry);
>  }
> @@ -369,7 +410,7 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
>  		me->arch.toc_section = me->arch.stubs_section;
> 
>  	/* Override the stubs size */
> -	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs);
> +	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs, me);
>  	return 0;
>  }
> 
> @@ -415,6 +456,59 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
>  	return 1;
>  }
> 
> +#ifdef CONFIG_LIVEPATCH
> +
> +#define FUNC_DESC_OFFSET offsetof(struct ppc64le_klp_stub_entry, funcdata)
> +
> +/* Patch livepatch stub to reference function and correct r2 value. */
> +static inline int create_klp_stub(const Elf64_Shdr *sechdrs,
> +				  struct ppc64le_klp_stub_entry *entry,
> +				  unsigned long addr,
> +				  struct module *me)
> +{
> +	long reladdr;
> +	unsigned long klp_stub_idx, klp_stub_idx_end;
> +
> +	klp_stub_idx = (klp_stub_insn - livepatch_handler);
> +	klp_stub_idx_end = (livepatch_handler_end - klp_stub_insn_end);
> +
> +	/* Copy first half of livepatch_handler till klp_stub_insn */
> +	memcpy(entry->jump, livepatch_handler, sizeof(u32) * klp_stub_idx);
> +
> +	/* Stub uses address relative to r2. */
> +	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
> +	if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
> +		pr_err("%s: Address %p of stub out of range of %p.\n",
> +				me->name, (void *)reladdr, (void *)my_r2);
> +		return 0;
> +	}
> +	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
> +
> +	/*
> +	 * Patch the code required to load the trampoline address into r11,
> +	 * function global entry point into r12, ctr.
> +	 */
> +	entry->jump[klp_stub_idx++] = (PPC_INST_ADDIS | ___PPC_RT(11) |
> +					___PPC_RA(2) | PPC_HA(reladdr));
> +
> +	entry->jump[klp_stub_idx++] = (PPC_INST_ADDI | ___PPC_RT(11) |
> +					 ___PPC_RA(11) | PPC_LO(reladdr));
> +
> +	entry->jump[klp_stub_idx++] = (PPC_INST_LD | ___PPC_RT(12) |
> +					 ___PPC_RA(11) | FUNC_DESC_OFFSET);
> +
> +	entry->jump[klp_stub_idx++] = PPC_INST_MTCTR | ___PPC_RT(12);
> +
> +	/* Copy second half of livepatch_handler starting klp_stub_insn_end */
> +	memcpy(entry->jump + klp_stub_idx, klp_stub_insn_end,
> +				sizeof(u32) * klp_stub_idx_end);
> +
> +	entry->funcdata = func_desc(addr);
> +	entry->magic = STUB_MAGIC;
> +	return 1;
> +}
> +#endif
> +
>  /* Create stub to jump to function described in this OPD/ptr: we need the
>     stub to set up the TOC ptr (r2) for the function. */
>  static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
> @@ -441,6 +535,39 @@ static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
>  	return (unsigned long)&stubs[i];
>  }
> 
> +#ifdef CONFIG_LIVEPATCH
> +static unsigned long klp_stub_for_addr(const Elf64_Shdr *sechdrs,
> +				       unsigned long addr,
> +				       struct module *me)
> +{
> +	struct ppc64le_klp_stub_entry *klp_stubs;
> +	unsigned int num_klp_stubs = me->arch.klp_relocs;
> +	unsigned int i, num_stubs;
> +
> +	num_stubs = (sechdrs[me->arch.stubs_section].sh_size -
> +		    (num_klp_stubs * sizeof(*klp_stubs))) /
> +				sizeof(struct ppc64_stub_entry);

(2) This can be simplified if we have me->arch.sec_relocs.

> +
> +	/*
> +	 * Create livepatch stubs after the regular stubs.
> +	 */
> +	klp_stubs = (void *)sechdrs[me->arch.stubs_section].sh_addr +
> +		    (num_stubs * sizeof(struct ppc64_stub_entry));
> +	for (i = 0; stub_func_addr(klp_stubs[i].funcdata); i++) {
		    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
		    (1) This needs us to allocate one additional stub.


- Naveen


> +		if (WARN_ON(i >= num_klp_stubs))
> +			return 0;
> +
> +		if (stub_func_addr(klp_stubs[i].funcdata) == func_addr(addr))
> +			return (unsigned long)&klp_stubs[i];
> +	}
> +
> +	if (!create_klp_stub(sechdrs, &klp_stubs[i], addr, me))
> +		return 0;
> +
> +	return (unsigned long)&klp_stubs[i];
> +}
> +#endif
> +
>  #ifdef CC_USING_MPROFILE_KERNEL
>  static bool is_early_mcount_callsite(u32 *instruction)
>  {
> @@ -622,6 +749,12 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
>  					return -ENOEXEC;
> 
>  				squash_toc_save_inst(strtab + sym->st_name, value);
> +#ifdef CONFIG_LIVEPATCH
> +			} else if (sym->st_shndx == SHN_LIVEPATCH) {
> +				value = klp_stub_for_addr(sechdrs, value, me);
> +				if (!value)
> +					return -ENOENT;
> +#endif
>  			} else
>  				value += local_entry_offset(sym);
> 
> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index b4e2b71..4ef9329 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -183,7 +183,10 @@ _GLOBAL(ftrace_stub)
>  	 *  - CTR holds the new NIP in C
>  	 *  - r0, r11 & r12 are free
>  	 */
> +
> +	.global livepatch_handler
>  livepatch_handler:
> +
>  	CURRENT_THREAD_INFO(r12, r1)
> 
>  	/* Allocate 3 x 8 bytes */
> @@ -201,8 +204,33 @@ livepatch_handler:
>  	ori     r12, r12, STACK_END_MAGIC@l
>  	std	r12, -8(r11)
> 
> +	/*
> +	 * klp_stub_insn/klp_stub_insn_end mark the beginning/end of the
> +	 * additional instructions, which get patched by create_klp_stub()
> +	 * for livepatch symbol relocation stub. The instructions are:
> +	 *
> +	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
> +	 * identifies the available free stub slot and loads the address into
> +	 * r11 with two instructions.
> +	 *
> +	 * addis r11, r2, stub_address@ha
> +	 * addi  r11, r11, stub_address@l
> +	 *
> +	 * Load global entry into r12 from entry->funcdata offset
> +	 * ld    r12, FUNC_DESC_OFFSET(r11)
> +	 *
> +	 * Put r12 into ctr and branch there
> +	 * mtctr r12
> +	 */
> +
> +	.global klp_stub_insn
> +klp_stub_insn:
> +
>  	/* Put ctr in r12 for global entry and branch there */
>  	mfctr	r12
> +
> +	.global klp_stub_insn_end
> +klp_stub_insn_end:
>  	bctrl
> 
>  	/*
> @@ -234,6 +262,9 @@ livepatch_handler:
> 
>  	/* Return to original caller of live patched function */
>  	blr
> +
> +	.global livepatch_handler_end
> +livepatch_handler_end:
>  #endif /* CONFIG_LIVEPATCH */
> 
>  #endif /* CONFIG_DYNAMIC_FTRACE */
> -- 
> 2.7.4
> 


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 14:19 ` Naveen N . Rao
@ 2017-10-31 15:30   ` Torsten Duwe
  2017-10-31 16:23     ` Naveen N . Rao
  2017-11-02  5:49   ` Kamalesh Babulal
  1 sibling, 1 reply; 20+ messages in thread
From: Torsten Duwe @ 2017-10-31 15:30 UTC (permalink / raw)
  To: Naveen N . Rao
  Cc: Kamalesh Babulal, Michael Ellerman, Balbir Singh, Josh Poimboeuf,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On Tue, Oct 31, 2017 at 07:49:59PM +0530, Naveen N . Rao wrote:
> Hi Kamalesh,
> Sorry for the late review. Overall, the patch looks good to me.

If you're good with a hammer...

Maybe I failed to express my views properly; I find the whole approach
misguided in the first place. I had asked questions previously because it
would never have come to my mind that someone would deliberately, without
a compelling reason, create broken live patch modules and then try
to fix them up with some ugly band-aid in the kernel. So for the record:

NAK'd-by: Torsten Duwe <duwe@suse.de>

	Torsten


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 15:30   ` Torsten Duwe
@ 2017-10-31 16:23     ` Naveen N . Rao
  2017-10-31 18:39       ` Torsten Duwe
  0 siblings, 1 reply; 20+ messages in thread
From: Naveen N . Rao @ 2017-10-31 16:23 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Kamalesh Babulal, Michael Ellerman, Balbir Singh, Josh Poimboeuf,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On 2017/10/31 03:30PM, Torsten Duwe wrote:
> On Tue, Oct 31, 2017 at 07:49:59PM +0530, Naveen N . Rao wrote:
> > Hi Kamalesh,
> > Sorry for the late review. Overall, the patch looks good to me.
> 
> If you're good with a hammer...
> 
> Maybe I failed to express my views properly; I find the whole approach
> misguided in the first place. I had asked questions previously because it
> would never have come to my mind that someone would deliberately, without
> a compelling reason, create broken live patch modules and then try
> to fix them up with some ugly band-aid in the kernel. So for the record:
> 
> NAK'd-by: Torsten Duwe <duwe@suse.de>

Hmm... that wasn't evident at all given Balbir's response to your 
previous concerns and your lack of response for the same:
https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html


- Naveen


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 16:23     ` Naveen N . Rao
@ 2017-10-31 18:39       ` Torsten Duwe
  2017-11-01  0:23         ` Balbir Singh
  2017-11-07  4:54         ` Josh Poimboeuf
  0 siblings, 2 replies; 20+ messages in thread
From: Torsten Duwe @ 2017-10-31 18:39 UTC (permalink / raw)
  To: Naveen N . Rao
  Cc: Kamalesh Babulal, Michael Ellerman, Balbir Singh, Josh Poimboeuf,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On Tue, Oct 31, 2017 at 09:53:16PM +0530, Naveen N . Rao wrote:
> On 2017/10/31 03:30PM, Torsten Duwe wrote:
> > 
> > Maybe I failed to express my views properly; I find the whole approach
[...]
> > NAK'd-by: Torsten Duwe <duwe@suse.de>
> 
> Hmm... that wasn't evident at all given Balbir's response to your 
> previous concerns and your lack of response for the same:
> https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html

To me it was obvious that the root cause was kpatch's current inability to
deal with ppc calling conventions when copying binary functions. Hence my
hint at the discussion about a possible source-level solution that would
work nicely for all architectures.

	Torsten


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 18:39       ` Torsten Duwe
@ 2017-11-01  0:23         ` Balbir Singh
  2017-11-07  4:54         ` Josh Poimboeuf
  1 sibling, 0 replies; 20+ messages in thread
From: Balbir Singh @ 2017-11-01  0:23 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Naveen N . Rao, Kamalesh Babulal, Michael Ellerman,
	Josh Poimboeuf, Jessica Yu, Ananth N Mavinakayanahalli,
	Aravinda Prasad, open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	live-patching

On Wed, Nov 1, 2017 at 5:39 AM, Torsten Duwe <duwe@lst.de> wrote:
> On Tue, Oct 31, 2017 at 09:53:16PM +0530, Naveen N . Rao wrote:
>> On 2017/10/31 03:30PM, Torsten Duwe wrote:
>> >
>> > Maybe I failed to express my views properly; I find the whole approach
> [...]
>> > NAK'd-by: Torsten Duwe <duwe@suse.de>
>>
>> Hmm... that wasn't evident at all given Balbir's response to your
>> previous concerns and your lack of response for the same:
>> https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html
>
> To me it was obvious that the root cause was kpatch's current inability to
> deal with ppc calling conventions when copying binary functions. Hence my
> hint at the discussion about a possible source-level solution that would
> work nicely for all architectures.

Alternatives are good, but at this point kpatch is important and should
be supported. Source-level alternatives are not controlled by us, but by
distros and tooling. I don't think the NAK helps; it only states that
kpatch should not be enabled for ppc64 as it needs a new stub. I know
SuSE does not use kpatch, but we would want to be able to support tools
across distros.

When we get a source-level patch tool going, I'd love to have it and use
it by default, and we can revisit the usefulness of this stub at that
point and deprecate it if required.

Balbir Singh.


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 14:19 ` Naveen N . Rao
  2017-10-31 15:30   ` Torsten Duwe
@ 2017-11-02  5:49   ` Kamalesh Babulal
  1 sibling, 0 replies; 20+ messages in thread
From: Kamalesh Babulal @ 2017-11-02  5:49 UTC (permalink / raw)
  To: Naveen N . Rao
  Cc: Michael Ellerman, Balbir Singh, Josh Poimboeuf, Jessica Yu,
	Ananth N Mavinakayanahalli, Aravinda Prasad, Torsten Duwe,
	linuxppc-dev, live-patching

On Tuesday 31 October 2017 07:49 PM, Naveen N . Rao wrote:
> Hi Kamalesh,
> Sorry for the late review. Overall, the patch looks good to me. So:
> Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
>
> However, I have a few minor comments which can be addressed in a
> subsequent patch.
>

Thanks for the review.

[...]

>> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
>> index 0b0f896..005aaea 100644
>> --- a/arch/powerpc/kernel/module_64.c
>> +++ b/arch/powerpc/kernel/module_64.c

[...]

>> @@ -239,10 +257,19 @@ static void relaswap(void *_x, void *_y, int size)
>>
>>  /* Get size of potential trampolines required. */
>>  static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
>> -				    const Elf64_Shdr *sechdrs)
>> +				    const Elf64_Shdr *sechdrs,
>> +				    struct module *me)
>>  {
>>  	/* One extra reloc so it's always 0-funcaddr terminated */
>>  	unsigned long relocs = 1;
>
> You might want to get rid of 'relocs' above and simply use the below
> two.
>
>> +	/*
>> +	 * size of livepatch stub is 28 instructions, whereas the
>> +	 * non-livepatch stub requires 7 instructions. Account for
>> +	 * different stub sizes and track the livepatch relocation
>> +	 * count in me->arch.klp_relocs.
>> +	 */
>> +	unsigned long sec_relocs = 0;
>> +	unsigned long klp_relocs = 0;
>
> You should also initialize this to 1 (similar to relocs above) for use
> in the iterators below. Or, update the iterators to use
> me->arch.klp_relocs (1)

relocs is Sum(sec_relocs), whereas the per-section relocation count is
assigned to sec_relocs. If the section is a livepatch section, the count
is added to klp_relocs, otherwise to relocs. sec_relocs acts as a
temporary variable holding the current section's relocation count.
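The accounting described here can be modelled with a small standalone sketch (hypothetical code for illustration, not the kernel function itself; the struct sizes merely stand in for the real 7-instruction and 28-instruction stubs):

```c
#include <stddef.h>

/* Stand-ins sized like the real stubs: 7 and 28 instructions. */
struct ppc64_stub_entry       { unsigned int insns[7];  };
struct ppc64le_klp_stub_entry { unsigned int insns[28]; };

/*
 * Model of the counting in get_stubs_size(): sec_relocs holds each
 * section's count, relocs accumulates the grand total, and klp_relocs
 * the livepatch subset, which is then carved out and charged at the
 * larger stub size.
 */
static unsigned long stubs_size(const unsigned long *sec_counts,
				const int *is_klp, int nsec,
				unsigned long *klp_out)
{
	unsigned long relocs = 1;	/* extra stub: 0-funcaddr terminator */
	unsigned long klp_relocs = 0;

	for (int i = 0; i < nsec; i++) {
		unsigned long sec_relocs = sec_counts[i];

		relocs += sec_relocs;
		if (is_klp[i])
			klp_relocs += sec_relocs;
	}

	relocs -= klp_relocs;
	*klp_out = klp_relocs;
	return relocs * sizeof(struct ppc64_stub_entry) +
	       klp_relocs * sizeof(struct ppc64le_klp_stub_entry);
}
```

For two sections with 3 regular and 2 livepatch relocations, this gives 4 regular stubs (terminator included) plus 2 of the larger klp stubs.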

>
>>  	unsigned i;
>>
>>  	/* Every relocated section... */
>> @@ -262,9 +289,14 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
>>  			     sechdrs[i].sh_size / sizeof(Elf64_Rela),
>>  			     sizeof(Elf64_Rela), relacmp, relaswap);
>>
>> -			relocs += count_relocs((void *)sechdrs[i].sh_addr,
>> +			sec_relocs = count_relocs((void *)sechdrs[i].sh_addr,
>>  					       sechdrs[i].sh_size
>>  					       / sizeof(Elf64_Rela));
>
> How about also capturing 'sec_relocs' in 'struct mod_arch_specific'? (2)
> That will help simplify a lot of the calculations here and elsewhere.
> Note that there are many other places where the number of stubs is
> derived looking at 'sh_size', which is incorrect since we now have klp
> stubs as well (create_ftrace_stub() for instance).
>
>> +			relocs += sec_relocs;
>> +#ifdef CONFIG_LIVEPATCH
>> +			if (sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH)
>> +				klp_relocs += sec_relocs;
>> +#endif
>
> If a module has SHF_RELA_LIVEPATCH, but the kernel is compiled without
> CONFIG_LIVEPATCH, should we refuse to load the kernel module?

Yes, the module load will fail.

>
> You might want to consider removing the above #ifdef and processing some
> of these flags regardless of CONFIG_LIVEPATCH.

The ifdef guard can be replaced with a check for SHF_RELA_LIVEPATCH in 
the section flags.

>
>>  		}
>>  	}
>>
>> @@ -273,6 +305,15 @@ static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
>>  	relocs++;
>>  #endif
>>
>> +	relocs -= klp_relocs;
>> +#ifdef CONFIG_LIVEPATCH
>> +	me->arch.klp_relocs = klp_relocs;
>> +
>> +	pr_debug("Looks like a total of %lu stubs, (%lu) livepatch stubs, max\n",
> 						   ^^^^^
> 						   (%lu livepatch stubs)
>
> Just wondering why the brackets are the way they are...

It reads better with the brackets as you suggested.


>
>> +				relocs, klp_relocs);
>> +	return (relocs * sizeof(struct ppc64_stub_entry) +
>> +		klp_relocs * sizeof(struct ppc64le_klp_stub_entry));
>> +#endif
>>  	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
>>  	return relocs * sizeof(struct ppc64_stub_entry);
>>  }

[...]

>>
>> +#ifdef CONFIG_LIVEPATCH
>> +static unsigned long klp_stub_for_addr(const Elf64_Shdr *sechdrs,
>> +				       unsigned long addr,
>> +				       struct module *me)
>> +{
>> +	struct ppc64le_klp_stub_entry *klp_stubs;
>> +	unsigned int num_klp_stubs = me->arch.klp_relocs;
>> +	unsigned int i, num_stubs;
>> +
>> +	num_stubs = (sechdrs[me->arch.stubs_section].sh_size -
>> +		    (num_klp_stubs * sizeof(*klp_stubs))) /
>> +				sizeof(struct ppc64_stub_entry);
>
> (2) This can be simplified if we have me->arch.sec_relocs.

Having it will make the calculation simpler:
num_stubs = me->arch.sec_relocs;


>
>> +
>> +	/*
>> +	 * Create livepatch stubs after the regular stubs.
>> +	 */
>> +	klp_stubs = (void *)sechdrs[me->arch.stubs_section].sh_addr +
>> +		    (num_stubs * sizeof(struct ppc64_stub_entry));
>> +	for (i = 0; stub_func_addr(klp_stubs[i].funcdata); i++) {
> 		    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 		    (1) This needs us to allocate one additional stub.
>
>

Good catch, yes there should be an extra stub.

-- 
cheers,
Kamalesh.


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-10-31 18:39       ` Torsten Duwe
  2017-11-01  0:23         ` Balbir Singh
@ 2017-11-07  4:54         ` Josh Poimboeuf
  2017-11-07  8:34           ` Michael Ellerman
  1 sibling, 1 reply; 20+ messages in thread
From: Josh Poimboeuf @ 2017-11-07  4:54 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Naveen N . Rao, Kamalesh Babulal, Michael Ellerman, Balbir Singh,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On Tue, Oct 31, 2017 at 07:39:59PM +0100, Torsten Duwe wrote:
> On Tue, Oct 31, 2017 at 09:53:16PM +0530, Naveen N . Rao wrote:
> > On 2017/10/31 03:30PM, Torsten Duwe wrote:
> > > 
> > > Maybe I failed to express my views properly; I find the whole approach
> [...]
> > > NAK'd-by: Torsten Duwe <duwe@suse.de>
> > 
> > Hmm... that wasn't evident at all given Balbir's response to your 
> > previous concerns and your lack of response for the same:
> > https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html
> 
> To me it was obvious that the root cause was kpatch's current inability to
> deal with ppc calling conventions when copying binary functions. Hence my
> hint at the discussion about a possible source-level solution that would
> work nicely for all architectures.

That other discussion isn't relevant.  Even if we do eventually decide
to go with a source-based approach, that's still a long ways off.

For the foreseeable future, kpatch-build is the only available safe way
to create live patches.  We need to figure out a way to make it work,
one way or another.

If I understand correctly, the main problem here is that a call to a
previously-local-but-now-global function is missing a needed nop
instruction after the call, which is needed for restoring r2 (the TOC
pointer).
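In instruction terms (a sketch of the constraint only, not code from the patch; the encodings used are the standard ppc64 ones): a global call is emitted as `bl target; nop`, and when the call goes through a stub the linker rewrites the nop to `ld r2,24(r1)`, whereas a local call leaves no such slot:

```c
#include <stdint.h>

#define PPC_NOP        0x60000000u  /* ori r0,r0,0 */
#define PPC_LD_R2_24R1 0xE8410018u  /* ld r2,24(r1): reload the TOC pointer */

/*
 * Model of the linker-side fixup: a call that may be redirected through
 * a stub needs a nop in the slot after the bl, which gets rewritten to
 * the TOC restore.  If the call was compiled as local, that slot holds
 * a real instruction and there is nowhere to restore r2.
 */
static int fixup_toc_restore(uint32_t *slot_after_bl)
{
	if (*slot_after_bl != PPC_NOP)
		return -1;	/* no slot: the local-call case */
	*slot_after_bl = PPC_LD_R2_24R1;
	return 0;
}
```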

So, just brainstorming a bit, here are the possible solutions I can
think of:

a) Create a special klp stub for such calls (as in Kamalesh's patch)

b) Have kpatch-build rewrite the function to insert nops after calls to
   previously-local functions.  This would also involve adjusting the
   offsets of intra-function branches and relocations which come
   afterwards in the same section.  And also patching up the DWARF
   debuginfo, if we care about that (I think we do).  And also patching
   up the jump tables which GCC sometimes creates for switch statements.
   Yuck.  I'm pretty sure this is a horrible idea.

c) Create a new GCC flag which treats all calls as global, which can be
   used by kpatch-build to generate the right code (assuming this flag
   doesn't already exist).  This would be a good option, I think.

d) Have kpatch-build do some other kind of transformation?  For example,
   maybe it could generate klp stubs which the callee calls into.  Each
   klp stub could then do a proper global call to the SHN_LIVEPATCH
   symbol.

Do I understand the problem correctly?  Do the potential solutions make
sense?  Any other possible solutions I missed?

-- 
Josh


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-07  4:54         ` Josh Poimboeuf
@ 2017-11-07  8:34           ` Michael Ellerman
  2017-11-07 11:31             ` Torsten Duwe
  0 siblings, 1 reply; 20+ messages in thread
From: Michael Ellerman @ 2017-11-07  8:34 UTC (permalink / raw)
  To: Josh Poimboeuf, Torsten Duwe
  Cc: Naveen N . Rao, Kamalesh Babulal, Balbir Singh, Jessica Yu,
	Ananth N Mavinakayanahalli, Aravinda Prasad, linuxppc-dev,
	live-patching

Josh Poimboeuf <jpoimboe@redhat.com> writes:

> On Tue, Oct 31, 2017 at 07:39:59PM +0100, Torsten Duwe wrote:
>> On Tue, Oct 31, 2017 at 09:53:16PM +0530, Naveen N . Rao wrote:
>> > On 2017/10/31 03:30PM, Torsten Duwe wrote:
>> > > 
>> > > Maybe I failed to express my views properly; I find the whole approach
>> [...]
>> > > NAK'd-by: Torsten Duwe <duwe@suse.de>
>> > 
>> > Hmm... that wasn't evident at all given Balbir's response to your 
>> > previous concerns and your lack of response for the same:
>> > https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html
>> 
>> To me it was obvious that the root cause was kpatch's current inability to
>> deal with ppc calling conventions when copying binary functions. Hence my
>> hint at the discussion about a possible source-level solution that would
>> work nicely for all architectures.
>
> That other discussion isn't relevant.  Even if we do eventually decide
> to go with a source-based approach, that's still a long ways off.

OK, that was my thinking, but good to have it confirmed.

> For the foreseeable future, kpatch-build is the only available safe way
> to create live patches.  We need to figure out a way to make it work,
> one way or another.
>
> If I understand correctly, the main problem here is that a call to a
> previously-local-but-now-global function is missing a needed nop
> instruction after the call, which is needed for restoring r2 (the TOC
> pointer).

Yes, that's the root of the problem.

> So, just brainstorming a bit, here are the possible solutions I can
> think of:
>
> a) Create a special klp stub for such calls (as in Kamalesh's patch)
>
> b) Have kpatch-build rewrite the function to insert nops after calls to
>    previously-local functions.  This would also involve adjusting the
>    offsets of intra-function branches and relocations which come
>    afterwards in the same section.  And also patching up the DWARF
>    debuginfo, if we care about that (I think we do).  And also patching
>    up the jump tables which GCC sometimes creates for switch statements.
>    Yuck.  I'm pretty sure this is a horrible idea.

It's fairly horrible. It might be *less* horrible if you generated an
assembly listing using the compiler, modified that, and then fed that
through the assembler and linker.

> c) Create a new GCC flag which treats all calls as global, which can be
>    used by kpatch-build to generate the right code (assuming this flag
>    doesn't already exist).  This would be a good option, I think.

That's not impossible, but I doubt it will be all that popular with the
toolchain folks who'd have to implement it :)  It will also take a long
time to percolate out to users.

> d) Have kpatch-build do some other kind of transformation?  For example,
>    maybe it could generate klp stubs which the callee calls into.  Each
>    klp stub could then do a proper global call to the SHN_LIVEPATCH
>    symbol.

That could work.

> Do I understand the problem correctly?  Do the potential solutions make
> sense?  Any other possible solutions I missed?

Possibly, I'm not really across how kpatch works in detail and what the
constraints are.

One option would be to detect any local calls made by the patched
function and pull those in as well - ie. make them part of the patch.
The pathological case could obviously end up pulling in every function
in the kernel, but in practice that's probably unlikely. It already
happens to some extent anyway via inlining.

If modifying the source is an option, a sneaky solution is to mark the
local functions as weak, which means the compiler/linker has to assume
they could become global calls.
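The weak trick can be sketched in a few lines of C (a hypothetical illustration, not part of the patch; `helper` and `use_helper` are made-up names):

```c
/*
 * Weak definition: a strong definition in another object file would
 * override it, so the compiler cannot assume the call below stays
 * local and must emit a full global call (bl plus the TOC-restore
 * nop slot on ppc64le ELFv2).
 */
__attribute__((weak)) int helper(void)
{
	return 42;
}

int use_helper(void)
{
	/* Gets the global-call sequence even though a definition of
	 * helper() is visible right here. */
	return helper();
}
```

With no strong override linked in, `use_helper()` simply returns 42 from the weak definition.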

cheers


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-07  8:34           ` Michael Ellerman
@ 2017-11-07 11:31             ` Torsten Duwe
  2017-11-07 14:52               ` Josh Poimboeuf
       [not found]               ` <20171107145233.lmlg5lkfcczkxj4d__32032.512050546$1510066460$gmane$org@treble>
  0 siblings, 2 replies; 20+ messages in thread
From: Torsten Duwe @ 2017-11-07 11:31 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Josh Poimboeuf, Naveen N . Rao, Kamalesh Babulal, Balbir Singh,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On Tue, Nov 07, 2017 at 07:34:29PM +1100, Michael Ellerman wrote:
> Josh Poimboeuf <jpoimboe@redhat.com> writes:
> 
> > On Tue, Oct 31, 2017 at 07:39:59PM +0100, Torsten Duwe wrote:
> >> On Tue, Oct 31, 2017 at 09:53:16PM +0530, Naveen N . Rao wrote:
> >> > On 2017/10/31 03:30PM, Torsten Duwe wrote:
> >> > > 
> >> > > Maybe I failed to express my views properly; I find the whole approach
> >> [...]
> >> > > NAK'd-by: Torsten Duwe <duwe@suse.de>
> >> > 
> >> > Hmm... that wasn't evident at all given Balbir's response to your 
> >> > previous concerns and your lack of response for the same:
> >> > https://www.mail-archive.com/linuxppc-dev@lists.ozlabs.org/msg125350.html
> >> 
> >> To me it was obvious that the root cause was kpatch's current inability to
> >> deal with ppc calling conventions when copying binary functions. Hence my
> >> hint at the discussion about a possible source-level solution that would
> >> work nicely for all architectures.
> >
> > That other discussion isn't relevant.  Even if we do eventually decide
> > to go with a source-based approach, that's still a long ways off.
> 
> OK, that was my thinking, but good to have it confirmed.

It depends. We can write and compile live patching modules right away. From
my point of view it's a matter of progressively automating this task.

> > For the foreseeable future, kpatch-build is the only available safe way
> > to create live patches.  We need to figure out a way to make it work,
> > one way or another.

As stated, I disagree here, but let's leave that aside, and stick to
your ( :-) problem.

> > If I understand correctly, the main problem here is that a call to a
> > previously-local-but-now-global function is missing a needed nop
> > instruction after the call, which is needed for restoring r2 (the TOC
> > pointer).
> 
> Yes, that's the root of the problem.
Yes.

> > So, just brainstorming a bit, here are the possible solutions I can
> > think of:
> >
> > a) Create a special klp stub for such calls (as in Kamalesh's patch)
> >
> > b) Have kpatch-build rewrite the function to insert nops after calls to
> >    previously-local functions.  This would also involve adjusting the
> >    offsets of intra-function branches and relocations which come
> >    afterwards in the same section.  And also patching up the DWARF
> >    debuginfo, if we care about that (I think we do).  And also patching
> >    up the jump tables which GCC sometimes creates for switch statements.
> >    Yuck.  I'm pretty sure this is a horrible idea.
> 
> It's fairly horrible. It might be *less* horrible if you generated an
> assembly listing using the compiler, modified that, and then fed that
> through the assembler and linker.
> 
> > c) Create a new GCC flag which treats all calls as global, which can be
> >    used by kpatch-build to generate the right code (assuming this flag
> >    doesn't already exist).  This would be a good option, I think.
> 
> That's not impossible, but I doubt it will be all that popular with the
> toolchain folks who'd have to implement it :)  It will also take a long
> time to percolate out to users.

BTDT ;-)

> > d) Have kpatch-build do some other kind of transformation?  For example,
> >    maybe it could generate klp stubs which the callee calls into.  Each
> >    klp stub could then do a proper global call to the SHN_LIVEPATCH
> >    symbol.
> 
> That could work.
Indeed. There is no reason to offload that onto the kernel module loader.

> > Do I understand the problem correctly?  Do the potential solutions make
> > sense?  Any other possible solutions I missed?
> 
> Possibly, I'm not really across how kpatch works in detail and what the
> constraints are.
> 
> One option would be to detect any local calls made by the patched
> function and pull those in as well - ie. make them part of the patch.
> The pathological case could obviously end up pulling in every function
> in the kernel, but in practice that's probably unlikely. It already
> happens to some extent anyway via inlining.
> 
> If modifying the source is an option, a sneaky solution is to mark the
> local functions as weak, which means the compiler/linker has to assume
> they could become global calls.

This might also be doable with a gcc "plugin", which would not require changes
to the compiler itself. OTOH there's no such thing as a weak static function...

	Torsten


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-07 11:31             ` Torsten Duwe
@ 2017-11-07 14:52               ` Josh Poimboeuf
  2017-11-09 10:55                 ` Michael Ellerman
       [not found]               ` <20171107145233.lmlg5lkfcczkxj4d__32032.512050546$1510066460$gmane$org@treble>
  1 sibling, 1 reply; 20+ messages in thread
From: Josh Poimboeuf @ 2017-11-07 14:52 UTC (permalink / raw)
  To: Torsten Duwe
  Cc: Michael Ellerman, Naveen N . Rao, Kamalesh Babulal, Balbir Singh,
	Jessica Yu, Ananth N Mavinakayanahalli, Aravinda Prasad,
	linuxppc-dev, live-patching

On Tue, Nov 07, 2017 at 12:31:05PM +0100, Torsten Duwe wrote:
> On Tue, Nov 07, 2017 at 07:34:29PM +1100, Michael Ellerman wrote:
> > > So, just brainstorming a bit, here are the possible solutions I can
> > > think of:
> > >
> > > a) Create a special klp stub for such calls (as in Kamalesh's patch)
> > >
> > > b) Have kpatch-build rewrite the function to insert nops after calls to
> > >    previously-local functions.  This would also involve adjusting the
> > >    offsets of intra-function branches and relocations which come
> > >    afterwards in the same section.  And also patching up the DWARF
> > >    debuginfo, if we care about that (I think we do).  And also patching
> > >    up the jump tables which GCC sometimes creates for switch statements.
> > >    Yuck.  I'm pretty sure this is a horrible idea.
> > 
> > It's fairly horrible. It might be *less* horrible if you generated an
> > assembly listing using the compiler, modified that, and then fed that
> > through the assembler and linker.

Ah, that would work a lot better.

> > > c) Create a new GCC flag which treats all calls as global, which can be
> > >    used by kpatch-build to generate the right code (assuming this flag
> > >    doesn't already exist).  This would be a good option, I think.
> > 
> > That's not impossible, but I doubt it will be all that popular with the
> > toolchain folks who'd have to implement it :)  It will also take a long
> > time to percolate out to users.
> 
> BTDT ;-)

Yeah, maybe that's not so realistic.

> > > d) Have kpatch-build do some other kind of transformation?  For example,
> > >    maybe it could generate klp stubs which the callee calls into.  Each
> > >    klp stub could then do a proper global call to the SHN_LIVEPATCH
> > >    symbol.
> > 
> > That could work.
> Indeed. There is no reason to offload that onto the kernel module loader.

I agree that doing the same work in tooling would be better than adding
complexity to the kernel.

> > > Do I understand the problem correctly?  Do the potential solutions make
> > > sense?  Any other possible solutions I missed?
> > 
> > Possibly, I'm not really across how kpatch works in detail and what the
> > constraints are.
> > 
> > One option would be to detect any local calls made by the patched
> > function and pull those in as well - ie. make them part of the patch.
> > The pathological case could obviously end up pulling in every function
> > in the kernel, but in practice that's probably unlikely. It already
> > happens to some extent anyway via inlining.

That's definitely a possibility, but I'd rather not go there because it
increases risk and cognitive load.

> > If modifying the source is an option, a sneaky solution is to mark the
> > local functions as weak, which means the compiler/linker has to assume
> > they could become global calls.

That could work, but it means the patch author has to modify the patch.
I'd rather have tooling hide this problem somehow.

> This might also be doable with a gcc "plugin", which would not require changes
> to the compiler itself.

Hm, that's not a bad idea.

> OTOH there's no such thing as a weak static function...

Yeah.  Instead of weak, maybe just make them global (remove the
"static")?

Anyway, thanks for the ideas.  I may try them out and see what sticks.

-- 
Josh


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-07 14:52               ` Josh Poimboeuf
@ 2017-11-09 10:55                 ` Michael Ellerman
  0 siblings, 0 replies; 20+ messages in thread
From: Michael Ellerman @ 2017-11-09 10:55 UTC (permalink / raw)
  To: Josh Poimboeuf, Torsten Duwe
  Cc: Naveen N . Rao, Kamalesh Babulal, Balbir Singh, Jessica Yu,
	Ananth N Mavinakayanahalli, Aravinda Prasad, linuxppc-dev,
	live-patching

Josh Poimboeuf <jpoimboe@redhat.com> writes:
> On Tue, Nov 07, 2017 at 12:31:05PM +0100, Torsten Duwe wrote:
>> On Tue, Nov 07, 2017 at 07:34:29PM +1100, Michael Ellerman wrote:
>> > > So, just brainstorming a bit, here are the possible solutions I can
>> > > think of:
>> > >
>> > > a) Create a special klp stub for such calls (as in Kamalesh's patch)
>> > >
>> > > b) Have kpatch-build rewrite the function to insert nops after calls to
>> > >    previously-local functions.  This would also involve adjusting the
>> > >    offsets of intra-function branches and relocations which come
>> > >    afterwards in the same section.  And also patching up the DWARF
>> > >    debuginfo, if we care about that (I think we do).  And also patching
>> > >    up the jump tables which GCC sometimes creates for switch statements.
>> > >    Yuck.  I'm pretty sure this is a horrible idea.
>> > 
>> > It's fairly horrible. It might be *less* horrible if you generated an
>> > assembly listing using the compiler, modified that, and then fed that
>> > through the assembler and linker.
>
> Ah, that would work a lot better.

This should be fairly doable, you just need to spot function calls to
local symbols, e.g. "bl bar", and if there is no trailing nop you add one.
The hard part is probably futzing with Makefiles to insert the extra step.
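A rough sketch of that detection on raw instruction words (hypothetical tooling code, not from the patch; in practice, as suggested, one would edit the assembly listing instead):

```c
#include <stdint.h>

#define PPC_NOP 0x60000000u

/* "bl target": primary opcode 18 with AA=0 and LK=1. */
static int is_bl(uint32_t insn)
{
	return (insn & 0xFC000003u) == 0x48000001u;
}

/* Count bl instructions with no trailing nop, i.e. call sites that
 * leave no slot for the linker to place the TOC restore in. */
static int count_missing_nops(const uint32_t *insns, int n)
{
	int missing = 0;

	for (int i = 0; i < n - 1; i++)
		if (is_bl(insns[i]) && insns[i + 1] != PPC_NOP)
			missing++;
	return missing;
}
```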

...
>> This might also be doable with a gcc "plugin", which would not require changes
>> to the compiler itself.
>
> Hm, that's not a bad idea.
>
>> OTOH there's no such thing as a weak static function...
>
> Yeah.  Instead of weak, maybe just make them global (remove the
> "static")?

I don't think that will work. The compiler still knows that the callee
is local to the caller, even if the callee is global.

Making the callee weak means the compiler can't be sure the call will go
there, and so has to do a global call.

> Anyway, thanks for the ideas.  I may try them out and see what sticks.

Great, thanks for looking into it.

cheers


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
       [not found]                 ` <20171107145233.lmlg5lkfcczkxj4d__32032.512050546$1510066460$gmane$org@treble>
@ 2017-11-09 11:19                   ` Naveen N. Rao
  2017-11-09 15:19                     ` Josh Poimboeuf
  0 siblings, 1 reply; 20+ messages in thread
From: Naveen N. Rao @ 2017-11-09 11:19 UTC (permalink / raw)
  To: Torsten Duwe, Josh Poimboeuf
  Cc: Aravinda Prasad, Jessica Yu, Kamalesh Babulal, linuxppc-dev,
	live-patching

Josh Poimboeuf wrote:
> On Tue, Nov 07, 2017 at 12:31:05PM +0100, Torsten Duwe wrote:
>> On Tue, Nov 07, 2017 at 07:34:29PM +1100, Michael Ellerman wrote:
>> > > So, just brainstorming a bit, here are the possible solutions I can
>> > > think of:
>> > >
>> > > a) Create a special klp stub for such calls (as in Kamalesh's patch)
>> > >
>> > > b) Have kpatch-build rewrite the function to insert nops after calls to
>> > >    previously-local functions.  This would also involve adjusting the
>> > >    offsets of intra-function branches and relocations which come
>> > >    afterwards in the same section.  And also patching up the DWARF
>> > >    debuginfo, if we care about that (I think we do).  And also patching
>> > >    up the jump tables which GCC sometimes creates for switch statements.
>> > >    Yuck.  I'm pretty sure this is a horrible idea.
>> > 
>> > It's fairly horrible. It might be *less* horrible if you generated an
>> > assembly listing using the compiler, modified that, and then fed that
>> > through the assembler and linker.
> 
> Ah, that would work a lot better.
> 
>> > > c) Create a new GCC flag which treats all calls as global, which can be
>> > >    used by kpatch-build to generate the right code (assuming this flag
>> > >    doesn't already exist).  This would be a good option, I think.
>> > 
>> > That's not impossible, but I doubt it will be all that popular with the
>> > toolchain folks who'd have to implement it :)  It will also take a long
>> > time to percolate out to users.
>> 
>> BTDT ;-)
> 
> Yeah, maybe that's not so realistic.
> 
>> > > d) Have kpatch-build do some other kind of transformation?  For example,
>> > >    maybe it could generate klp stubs which the callee calls into.  Each
>> > >    klp stub could then do a proper global call to the SHN_LIVEPATCH
>> > >    symbol.
>> > 
>> > That could work.
>> Indeed. There is no reason to offload that onto the kernel module loader.
> 
> I agree that doing the same work in tooling would be better than adding
> complexity to the kernel.

[In case we're considering this as an option...]

While I agree that it's better to do this in the tool in general, for
this particular scenario, I think it is better to do this in the kernel.

From what I understand, the challenge here is the need to save/restore
toc, *without* creating a local stack frame for the stub to use. So, we
will have to use the livepatch stack in one form or the other. I'm not
sure if we can do anything else in a klp stub.

And, if we have to use the livepatch stack, I think it is better to let
the kernel control that so that kpatch doesn't need to change for any
changes in livepatch_handler() for instance. The current patch reuses
livepatch_handler() and makes its usage for kpatch quite explicit.

- Naveen


* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-09 11:19                   ` Naveen N. Rao
@ 2017-11-09 15:19                     ` Josh Poimboeuf
  2017-11-10  2:06                       ` Balbir Singh
  0 siblings, 1 reply; 20+ messages in thread
From: Josh Poimboeuf @ 2017-11-09 15:19 UTC (permalink / raw)
  To: Naveen N. Rao
  Cc: Torsten Duwe, Aravinda Prasad, Jessica Yu, Kamalesh Babulal,
	linuxppc-dev, live-patching

On Thu, Nov 09, 2017 at 04:49:05PM +0530, Naveen N. Rao wrote:
> > > > > d) Have kpatch-build do some other kind of transformation?  For example,
> > > > >    maybe it could generate klp stubs which the callee calls into.  Each
> > > > >    klp stub could then do a proper global call to the SHN_LIVEPATCH
> > > > >    symbol.
> > > > > That could work.
> > > Indeed. There is no reason to load that off onto the kernel module loader.
> > 
> > I agree that doing the same work in tooling would be better than adding
> > complexity to the kernel.
> 
> [In case we're considering this as an option...]
> 
> While I agree that it's better to do this in the tool in general, for this
> particular scenario, I think it is better to do this in the kernel.
> 
> From what I understand, the challenge here is the need to save/restore toc,
> *without* creating a local stack frame for the stub to use. So, we will have
> to use the livepatch stack in one form or the other. I'm not sure if we can
> do anything else in a klp stub.
> 
> And, if we have to use the livepatch stack, I think it is better to let the
> kernel control that so that kpatch doesn't need to change for any changes in
> livepatch_handler() for instance. The current patch reuses
> livepatch_handler() and makes its usage for kpatch quite explicit.

FWIW, I think it won't matter anyway.  I'm currently pursuing the option
of inserting nops after local calls, because it has less runtime
complexity than using a stub.

I think I've figured out a way to do it with a GCC plugin, but if that
doesn't work I'll try the asm listing sed approach suggested by Michael.

-- 
Josh

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-09 15:19                     ` Josh Poimboeuf
@ 2017-11-10  2:06                       ` Balbir Singh
  2017-11-10  3:28                         ` Josh Poimboeuf
  0 siblings, 1 reply; 20+ messages in thread
From: Balbir Singh @ 2017-11-10  2:06 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Naveen N. Rao, Torsten Duwe, Aravinda Prasad, Jessica Yu,
	Kamalesh Babulal, open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	live-patching

On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Thu, Nov 09, 2017 at 04:49:05PM +0530, Naveen N. Rao wrote:
>> > > > > d) Have kpatch-build do some other kind of transformation?  For example,
>> > > > >    maybe it could generate klp stubs which the callee calls into.  Each
>> > > > >    klp stub could then do a proper global call to the SHN_LIVEPATCH
>> > > > >    symbol.
>> > > > > That could work.
>> > > Indeed. There is no reason to load that off onto the kernel module loader.
>> >
>> > I agree that doing the same work in tooling would be better than adding
>> > complexity to the kernel.
>>
>> [In case we're considering this as an option...]
>>
>> While I agree that it's better to do this in the tool in general, for this
>> particular scenario, I think it is better to do this in the kernel.
>>
>> From what I understand, the challenge here is the need to save/restore toc,
>> *without* creating a local stack frame for the stub to use. So, we will have
>> to use the livepatch stack in one form or the other. I'm not sure if we can
>> do anything else in a klp stub.
>>
>> And, if we have to use the livepatch stack, I think it is better to let the
>> kernel control that so that kpatch doesn't need to change for any changes in
>> livepatch_handler() for instance. The current patch reuses
>> livepatch_handler() and makes its usage for kpatch quite explicit.
>
> FWIW, I think it won't matter anyway.  I'm currently pursuing the option
> of inserting nops after local calls, because it has less runtime
> complexity than using a stub.
>
> I think I've figured out a way to do it with a GCC plugin, but if that
> doesn't work I'll try the asm listing sed approach suggested by Michael.
>


A plugin that runs for the new kernel with the patch? Just for
specific files involved in the patch?

Balbir Singh.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-10  2:06                       ` Balbir Singh
@ 2017-11-10  3:28                         ` Josh Poimboeuf
  2017-11-10  9:47                           ` Michael Ellerman
  2017-11-13  8:38                           ` Balbir Singh
  0 siblings, 2 replies; 20+ messages in thread
From: Josh Poimboeuf @ 2017-11-10  3:28 UTC (permalink / raw)
  To: Balbir Singh
  Cc: Naveen N. Rao, Torsten Duwe, Aravinda Prasad, Jessica Yu,
	Kamalesh Babulal, open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	live-patching

On Fri, Nov 10, 2017 at 01:06:25PM +1100, Balbir Singh wrote:
> On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> > FWIW, I think it won't matter anyway.  I'm currently pursuing the option
> > of inserting nops after local calls, because it has less runtime
> > complexity than using a stub.
> >
> > I think I've figured out a way to do it with a GCC plugin, but if that
> > doesn't work I'll try the asm listing sed approach suggested by Michael.
> >
> 
> A plugin that runs for the new kernel with the patch? Just for
> specific files involved in the patch?

The plugin will affect the code generation of all functions in the patch
module.  So all calls in all replacement functions will have the nops.

Here's a prototype (not yet fully tested):

  https://github.com/jpoimboe/kpatch/blob/TODO-ppc-fix/kpatch-build/gcc-plugins/ppc64le-plugin.c

-- 
Josh

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-10  3:28                         ` Josh Poimboeuf
@ 2017-11-10  9:47                           ` Michael Ellerman
  2017-11-13  8:38                           ` Balbir Singh
  1 sibling, 0 replies; 20+ messages in thread
From: Michael Ellerman @ 2017-11-10  9:47 UTC (permalink / raw)
  To: Josh Poimboeuf, Balbir Singh
  Cc: Kamalesh Babulal, live-patching, Torsten Duwe, Jessica Yu,
	Aravinda Prasad, Naveen N. Rao,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

Josh Poimboeuf <jpoimboe@redhat.com> writes:

> On Fri, Nov 10, 2017 at 01:06:25PM +1100, Balbir Singh wrote:
>> On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>> > FWIW, I think it won't matter anyway.  I'm currently pursuing the option
>> > of inserting nops after local calls, because it has less runtime
>> > complexity than using a stub.
>> >
>> > I think I've figured out a way to do it with a GCC plugin, but if that
>> > doesn't work I'll try the asm listing sed approach suggested by Michael.
>> >
>> 
>> A plugin that runs for the new kernel with the patch? Just for
>> specific files involved in the patch?
>
> The plugin will affect the code generation of all functions in the patch
> module.  So all calls in all replacement functions will have the nops.
>
> Here's a prototype (not yet fully tested):
>
>   https://github.com/jpoimboe/kpatch/blob/TODO-ppc-fix/kpatch-build/gcc-plugins/ppc64le-plugin.c

Nice.

cheers

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-10  3:28                         ` Josh Poimboeuf
  2017-11-10  9:47                           ` Michael Ellerman
@ 2017-11-13  8:38                           ` Balbir Singh
  2017-11-13 11:38                             ` Kamalesh Babulal
  1 sibling, 1 reply; 20+ messages in thread
From: Balbir Singh @ 2017-11-13  8:38 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Naveen N. Rao, Torsten Duwe, Aravinda Prasad, Jessica Yu,
	Kamalesh Babulal, open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	live-patching

On Fri, Nov 10, 2017 at 2:28 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
> On Fri, Nov 10, 2017 at 01:06:25PM +1100, Balbir Singh wrote:
>> On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>> > FWIW, I think it won't matter anyway.  I'm currently pursuing the option
>> > of inserting nops after local calls, because it has less runtime
>> > complexity than using a stub.
>> >
>> > I think I've figured out a way to do it with a GCC plugin, but if that
>> > doesn't work I'll try the asm listing sed approach suggested by Michael.
>> >
>>
>> A plugin that runs for the new kernel with the patch? Just for
>> specific files involved in the patch?
>
> The plugin will affect the code generation of all functions in the patch
> module.  So all calls in all replacement functions will have the nops.
>
> Here's a prototype (not yet fully tested):
>
>   https://github.com/jpoimboe/kpatch/blob/TODO-ppc-fix/kpatch-build/gcc-plugins/ppc64le-plugin.c
>


Thanks! I looked at it briefly, I need to catch up with gcc plugins,
code < 1000 was interesting. I guess now we don't need to pass via the
new kpatch helper stub, we would just replace the no-ops and be done!
I'd have to look closer, but I wonder what happens if a function has no
global entry point.

(See http://mpe.github.io/posts/2016/05/23/kernel-live-patching-for-ppc64le/,
section "Understanding the TOC & entry points", and the example of
int_to_scsilun().)

Balbir Singh.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-13  8:38                           ` Balbir Singh
@ 2017-11-13 11:38                             ` Kamalesh Babulal
  2017-11-15 10:19                               ` Michael Ellerman
  0 siblings, 1 reply; 20+ messages in thread
From: Kamalesh Babulal @ 2017-11-13 11:38 UTC (permalink / raw)
  To: Balbir Singh, Josh Poimboeuf
  Cc: Naveen N. Rao, Torsten Duwe, Aravinda Prasad, Jessica Yu,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	live-patching

On Monday 13 November 2017 02:08 PM, Balbir Singh wrote:
> On Fri, Nov 10, 2017 at 2:28 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>> On Fri, Nov 10, 2017 at 01:06:25PM +1100, Balbir Singh wrote:
>>> On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>>>> FWIW, I think it won't matter anyway.  I'm currently pursuing the option
>>>> of inserting nops after local calls, because it has less runtime
>>>> complexity than using a stub.
>>>>
>>>> I think I've figured out a way to do it with a GCC plugin, but if that
>>>> doesn't work I'll try the asm listing sed approach suggested by Michael.
>>>>
>>>
>>> A plugin that runs for the new kernel with the patch? Just for
>>> specific files involved in the patch?
>>
>> The plugin will affect the code generation of all functions in the patch
>> module.  So all calls in all replacement functions will have the nops.
>>
>> Here's a prototype (not yet fully tested):
>>
>>   https://github.com/jpoimboe/kpatch/blob/TODO-ppc-fix/kpatch-build/gcc-plugins/ppc64le-plugin.c
>>
>

[...]

> I'd have to look closer, but I wonder what happens if a function has no
> global entry point.
>
> (See http://mpe.github.io/posts/2016/05/23/kernel-live-patching-for-ppc64le/,
> section "Understanding the TOC & entry points", and the example of
> int_to_scsilun().)
>

int_to_scsilun() is entered via its global entry point, but the TOC setup
part in the function prologue is skipped, as the function has no need to
save/restore the TOC. The reason being, it doesn't call any global
functions, so the r2 register is never clobbered.

As per my understanding, the plug-in inserts a nop instruction after a
branch to a function called via its local entry point, i.e. a local
call. In the case of int_to_scsilun(), the plug-in will not modify the
function at all, since int_to_scsilun() makes no local calls either.

-- 
cheers,
Kamalesh.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols
  2017-11-13 11:38                             ` Kamalesh Babulal
@ 2017-11-15 10:19                               ` Michael Ellerman
  0 siblings, 0 replies; 20+ messages in thread
From: Michael Ellerman @ 2017-11-15 10:19 UTC (permalink / raw)
  To: Kamalesh Babulal, Balbir Singh, Josh Poimboeuf
  Cc: Jessica Yu, Torsten Duwe, Aravinda Prasad, live-patching,
	Naveen N. Rao, open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> writes:

> On Monday 13 November 2017 02:08 PM, Balbir Singh wrote:
>> On Fri, Nov 10, 2017 at 2:28 PM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>>> On Fri, Nov 10, 2017 at 01:06:25PM +1100, Balbir Singh wrote:
>>>> On Fri, Nov 10, 2017 at 2:19 AM, Josh Poimboeuf <jpoimboe@redhat.com> wrote:
>>>>> FWIW, I think it won't matter anyway.  I'm currently pursuing the option
>>>>> of inserting nops after local calls, because it has less runtime
>>>>> complexity than using a stub.
>>>>>
>>>>> I think I've figured out a way to do it with a GCC plugin, but if that
>>>>> doesn't work I'll try the asm listing sed approach suggested by Michael.
>>>>>
>>>>
>>>> A plugin that runs for the new kernel with the patch? Just for
>>>> specific files involved in the patch?
>>>
>>> The plugin will affect the code generation of all functions in the patch
>>> module.  So all calls in all replacement functions will have the nops.
>>>
>>> Here's a prototype (not yet fully tested):
>>>
>>>   https://github.com/jpoimboe/kpatch/blob/TODO-ppc-fix/kpatch-build/gcc-plugins/ppc64le-plugin.c
>>>
>>
>
> [...]
>
>> I'd have to look closer, but I wonder what happens if a function has no
>> global entry point.
>>
>> (See http://mpe.github.io/posts/2016/05/23/kernel-live-patching-for-ppc64le/,
>> section "Understanding the TOC & entry points", and the example of
>> int_to_scsilun().)
>>
>
> int_to_scsilun() is entered via its global entry point, but the TOC setup
> part in the function prologue is skipped, as the function has no need to
> save/restore the TOC. The reason being, it doesn't call any global
> functions, so the r2 register is never clobbered.
>
> As per my understanding, the plug-in inserts a nop instruction after a
> branch to a function called via its local entry point, i.e. a local
> call. In the case of int_to_scsilun(), the plug-in will not modify the
> function at all, since int_to_scsilun() makes no local calls either.

Yeah there should be no issue. This happens all the time, ie. the
compiler sees it needs to make a global call, inserts the nop, but
then it turns out the callee didn't need to use the TOC anyway.

cheers

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2017-11-15 10:19 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-17  5:18 [PATCH v3] kernel/module_64.c: Add REL24 relocation support of livepatch symbols Kamalesh Babulal
2017-10-31 14:19 ` Naveen N . Rao
2017-10-31 15:30   ` Torsten Duwe
2017-10-31 16:23     ` Naveen N . Rao
2017-10-31 18:39       ` Torsten Duwe
2017-11-01  0:23         ` Balbir Singh
2017-11-07  4:54         ` Josh Poimboeuf
2017-11-07  8:34           ` Michael Ellerman
2017-11-07 11:31             ` Torsten Duwe
2017-11-07 14:52               ` Josh Poimboeuf
2017-11-09 10:55                 ` Michael Ellerman
     [not found]               ` <20171107145233.lmlg5lkfcczkxj4d__32032.512050546$1510066460$gmane$org@treble>
     [not found]                 ` <20171107145233.lmlg5lkfcczkxj4d__32032.512050546$1510066460$gmane$org@treble >
2017-11-09 11:19                   ` Naveen N. Rao
2017-11-09 15:19                     ` Josh Poimboeuf
2017-11-10  2:06                       ` Balbir Singh
2017-11-10  3:28                         ` Josh Poimboeuf
2017-11-10  9:47                           ` Michael Ellerman
2017-11-13  8:38                           ` Balbir Singh
2017-11-13 11:38                             ` Kamalesh Babulal
2017-11-15 10:19                               ` Michael Ellerman
2017-11-02  5:49   ` Kamalesh Babulal

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.