* [PATCH v3 0/4] ARM: kernel: module PLT optimizations
@ 2016-08-18 10:02 Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 1/4] ARM: kernel: merge core and init PLTs Ard Biesheuvel
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:02 UTC (permalink / raw)
  To: linux-arm-kernel

As reported by Jongsung, the O(n^2) search in the PLT allocation code may
disproportionately affect module load time for modules with a larger number
of relocations.

Since the existing routines rather naively take into account branch
instructions that are internal to the module, we can improve the situation
significantly by checking the symbol section index first, and disregarding
symbols that are defined in the same module. Also, we can reduce the
algorithmic complexity to O(n log n) by sorting the reloc section before
processing it, and disregarding zero-addend relocations in the optimization.
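
As a rough illustration of that sorted-scan approach, here is a self-contained
userspace sketch (invented types and values, loosely modeled on the series'
count_plts(); not the kernel code itself): sort by (type, symbol index), then
count an entry only when it differs from its predecessor.

  #include <stdio.h>
  #include <stdlib.h>

  /* stand-in for an ELF REL entry: relocation type and symbol index only */
  struct fake_rel {
          unsigned int type;
          unsigned int sym;
  };

  static int cmp_rel(const void *a, const void *b)
  {
          const struct fake_rel *x = a, *y = b;

          if (x->type != y->type)
                  return x->type < y->type ? -1 : 1;
          if (x->sym != y->sym)
                  return x->sym < y->sym ? -1 : 1;
          return 0;
  }

  /* after sorting, duplicates are adjacent, so one linear pass suffices */
  static unsigned int count_plts(struct fake_rel *rel, int num)
  {
          unsigned int ret = 0;
          int i;

          qsort(rel, num, sizeof(*rel), cmp_rel);
          for (i = 0; i < num; i++)
                  if (i == 0 || cmp_rel(&rel[i], &rel[i - 1]) != 0)
                          ret++;
          return ret;
  }

  int main(void)
  {
          struct fake_rel rels[] = {
                  { 10, 3 }, { 10, 7 }, { 10, 3 }, { 28, 3 }, { 10, 7 },
          };

          printf("PLT entries needed: %u\n",
                 count_plts(rels, sizeof(rels) / sizeof(rels[0])));
          return 0;
  }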

Patch #1 merges the core and init PLTs, since the latter is virtually empty
anyway.

Patch #2 implements the optimization to only take SHN_UNDEF symbols into
account.

Patch #3 sorts the reloc section, so that the duplicate check can be done by
comparing an entry with the previous one. Since REL entries (as opposed to
RELA entries) do not contain the addend, simply disregard non-zero addends
in the optimization since those are rare anyway.

Patch #4 replaces the brute force search for a matching existing entry in
the PLT generation routine with a simple check against the last entry that
was emitted. This is now sufficient since the relocation section is sorted,
and presented at relocation time in the same order.

Note that this implementation is now mostly aligned with the arm64 version
(with the exception that the arm64 implementation stashes the address of the
PLT entry in the symtab instead of comparing against the last emitted entry).

v3: 
- move the SHN_UNDEF check into the switch statement, so that we only
  dereference the symbol for relocations we care about (#2)
- compare the undecoded addend values bitwise when checking for zero addends,
  rather than fully decoding the offsets and doing an arithmetic comparison
  against '-8' (or '-4' for Thumb)
- added patch #4
   
v2:
- added patch #3

Ard Biesheuvel (4):
  ARM: kernel: merge core and init PLTs
  ARM: kernel: allocate PLT entries only for external symbols
  ARM: kernel: sort relocation sections before allocating PLTs
  arm64: kernel: avoid brute force search on PLT generation

 arch/arm/include/asm/module.h |   6 +-
 arch/arm/kernel/module-plts.c | 246 ++++++++++++--------
 arch/arm/kernel/module.lds    |   3 +-
 3 files changed, 147 insertions(+), 108 deletions(-)

-- 
2.7.4

* [PATCH v3 1/4] ARM: kernel: merge core and init PLTs
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
@ 2016-08-18 10:02 ` Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 2/4] ARM: kernel: allocate PLT entries only for external symbols Ard Biesheuvel
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:02 UTC (permalink / raw)
  To: linux-arm-kernel

The PLT code uses a separate .init.plt section to allocate PLT entries
for jump and call instructions in __init code. However, even for fairly
sizable modules like mac80211.ko, we only end up with a couple of PLT
entries in the .init section, and so we can simplify the code
significantly by emitting all PLT entries into the same section.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/include/asm/module.h |  6 +-
 arch/arm/kernel/module-plts.c | 68 +++++++-------------
 arch/arm/kernel/module.lds    |  3 +-
 3 files changed, 25 insertions(+), 52 deletions(-)

diff --git a/arch/arm/include/asm/module.h b/arch/arm/include/asm/module.h
index e358b7966c06..464748b9fd7d 100644
--- a/arch/arm/include/asm/module.h
+++ b/arch/arm/include/asm/module.h
@@ -23,10 +23,8 @@ struct mod_arch_specific {
 	struct unwind_table *unwind[ARM_SEC_MAX];
 #endif
 #ifdef CONFIG_ARM_MODULE_PLTS
-	struct elf32_shdr   *core_plt;
-	struct elf32_shdr   *init_plt;
-	int		    core_plt_count;
-	int		    init_plt_count;
+	struct elf32_shdr   *plt;
+	int		    plt_count;
 #endif
 };
 
diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
index 0c7efc3446c0..6832d1d6444e 100644
--- a/arch/arm/kernel/module-plts.c
+++ b/arch/arm/kernel/module-plts.c
@@ -30,28 +30,16 @@ struct plt_entries {
 	u32	lit[PLT_ENT_COUNT];
 };
 
-static bool in_init(const struct module *mod, u32 addr)
-{
-	return addr - (u32)mod->init_layout.base < mod->init_layout.size;
-}
-
 u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val)
 {
 	struct plt_entries *plt, *plt_end;
-	int c, *count;
-
-	if (in_init(mod, loc)) {
-		plt = (void *)mod->arch.init_plt->sh_addr;
-		plt_end = (void *)plt + mod->arch.init_plt->sh_size;
-		count = &mod->arch.init_plt_count;
-	} else {
-		plt = (void *)mod->arch.core_plt->sh_addr;
-		plt_end = (void *)plt + mod->arch.core_plt->sh_size;
-		count = &mod->arch.core_plt_count;
-	}
+	int c;
+
+	plt = (void *)mod->arch.plt->sh_addr;
+	plt_end = (void *)plt + mod->arch.plt->sh_size;
 
 	/* Look for an existing entry pointing to 'val' */
-	for (c = *count; plt < plt_end; c -= PLT_ENT_COUNT, plt++) {
+	for (c = mod->arch.plt_count; plt < plt_end; c -= PLT_ENT_COUNT, plt++) {
 		int i;
 
 		if (!c) {
@@ -60,13 +48,13 @@ u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val)
 				{ [0 ... PLT_ENT_COUNT - 1] = PLT_ENT_LDR, },
 				{ val, }
 			};
-			++*count;
+			mod->arch.plt_count++;
 			return (u32)plt->ldr;
 		}
 		for (i = 0; i < PLT_ENT_COUNT; i++) {
 			if (!plt->lit[i]) {
 				plt->lit[i] = val;
-				++*count;
+				mod->arch.plt_count++;
 			}
 			if (plt->lit[i] == val)
 				return (u32)&plt->ldr[i];
@@ -132,21 +120,19 @@ static unsigned int count_plts(Elf32_Addr base, const Elf32_Rel *rel, int num)
 int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 			      char *secstrings, struct module *mod)
 {
-	unsigned long core_plts = 0, init_plts = 0;
+	unsigned long plts = 0;
 	Elf32_Shdr *s, *sechdrs_end = sechdrs + ehdr->e_shnum;
 
 	/*
 	 * To store the PLTs, we expand the .text section for core module code
-	 * and the .init.text section for initialization code.
+	 * and for initialization code.
 	 */
 	for (s = sechdrs; s < sechdrs_end; ++s)
-		if (strcmp(".core.plt", secstrings + s->sh_name) == 0)
-			mod->arch.core_plt = s;
-		else if (strcmp(".init.plt", secstrings + s->sh_name) == 0)
-			mod->arch.init_plt = s;
+		if (strcmp(".plt", secstrings + s->sh_name) == 0)
+			mod->arch.plt = s;
 
-	if (!mod->arch.core_plt || !mod->arch.init_plt) {
-		pr_err("%s: sections missing\n", mod->name);
+	if (!mod->arch.plt) {
+		pr_err("%s: module PLT section missing\n", mod->name);
 		return -ENOEXEC;
 	}
 
@@ -158,26 +144,16 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 		if (s->sh_type != SHT_REL)
 			continue;
 
-		if (strstr(secstrings + s->sh_name, ".init"))
-			init_plts += count_plts(dstsec->sh_addr, rels, numrels);
-		else
-			core_plts += count_plts(dstsec->sh_addr, rels, numrels);
+		plts += count_plts(dstsec->sh_addr, rels, numrels);
 	}
 
-	mod->arch.core_plt->sh_type = SHT_NOBITS;
-	mod->arch.core_plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
-	mod->arch.core_plt->sh_addralign = L1_CACHE_BYTES;
-	mod->arch.core_plt->sh_size = round_up(core_plts * PLT_ENT_SIZE,
-					       sizeof(struct plt_entries));
-	mod->arch.core_plt_count = 0;
-
-	mod->arch.init_plt->sh_type = SHT_NOBITS;
-	mod->arch.init_plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
-	mod->arch.init_plt->sh_addralign = L1_CACHE_BYTES;
-	mod->arch.init_plt->sh_size = round_up(init_plts * PLT_ENT_SIZE,
-					       sizeof(struct plt_entries));
-	mod->arch.init_plt_count = 0;
-	pr_debug("%s: core.plt=%x, init.plt=%x\n", __func__,
-		 mod->arch.core_plt->sh_size, mod->arch.init_plt->sh_size);
+	mod->arch.plt->sh_type = SHT_NOBITS;
+	mod->arch.plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
+	mod->arch.plt->sh_addralign = L1_CACHE_BYTES;
+	mod->arch.plt->sh_size = round_up(plts * PLT_ENT_SIZE,
+					  sizeof(struct plt_entries));
+	mod->arch.plt_count = 0;
+
+	pr_debug("%s: plt=%x\n", __func__, mod->arch.plt->sh_size);
 	return 0;
 }
diff --git a/arch/arm/kernel/module.lds b/arch/arm/kernel/module.lds
index 3682fa107918..05881e2b414c 100644
--- a/arch/arm/kernel/module.lds
+++ b/arch/arm/kernel/module.lds
@@ -1,4 +1,3 @@
 SECTIONS {
-        .core.plt : { BYTE(0) }
-        .init.plt : { BYTE(0) }
+	.plt : { BYTE(0) }
 }
-- 
2.7.4

* [PATCH v3 2/4] ARM: kernel: allocate PLT entries only for external symbols
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 1/4] ARM: kernel: merge core and init PLTs Ard Biesheuvel
@ 2016-08-18 10:02 ` Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 3/4] ARM: kernel: sort relocation sections before allocating PLTs Ard Biesheuvel
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:02 UTC (permalink / raw)
  To: linux-arm-kernel

When CONFIG_ARM_MODULE_PLTS is enabled, jump and call instructions in
modules no longer need to be within 16 MB (8 MB for Thumb2) of their
targets. If they are further away, a PLT entry will be generated on the
fly for each of them, which extends the range to the entire 32-bit
address space.

However, since these PLT entries will become the branch targets of the
original jump and call instructions, the PLT itself needs to be in
range, or we end up in the same situation we started in. Since the PLT
is in a separate section, this essentially means that all jumps and calls
inside the same module must be resolvable without PLT entries.

The PLT allocation code executes before the module itself is loaded in
its final location, and so it has to use a worst-case estimate for
which jumps and calls will require an entry in the PLT at relocation
time. As an optimization, this code deduplicates entries pointing to
the same symbol, using an O(n^2) algorithm. However, it does not take
the above into account, i.e., that PLT entries will only be needed for
jump and call relocations against symbols that are not defined in the
module.

So disregard relocations against symbols that are defined in the module
itself.

As an additional minor optimization, ignore input sections that lack
the SHF_EXECINSTR flag. Since jump and call relocations operate on
executable instructions only, there is no need to look in sections that
do not contain executable code.
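
For illustration, the two checks described above boil down to something like
the following sketch, written against the standard userspace <elf.h> types
(the helper name is invented here, and error handling is omitted):

  #include <elf.h>
  #include <stdbool.h>

  /* sketch: may this relocation require a PLT entry at all? */
  static bool may_need_plt(const Elf32_Shdr *dstsec, const Elf32_Sym *syms,
                           const Elf32_Rel *rel)
  {
          const Elf32_Sym *s = syms + ELF32_R_SYM(rel->r_info);

          /* jump/call relocations only occur in executable sections */
          if (!(dstsec->sh_flags & SHF_EXECINSTR))
                  return false;

          /* symbols defined in the module itself are always in range */
          return s->st_shndx == SHN_UNDEF;
  }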

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/kernel/module-plts.c | 55 +++++++++++++++-----
 1 file changed, 41 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
index 6832d1d6444e..59255d8c9510 100644
--- a/arch/arm/kernel/module-plts.c
+++ b/arch/arm/kernel/module-plts.c
@@ -88,32 +88,47 @@ static int duplicate_rel(Elf32_Addr base, const Elf32_Rel *rel, int num,
 }
 
 /* Count how many PLT entries we may need */
-static unsigned int count_plts(Elf32_Addr base, const Elf32_Rel *rel, int num)
+static unsigned int count_plts(const Elf32_Sym *syms, Elf32_Addr base,
+			       const Elf32_Rel *rel, int num)
 {
 	unsigned int ret = 0;
+	const Elf32_Sym *s;
+	u32 mask;
 	int i;
 
+	if (IS_ENABLED(CONFIG_THUMB2_KERNEL))
+		mask = __opcode_to_mem_thumb32(0x07ff2fff);
+	else
+		mask = __opcode_to_mem_arm(0x00ffffff);
+
 	/*
 	 * Sure, this is order(n^2), but it's usually short, and not
 	 * time critical
 	 */
-	for (i = 0; i < num; i++)
+	for (i = 0; i < num; i++) {
 		switch (ELF32_R_TYPE(rel[i].r_info)) {
+		case R_ARM_THM_CALL:
+		case R_ARM_THM_JUMP24:
+			BUG_ON(!IS_ENABLED(CONFIG_THUMB2_KERNEL));
+
 		case R_ARM_CALL:
 		case R_ARM_PC24:
 		case R_ARM_JUMP24:
-			if (!duplicate_rel(base, rel, i,
-					   __opcode_to_mem_arm(0x00ffffff)))
-				ret++;
-			break;
-#ifdef CONFIG_THUMB2_KERNEL
-		case R_ARM_THM_CALL:
-		case R_ARM_THM_JUMP24:
-			if (!duplicate_rel(base, rel, i,
-					   __opcode_to_mem_thumb32(0x07ff2fff)))
+			/*
+			 * We only have to consider branch targets that resolve
+			 * to undefined symbols. This is not simply a heuristic,
+			 * it is a fundamental limitation, since the PLT itself
+			 * is part of the module, and needs to be within range
+			 * as well, so modules can never grow beyond that limit.
+			 */
+			s = syms + ELF32_R_SYM(rel[i].r_info);
+			if (s->st_shndx != SHN_UNDEF)
+				break;
+
+			if (!duplicate_rel(base, rel, i, mask))
 				ret++;
-#endif
 		}
+	}
 	return ret;
 }
 
@@ -122,19 +137,27 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 {
 	unsigned long plts = 0;
 	Elf32_Shdr *s, *sechdrs_end = sechdrs + ehdr->e_shnum;
+	Elf32_Sym *syms = NULL;
 
 	/*
 	 * To store the PLTs, we expand the .text section for core module code
 	 * and for initialization code.
 	 */
-	for (s = sechdrs; s < sechdrs_end; ++s)
+	for (s = sechdrs; s < sechdrs_end; ++s) {
 		if (strcmp(".plt", secstrings + s->sh_name) == 0)
 			mod->arch.plt = s;
+		else if (s->sh_type == SHT_SYMTAB)
+			syms = (Elf32_Sym *)s->sh_addr;
+	}
 
 	if (!mod->arch.plt) {
 		pr_err("%s: module PLT section missing\n", mod->name);
 		return -ENOEXEC;
 	}
+	if (!syms) {
+		pr_err("%s: module symtab section missing\n", mod->name);
+		return -ENOEXEC;
+	}
 
 	for (s = sechdrs + 1; s < sechdrs_end; ++s) {
 		const Elf32_Rel *rels = (void *)ehdr + s->sh_offset;
@@ -144,7 +167,11 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 		if (s->sh_type != SHT_REL)
 			continue;
 
-		plts += count_plts(dstsec->sh_addr, rels, numrels);
+		/* ignore relocations that operate on non-exec sections */
+		if (!(dstsec->sh_flags & SHF_EXECINSTR))
+			continue;
+
+		plts += count_plts(syms, dstsec->sh_addr, rels, numrels);
 	}
 
 	mod->arch.plt->sh_type = SHT_NOBITS;
-- 
2.7.4

* [PATCH v3 3/4] ARM: kernel: sort relocation sections before allocating PLTs
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 1/4] ARM: kernel: merge core and init PLTs Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 2/4] ARM: kernel: allocate PLT entries only for external symbols Ard Biesheuvel
@ 2016-08-18 10:02 ` Ard Biesheuvel
  2016-08-18 10:02 ` [PATCH v3 4/4] arm64: kernel: avoid brute force search on PLT generation Ard Biesheuvel
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:02 UTC (permalink / raw)
  To: linux-arm-kernel

The PLT allocation routines try to establish an upper bound on the
number of PLT entries that will be required at relocation time, and
optimize this by disregarding duplicates (i.e., PLT entries that will
end up pointing to the same function). This is currently an O(n^2)
algorithm, but we can greatly simplify this by
- sorting the relocation section so that relocations that can use the
  same PLT entry will be listed adjacently,
- disregarding jump/call relocations with addends; these are highly unusual
  for relocations against SHN_UNDEF symbols, and so we can simply allocate
  a PLT entry for each one we encounter, without trying to optimize away
  duplicates (the zero-addend check is sketched below).
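
To make the zero-addend test concrete: in an ARM B/BL instruction the addend
is stored as a 24-bit signed word offset that already includes the -8 PC bias,
so a zero addend is encoded as -2 words, i.e. 0xfffffe. A minimal sketch of
that check follows (ARM encoding only, invented function name; the patch below
additionally handles the Thumb2 encoding and uses __mem_to_opcode_arm() to
deal with instruction byte order):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Sketch, ARM encoding only: a B/BL with a zero addend carries a 24-bit
   * immediate of -2 (words), i.e. 0xfffffe, because the addend stored in
   * the place already includes the -8 PC bias.
   */
  static bool is_zero_addend_arm(uint32_t insn)
  {
          return (insn & 0x00ffffff) == 0x00fffffe;
  }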

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/kernel/module-plts.c | 93 ++++++++++++++------
 1 file changed, 64 insertions(+), 29 deletions(-)

diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
index 59255d8c9510..5b000ea64fa4 100644
--- a/arch/arm/kernel/module-plts.c
+++ b/arch/arm/kernel/module-plts.c
@@ -9,6 +9,7 @@
 #include <linux/elf.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/sort.h>
 
 #include <asm/cache.h>
 #include <asm/opcodes.h>
@@ -63,28 +64,54 @@ u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val)
 	BUG();
 }
 
-static int duplicate_rel(Elf32_Addr base, const Elf32_Rel *rel, int num,
-			   u32 mask)
+#define cmp_3way(a,b)	((a) < (b) ? -1 : (a) > (b))
+
+static int cmp_rel(const void *a, const void *b)
 {
-	u32 *loc1, *loc2;
+	const Elf32_Rel *x = a, *y = b;
 	int i;
 
-	for (i = 0; i < num; i++) {
-		if (rel[i].r_info != rel[num].r_info)
-			continue;
+	/* sort by type and symbol index */
+	i = cmp_3way(ELF32_R_TYPE(x->r_info), ELF32_R_TYPE(y->r_info));
+	if (i == 0)
+		i = cmp_3way(ELF32_R_SYM(x->r_info), ELF32_R_SYM(y->r_info));
+	return i;
+}
 
-		/*
-		 * Identical relocation types against identical symbols can
-		 * still result in different PLT entries if the addend in the
-		 * place is different. So resolve the target of the relocation
-		 * to compare the values.
-		 */
-		loc1 = (u32 *)(base + rel[i].r_offset);
-		loc2 = (u32 *)(base + rel[num].r_offset);
-		if (((*loc1 ^ *loc2) & mask) == 0)
-			return 1;
+static bool is_zero_addend_relocation(u32 *tval)
+{
+	/*
+	 * Do a bitwise compare on the raw addend rather than fully decoding
+	 * the offset and doing an arithmetic comparison.
+	 * Note that a zero-addend jump/call relocation is encoded taking the
+	 * PC bias into account, i.e., -8 for ARM and -4 for Thumb2.
+	 */
+	if (IS_ENABLED(CONFIG_THUMB2_KERNEL)) {
+		u16 upper, lower;
+
+		upper = __mem_to_opcode_thumb16(((u16 *)tval)[0]);
+		lower = __mem_to_opcode_thumb16(((u16 *)tval)[1]);
+
+		return (upper & 0x7ff) == 0x7ff && (lower & 0x2fff) == 0x2ffe;
 	}
-	return 0;
+	return (__mem_to_opcode_arm(*tval) & 0xffffff) == 0xfffffe;
+}
+
+static bool duplicate_rel(Elf32_Addr base, const Elf32_Rel *rel, int num)
+{
+	const Elf32_Rel *prev;
+
+	/*
+	 * Entries are sorted by type and symbol index. That means that,
+	 * if a duplicate entry exists, it must be in the preceding
+	 * slot.
+	 */
+	if (!num)
+		return false;
+
+	prev = rel + num - 1;
+	return cmp_rel(rel + num, prev) == 0 &&
+	       is_zero_addend_relocation((u32 *)(base + prev->r_offset));
 }
 
 /* Count how many PLT entries we may need */
@@ -93,20 +120,12 @@ static unsigned int count_plts(const Elf32_Sym *syms, Elf32_Addr base,
 {
 	unsigned int ret = 0;
 	const Elf32_Sym *s;
-	u32 mask;
 	int i;
 
-	if (IS_ENABLED(CONFIG_THUMB2_KERNEL))
-		mask = __opcode_to_mem_thumb32(0x07ff2fff);
-	else
-		mask = __opcode_to_mem_arm(0x00ffffff);
-
-	/*
-	 * Sure, this is order(n^2), but it's usually short, and not
-	 * time critical
-	 */
 	for (i = 0; i < num; i++) {
 		switch (ELF32_R_TYPE(rel[i].r_info)) {
+			u32 *tval;
+
 		case R_ARM_THM_CALL:
 		case R_ARM_THM_JUMP24:
 			BUG_ON(!IS_ENABLED(CONFIG_THUMB2_KERNEL));
@@ -125,7 +144,20 @@ static unsigned int count_plts(const Elf32_Sym *syms, Elf32_Addr base,
 			if (s->st_shndx != SHN_UNDEF)
 				break;
 
-			if (!duplicate_rel(base, rel, i, mask))
+			/*
+			 * Jump relocations with non-zero addends against
+			 * undefined symbols are supported by the ELF spec, but
+			 * do not occur in practice (e.g., 'jump n bytes past
+			 * the entry point of undefined function symbol f').
+			 * So we need to support them, but there is no need to
+			 * take them into consideration when trying to optimize
+			 * this code. So let's only check for duplicates when
+			 * the addend is zero.
+			 */
+			tval = (u32 *)(base + rel[i].r_offset);
+
+			if (!is_zero_addend_relocation(tval) ||
+			    !duplicate_rel(base, rel, i))
 				ret++;
 		}
 	}
@@ -160,7 +192,7 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 	}
 
 	for (s = sechdrs + 1; s < sechdrs_end; ++s) {
-		const Elf32_Rel *rels = (void *)ehdr + s->sh_offset;
+		Elf32_Rel *rels = (void *)ehdr + s->sh_offset;
 		int numrels = s->sh_size / sizeof(Elf32_Rel);
 		Elf32_Shdr *dstsec = sechdrs + s->sh_info;
 
@@ -171,6 +203,9 @@ int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
 		if (!(dstsec->sh_flags & SHF_EXECINSTR))
 			continue;
 
+		/* sort by type and symbol index */
+		sort(rels, numrels, sizeof(Elf32_Rel), cmp_rel, NULL);
+
 		plts += count_plts(syms, dstsec->sh_addr, rels, numrels);
 	}
 
-- 
2.7.4

* [PATCH v3 4/4] arm64: kernel: avoid brute force search on PLT generation
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2016-08-18 10:02 ` [PATCH v3 3/4] ARM: kernel: sort relocation sections before allocating PLTs Ard Biesheuvel
@ 2016-08-18 10:02 ` Ard Biesheuvel
  2016-08-18 10:04 ` [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
  2016-08-23  6:06 ` Jongsung Kim
  5 siblings, 0 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:02 UTC (permalink / raw)
  To: linux-arm-kernel

Given that we now sort the relocation sections in a way that guarantees
that entries that can share a single PLT entry end up adjacently, there
is no longer a need to go over the entire list to look for an existing
entry that matches our jump target. If such a match exists, it was the
last one to be emitted, so we can simply check the preceding slot.

Note that this will still work correctly in the [theoretical] presence of
call/jump relocations against SHN_UNDEF symbols with non-zero addends,
although not optimally. Since the relocations are presented in the same
order in which we checked them for duplicates, any duplicates that we failed
to spot the first time around will be accounted for in the PLT allocation,
so there is guaranteed to be sufficient space for them when actually
emitting the PLT.

For instance, the following sequence of relocations:

  000004d8  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  000004fc  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  0000050e  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  00000520  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  00000532  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  00000544  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  00000556  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  00000568  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  0000057a  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  0000058c  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  0000059e  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  000005b0  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  000005c2  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null
  000005d4  00058b0a R_ARM_THM_CALL    00000000   warn_slowpath_null

may result in several PLT entries being allocated, and also emitted, if
any of the entries in the middle refers to a place that contains a non-zero
addend (i.e., one for all the preceding zero-addend relocations, one for
all the following zero-addend relocations, and one for the non-zero-addend
relocation itself).
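
For illustration, the relocation-time lookup then reduces to something like
the following stand-alone sketch, with the grouped ldr/literal layout
flattened into a plain array (all names here are invented; the real code in
the diff below keeps the struct plt_entries grouping and BUG()s when the
reserved space is exceeded):

  #include <stdint.h>

  #define MAX_PLTS        64

  static uint32_t plt_lit[MAX_PLTS];      /* literal (target address) per entry */
  static int plt_count;

  /*
   * Sketch: relocations are processed in the same sorted order used when
   * counting, so a reusable entry can only be the one emitted last.
   */
  static int get_plt_index(uint32_t val)
  {
          if (plt_count > 0 && plt_lit[plt_count - 1] == val)
                  return plt_count - 1;

          if (plt_count >= MAX_PLTS)
                  return -1;              /* reserved space exhausted */

          plt_lit[plt_count] = val;
          return plt_count++;
  }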

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/kernel/module-plts.c | 60 +++++++++++---------
 1 file changed, 32 insertions(+), 28 deletions(-)

diff --git a/arch/arm/kernel/module-plts.c b/arch/arm/kernel/module-plts.c
index 5b000ea64fa4..a910f1db0c14 100644
--- a/arch/arm/kernel/module-plts.c
+++ b/arch/arm/kernel/module-plts.c
@@ -33,35 +33,39 @@ struct plt_entries {
 
 u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val)
 {
-	struct plt_entries *plt, *plt_end;
-	int c;
-
-	plt = (void *)mod->arch.plt->sh_addr;
-	plt_end = (void *)plt + mod->arch.plt->sh_size;
-
-	/* Look for an existing entry pointing to 'val' */
-	for (c = mod->arch.plt_count; plt < plt_end; c -= PLT_ENT_COUNT, plt++) {
-		int i;
-
-		if (!c) {
-			/* Populate a new set of entries */
-			*plt = (struct plt_entries){
-				{ [0 ... PLT_ENT_COUNT - 1] = PLT_ENT_LDR, },
-				{ val, }
-			};
-			mod->arch.plt_count++;
-			return (u32)plt->ldr;
-		}
-		for (i = 0; i < PLT_ENT_COUNT; i++) {
-			if (!plt->lit[i]) {
-				plt->lit[i] = val;
-				mod->arch.plt_count++;
-			}
-			if (plt->lit[i] == val)
-				return (u32)&plt->ldr[i];
-		}
+	struct plt_entries *plt = (struct plt_entries *)mod->arch.plt->sh_addr;
+	int idx = 0;
+
+	/*
+	 * Look for an existing entry pointing to 'val'. Given that the
+	 * relocations are sorted, this will be the last entry we allocated.
+	 * (if one exists).
+	 */
+	if (mod->arch.plt_count > 0) {
+		plt += (mod->arch.plt_count - 1) / PLT_ENT_COUNT;
+		idx = (mod->arch.plt_count - 1) % PLT_ENT_COUNT;
+
+		if (plt->lit[idx] == val)
+			return (u32)&plt->ldr[idx];
+
+		idx = (idx + 1) % PLT_ENT_COUNT;
+		if (!idx)
+			plt++;
 	}
-	BUG();
+
+	mod->arch.plt_count++;
+	BUG_ON(mod->arch.plt_count * PLT_ENT_SIZE > mod->arch.plt->sh_size);
+
+	if (!idx)
+		/* Populate a new set of entries */
+		*plt = (struct plt_entries){
+			{ [0 ... PLT_ENT_COUNT - 1] = PLT_ENT_LDR, },
+			{ val, }
+		};
+	else
+		plt->lit[idx] = val;
+
+	return (u32)&plt->ldr[idx];
 }
 
 #define cmp_3way(a,b)	((a) < (b) ? -1 : (a) > (b))
-- 
2.7.4

* [PATCH v3 0/4] ARM: kernel: module PLT optimizations
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2016-08-18 10:02 ` [PATCH v3 4/4] arm64: kernel: avoid brute force search on PLT generation Ard Biesheuvel
@ 2016-08-18 10:04 ` Ard Biesheuvel
  2016-08-23  6:06 ` Jongsung Kim
  5 siblings, 0 replies; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-18 10:04 UTC (permalink / raw)
  To: linux-arm-kernel

On 18 August 2016 at 12:02, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> As reported by Jongsung, the O(n^2) search in the PLT allocation code may
> disproportionately affect module load time for modules with a larger number
> of relocations.
>
> Since the existing routines rather naively take into account branch
> instructions that are internal to the module, we can improve the situation
> significantly by checking the symbol section index first, and disregarding
> symbols that are defined in the same module. Also, we can reduce the
> algorithmic complexity to O(n log n) by sorting the reloc section before
> processing it, and disregarding zero-addend relocations in the optimization.
>
> Patch #1 merges the core and init PLTs, since the latter is virtually empty
> anyway.
>
> Patch #2 implements the optimization to only take SHN_UNDEF symbols into
> account.
>
> Patch #3 sorts the reloc section, so that the duplicate check can be done by
> comparing an entry with the previous one. Since REL entries (as opposed to
> RELA entries) do not contain the addend, simply disregard non-zero addends
> in the optimization since those are rare anyway.
>
> Patch #4 replaces the brute force search for a matching existing entry in
> the PLT generation routine with a simple check against the last entry that
> was emitted. This is now sufficient since the relocation section is sorted,
> and presented at relocation time in the same order.
>
> Note that this implementation is now mostly aligned with the arm64 version
> (with the exception that the arm64 implementation stashes the address of the
> PLT entry in the symtab instead of comparing against the last emitted entry).
>
> v3:
> - move the SHN_UNDEF check into the switch statement, so that we only
>   dereference the symbol for relocations we care about (#2)
> - compare the undecoded addend values bitwise when checking for zero addends,
>   rather than fully decoding the offsets and doing an arithmetic comparison
>   against '-8' (or '-4' for Thumb)
> - added patch #4
>
> v2:
> - added patch #3
>
> Ard Biesheuvel (4):
>   ARM: kernel: merge core and init PLTs
>   ARM: kernel: allocate PLT entries only for external symbols
>   ARM: kernel: sort relocation sections before allocating PLTs
>   arm64: kernel: avoid brute force search on PLT generation
>

^^^ $SUBJECT fail: this should be ARM, of course

-- 
Ard.

* [PATCH v3 0/4] ARM: kernel: module PLT optimizations
  2016-08-18 10:02 [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2016-08-18 10:04 ` [PATCH v3 0/4] ARM: kernel: module PLT optimizations Ard Biesheuvel
@ 2016-08-23  6:06 ` Jongsung Kim
  2016-08-23 11:30   ` Ard Biesheuvel
  5 siblings, 1 reply; 9+ messages in thread
From: Jongsung Kim @ 2016-08-23  6:06 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Ard,

I did some rough performance tests for your patchset with my large ko.

On 18 August 2016 19:02, Ard Biesheuvel wrote:
> As reported by Jongsung, the O(n^2) search in the PLT allocation code may
> disproportionately affect module load time for modules with a larger number
> of relocations.
>
> Since the existing routines rather naively take into account branch
> instructions that are internal to the module, we can improve the situation
> significantly by checking the symbol section index first, and disregarding
> symbols that are defined in the same module. Also, we can reduce the
> algorithmic complexity to O(n log n) by sorting the reloc section before
> processing it, and disregarding zero-addend relocations in the optimization.
>
> Patch #1 merges the core and init PLTs, since the latter is virtually empty
> anyway.
>

Patch #1 didn't make a noticeable difference in performance. Found 4,860 PLTs
from 262,573 RELs in ~10secs.

> Patch #2 implements the optimization to only take SHN_UNDEF symbols into
> account.
>

After patch #2 applied, found 249 PLTs from 262,573 RELs in ~680msecs.

> Patch #3 sorts the reloc section, so that the duplicate check can be done by
> comparing an entry with the previous one. Since REL entries (as opposed to
> RELA entries) do not contain the addend, simply disregard non-zero addends
> in the optimization since those are rare anyway.
>

After patch #3 applied, found 249 PLTs from 262,573 RELs in ~6msecs.

> Patch #4 replaces the brute force search for a matching existing entry in
> the PLT generation routine with a simple check against the last entry that
> was emitted. This is now sufficient since the relocation section is sorted,
> and presented at relocation time in the same order.
>

Finally with patch #4 applied, found 249 PLTs from 262,573 RELs in < 6msecs.

> Note that this implementation is now mostly aligned with the arm64 version
> (with the exception that the arm64 implementation stashes the address of the
> PLT entry in the symtab instead of comparing against the last emitted entry).
>

Time measured around calling count_plts() with preemption disabled. My O(n)
implementation over patch #1 and #2 took over 10msecs to handle the same ko.
I'd better use your patchset. :-)

Thank you for your work!

JS

* [PATCH v3 0/4] ARM: kernel: module PLT optimizations
  2016-08-23  6:06 ` Jongsung Kim
@ 2016-08-23 11:30   ` Ard Biesheuvel
  2016-08-24 10:06     ` Jongsung Kim
  0 siblings, 1 reply; 9+ messages in thread
From: Ard Biesheuvel @ 2016-08-23 11:30 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 August 2016 at 08:06, Jongsung Kim <neidhard.kim@lge.com> wrote:
> Hi Ard,
>
> I did some rough performance tests for your patchset with my large ko.
>

Thanks! Could you please confirm that the module still works as expected?

> On 18 August 2016 19:02, Ard Biesheuvel wrote:
>> As reported by Jongsung, the O(n^2) search in the PLT allocation code may
>> disproportionately affect module load time for modules with a larger number
>> of relocations.
>>
>> Since the existing routines rather naively take into account branch
>> instructions that are internal to the module, we can improve the situation
>> significantly by checking the symbol section index first, and disregarding
>> symbols that are defined in the same module. Also, we can reduce the
>> algorithmic complexity to O(n log n) by sorting the reloc section before
>> processing it, and disregarding zero-addend relocations in the optimization.
>>
>> Patch #1 merges the core and init PLTs, since the latter is virtually empty
>> anyway.
>>
>
> Patch #1 didn't make a noticeable difference in performance. Found 4,860 PLTs
> from 262,573 RELs in ~10secs.
>

OK, that is expected. This is not a performance optimization but a
[minor] size optimization (and a simplification).

>> Patch #2 implements the optimization to only take SHN_UNDEF symbols into
>> account.
>>
>
> After patch #2 applied, found 249 PLTs from 262,573 RELs in ~680msecs.
>

Nice

>> Patch #3 sorts the reloc section, so that the duplicate check can be done by
>> comparing an entry with the previous one. Since REL entries (as opposed to
>> RELA entries) do not contain the addend, simply disregard non-zero addends
>> in the optimization since those are rare anyway.
>>
>
> After patch #3 applied, found 249 PLTs from 262,573 RELs in ~6msecs.
>

Even better!

>> Patch #4 replaces the brute force search for a matching existing entry in
>> the PLT generation routine with a simple check against the last entry that
>> was emitted. This is now sufficient since the relocation section is sorted,
>> and presented at relocation time in the same order.
>>
>
> Finally with patch #4 applied, found 249 PLTs from 262,573 RELs in < 6msecs.
>

This is also expected, given that you only measured the time spent in
count_plts(). Patch #4 optimizes get_module_plt(), which is called
when the relocations are actually processed. I don't expect a huge
speedup, but it should be an improvement nonetheless.

>> Note that this implementation is now mostly aligned with the arm64 version
>> (with the exception that the arm64 implementation stashes the address of the
>> PLT entry in the symtab instead of comparing against the last emitted entry).
>>
>
> Time measured around calling count_plts() with preemption disabled. My O(n)
> implementation over patch #1 and #2 took over 10msecs to handle the same ko.
> I'd better use your patchset. :-)
>
> Thank you for your work!
>

Likewise! May I take this as a 'Tested-by: ' ?

Regards,
Ard.

* [PATCH v3 0/4] ARM: kernel: module PLT optimizations
  2016-08-23 11:30   ` Ard Biesheuvel
@ 2016-08-24 10:06     ` Jongsung Kim
  0 siblings, 0 replies; 9+ messages in thread
From: Jongsung Kim @ 2016-08-24 10:06 UTC (permalink / raw)
  To: linux-arm-kernel

On 23 August 2016 20:30, Ard Biesheuvel wrote:
> On 23 August 2016 at 08:06, Jongsung Kim <neidhard.kim@lge.com> wrote:
>
> Thanks! Could you please confirm that the module still works as expected?
>

The platform used in the test uses 10+ kernel modules, and the entire system
appears to be working well with your patchset.

>
> Likewise! May I take this as a 'Tested-by: ' ?
>

Sure, you may.
