* [PATCH 0/9] MIPS Relocatable kernel & KASLR
@ 2015-12-03 10:08 ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

This series adds the ability for the MIPS kernel to relocate itself at
runtime, optionally to an address determined at random on each boot.
The series is based on v4.3 and has been tested on the Malta platform.

Here is a description of how relocation is achieved:
* The kernel is compiled and statically linked as normal (no
  position-independent code).
* The linker flag --emit-relocs is added to the linker command line,
  causing ld to include relocation sections in the output ELF.
* A tool derived from the x86 relocs tool is used to parse the
  relocation sections and create a binary table of relocations. Each
  entry in the table is 32 bits, comprising a 24-bit offset (in words)
  from _text and an 8-bit relocation type (a decoding sketch follows
  this list).
* The table is inserted into the vmlinux ELF, into space reserved for
  it in the linker script. Inserting the table into vmlinux means that
  all boot targets automatically include the relocation code and
  information.
* At boot, the kernel memcpy()s itself elsewhere in memory, then goes
  through the table performing each relocation on the new image.
* If all goes well, control is passed to the entry point of the new
  kernel.
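
For illustration only (not part of this series): a minimal sketch in C
of how one 32-bit table entry could be decoded, mirroring the packing
done by add_reloc() in the relocs tool below (type in the top 8 bits,
word offset from _text in the low 24 bits). The struct and helper name
here are hypothetical.

	#include <stdint.h>

	struct decoded_reloc {
		unsigned int type;	/* e.g. R_MIPS_32, R_MIPS_26, R_MIPS_HI16 */
		uint32_t byte_off;	/* byte offset of the patched word from _text */
	};

	static struct decoded_reloc decode_reloc(uint32_t entry)
	{
		struct decoded_reloc r;

		r.type = entry >> 24;			/* top 8 bits: relocation type */
		r.byte_off = (entry & 0x00FFFFFF) << 2;	/* low 24 bits: offset in words */
		return r;
	}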

Restrictions:
* The new kernel is not allowed to overlap the old kernel, so that the
  original kernel can still be booted if relocation fails.
* Relocation is supported only in multiples of 64 KiB. This eliminates
  the need to handle R_MIPS_LO16 relocations, as the bottom 16 bits
  remain the same at the relocated address (see the sketch after this
  list).
* In 64-bit kernels, relocation is supported only within the same 4 GiB
  memory segment as the kernel link address (CONFIG_PHYSICAL_START).
  This eliminates the need to handle R_MIPS_HIGHEST and R_MIPS_HIGHER
  relocations, as the top 32 bits remain the same at the relocated
  address.
* Relocation is currently supported on release 2 (R2) of the MIPS
  architecture, both 32-bit and 64-bit.
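
As a rough illustration of the 64 KiB restriction (again not part of
this series; the helper names and addresses are made up for the
example): an absolute address on MIPS is materialised as a %hi()/%lo()
pair (lui + addiu), so adding an offset that is a multiple of 0x10000
changes only the %hi() half and R_MIPS_LO16 sites need no patching.

	#include <assert.h>
	#include <stdint.h>

	static uint16_t mips_lo16(uint32_t addr)
	{
		return addr & 0xFFFF;
	}

	static uint16_t mips_hi16(uint32_t addr)
	{
		/* +0x8000 accounts for %lo() being sign-extended by addiu */
		return (addr + 0x8000) >> 16;
	}

	int main(void)
	{
		uint32_t link_addr = 0x80100000;	/* hypothetical link address */
		uint32_t offset = 0x00200000;		/* a multiple of 64 KiB */
		uint32_t new_addr = link_addr + offset;

		/* The low half is untouched; only the high half moves. */
		assert(mips_lo16(new_addr) == mips_lo16(link_addr));
		assert(mips_hi16(new_addr) == mips_hi16(link_addr) + (offset >> 16));
		return 0;
	}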

Matt Redfearn (9):
  MIPS: tools: Add relocs tool
  MIPS: tools: Build relocs tool
  MIPS: Reserve space for relocation table
  MIPS: Generate relocation table when CONFIG_RELOCATABLE
  MIPS: Kernel: Add relocate.c
  MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
  MIPS: bootmem: When relocatable, free memory below kernel
  MIPS: Add CONFIG_RELOCATABLE Kconfig option
  MIPS: Kernel: Implement kASLR using CONFIG_RELOCATABLE

 arch/mips/Kconfig                  |  53 +++
 arch/mips/Makefile                 |  19 ++
 arch/mips/boot/tools/Makefile      |   8 +
 arch/mips/boot/tools/relocs.c      | 675 +++++++++++++++++++++++++++++++++++++
 arch/mips/boot/tools/relocs.h      |  34 ++
 arch/mips/boot/tools/relocs_32.c   |  17 +
 arch/mips/boot/tools/relocs_64.c   |  27 ++
 arch/mips/boot/tools/relocs_main.c |  84 +++++
 arch/mips/kernel/Makefile          |   2 +
 arch/mips/kernel/head.S            |  20 ++
 arch/mips/kernel/relocate.c        | 296 ++++++++++++++++
 arch/mips/kernel/setup.c           |  13 +
 arch/mips/kernel/vmlinux.lds.S     |  20 ++
 13 files changed, 1268 insertions(+)
 create mode 100644 arch/mips/boot/tools/Makefile
 create mode 100644 arch/mips/boot/tools/relocs.c
 create mode 100644 arch/mips/boot/tools/relocs.h
 create mode 100644 arch/mips/boot/tools/relocs_32.c
 create mode 100644 arch/mips/boot/tools/relocs_64.c
 create mode 100644 arch/mips/boot/tools/relocs_main.c
 create mode 100644 arch/mips/kernel/relocate.c

-- 
2.1.4

* [PATCH 1/9] MIPS: tools: Add relocs tool
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

This tool is based on the x86/boot/tools/relocs tool.

It parses the relocations present in the vmlinux ELF file, building a
table of the relocations that are necessary to run the kernel from an
address other than its link address. This table is inserted into the
vmlinux ELF, in the .data.reloc section. The table is subsequently used
by the code in arch/mips/kernel/relocate.c (added later in this series)
to relocate the kernel.

By default, the tool also marks all relocation sections as zero length.
This is necessary because objcopy is currently unable to copy
relocations between 64-bit and 32-bit ELF files, as is done when
building a 64-bit kernel.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/boot/tools/Makefile      |   8 +
 arch/mips/boot/tools/relocs.c      | 675 +++++++++++++++++++++++++++++++++++++
 arch/mips/boot/tools/relocs.h      |  34 ++
 arch/mips/boot/tools/relocs_32.c   |  17 +
 arch/mips/boot/tools/relocs_64.c   |  27 ++
 arch/mips/boot/tools/relocs_main.c |  84 +++++
 6 files changed, 845 insertions(+)
 create mode 100644 arch/mips/boot/tools/Makefile
 create mode 100644 arch/mips/boot/tools/relocs.c
 create mode 100644 arch/mips/boot/tools/relocs.h
 create mode 100644 arch/mips/boot/tools/relocs_32.c
 create mode 100644 arch/mips/boot/tools/relocs_64.c
 create mode 100644 arch/mips/boot/tools/relocs_main.c

diff --git a/arch/mips/boot/tools/Makefile b/arch/mips/boot/tools/Makefile
new file mode 100644
index 000000000000..d232a68f6c8a
--- /dev/null
+++ b/arch/mips/boot/tools/Makefile
@@ -0,0 +1,8 @@
+
+hostprogs-y	+= relocs
+relocs-objs	+= relocs_32.o
+relocs-objs	+= relocs_64.o
+relocs-objs	+= relocs_main.o
+PHONY += relocs
+relocs: $(obj)/relocs
+	@:
diff --git a/arch/mips/boot/tools/relocs.c b/arch/mips/boot/tools/relocs.c
new file mode 100644
index 000000000000..5f4e264db97a
--- /dev/null
+++ b/arch/mips/boot/tools/relocs.c
@@ -0,0 +1,675 @@
+/* This is included from relocs_32/64.c */
+
+#define ElfW(type)		_ElfW(ELF_BITS, type)
+#define _ElfW(bits, type)	__ElfW(bits, type)
+#define __ElfW(bits, type)	Elf##bits##_##type
+
+#define Elf_Addr		ElfW(Addr)
+#define Elf_Ehdr		ElfW(Ehdr)
+#define Elf_Phdr		ElfW(Phdr)
+#define Elf_Shdr		ElfW(Shdr)
+#define Elf_Sym			ElfW(Sym)
+
+static Elf_Ehdr ehdr;
+
+struct relocs {
+	uint32_t	*offset;
+	unsigned long	count;
+	unsigned long	size;
+};
+
+static struct relocs relocs;
+
+struct section {
+	Elf_Shdr       shdr;
+	struct section *link;
+	Elf_Sym        *symtab;
+	Elf_Rel        *reltab;
+	char           *strtab;
+	long           shdr_offset;
+};
+static struct section *secs;
+
+static const char * const regex_sym_kernel = {
+/* Symbols matching these regex's should never be relocated */
+	"^(__crc_)",
+};
+
+static regex_t sym_regex_c;
+
+static int regex_skip_reloc(const char *sym_name)
+{
+	return !regexec(&sym_regex_c, sym_name, 0, NULL, 0);
+}
+
+static void regex_init(void)
+{
+	char errbuf[128];
+	int err;
+
+	err = regcomp(&sym_regex_c, regex_sym_kernel,
+			REG_EXTENDED|REG_NOSUB);
+
+	if (err) {
+		regerror(err, &sym_regex_c, errbuf, sizeof(errbuf));
+		die("%s", errbuf);
+	}
+}
+
+static const char *rel_type(unsigned type)
+{
+	static const char * const type_name[] = {
+#define REL_TYPE(X)[X] = #X
+		REL_TYPE(R_MIPS_NONE),
+		REL_TYPE(R_MIPS_16),
+		REL_TYPE(R_MIPS_32),
+		REL_TYPE(R_MIPS_REL32),
+		REL_TYPE(R_MIPS_26),
+		REL_TYPE(R_MIPS_HI16),
+		REL_TYPE(R_MIPS_LO16),
+		REL_TYPE(R_MIPS_GPREL16),
+		REL_TYPE(R_MIPS_LITERAL),
+		REL_TYPE(R_MIPS_GOT16),
+		REL_TYPE(R_MIPS_PC16),
+		REL_TYPE(R_MIPS_CALL16),
+		REL_TYPE(R_MIPS_GPREL32),
+		REL_TYPE(R_MIPS_64),
+		REL_TYPE(R_MIPS_HIGHER),
+		REL_TYPE(R_MIPS_HIGHEST),
+#undef REL_TYPE
+	};
+	const char *name = "unknown type rel type name";
+
+	if (type < ARRAY_SIZE(type_name) && type_name[type])
+		name = type_name[type];
+	return name;
+}
+
+static const char *sec_name(unsigned shndx)
+{
+	const char *sec_strtab;
+	const char *name;
+
+	sec_strtab = secs[ehdr.e_shstrndx].strtab;
+	if (shndx < ehdr.e_shnum)
+		name = sec_strtab + secs[shndx].shdr.sh_name;
+	else if (shndx == SHN_ABS)
+		name = "ABSOLUTE";
+	else if (shndx == SHN_COMMON)
+		name = "COMMON";
+	else
+		name = "<noname>";
+	return name;
+}
+
+static struct section *sec_lookup(const char *secname)
+{
+	int i;
+
+	for (i = 0; i < ehdr.e_shnum; i++)
+		if (strcmp(secname, sec_name(i)) == 0)
+			return &secs[i];
+
+	return NULL;
+}
+
+static const char *sym_name(const char *sym_strtab, Elf_Sym *sym)
+{
+	const char *name;
+
+	if (sym->st_name)
+		name = sym_strtab + sym->st_name;
+	else
+		name = sec_name(sym->st_shndx);
+	return name;
+}
+
+#if BYTE_ORDER == LITTLE_ENDIAN
+#define le16_to_cpu(val) (val)
+#define le32_to_cpu(val) (val)
+#define le64_to_cpu(val) (val)
+#define be16_to_cpu(val) bswap_16(val)
+#define be32_to_cpu(val) bswap_32(val)
+#define be64_to_cpu(val) bswap_64(val)
+
+#define cpu_to_le16(val) (val)
+#define cpu_to_le32(val) (val)
+#define cpu_to_le64(val) (val)
+#define cpu_to_be16(val) bswap_16(val)
+#define cpu_to_be32(val) bswap_32(val)
+#define cpu_to_be64(val) bswap_64(val)
+#endif
+#if BYTE_ORDER == BIG_ENDIAN
+#define le16_to_cpu(val) bswap_16(val)
+#define le32_to_cpu(val) bswap_32(val)
+#define le64_to_cpu(val) bswap_64(val)
+#define be16_to_cpu(val) (val)
+#define be32_to_cpu(val) (val)
+#define be64_to_cpu(val) (val)
+
+#define cpu_to_le16(val) bswap_16(val)
+#define cpu_to_le32(val) bswap_32(val)
+#define cpu_to_le64(val) bswap_64(val)
+#define cpu_to_be16(val) (val)
+#define cpu_to_be32(val) (val)
+#define cpu_to_be64(val) (val)
+#endif
+
+static uint16_t elf16_to_cpu(uint16_t val)
+{
+	if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+		return le16_to_cpu(val);
+	else
+		return be16_to_cpu(val);
+}
+
+static uint32_t elf32_to_cpu(uint32_t val)
+{
+	if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+		return le32_to_cpu(val);
+	else
+		return be32_to_cpu(val);
+}
+
+static uint32_t cpu_to_elf32(uint32_t val)
+{
+	if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+		return cpu_to_le32(val);
+	else
+		return cpu_to_be32(val);
+}
+
+#define elf_half_to_cpu(x)	elf16_to_cpu(x)
+#define elf_word_to_cpu(x)	elf32_to_cpu(x)
+
+#if ELF_BITS == 64
+static uint64_t elf64_to_cpu(uint64_t val)
+{
+	if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+		return le64_to_cpu(val);
+	else
+		return be64_to_cpu(val);
+}
+#define elf_addr_to_cpu(x)	elf64_to_cpu(x)
+#define elf_off_to_cpu(x)	elf64_to_cpu(x)
+#define elf_xword_to_cpu(x)	elf64_to_cpu(x)
+#else
+#define elf_addr_to_cpu(x)	elf32_to_cpu(x)
+#define elf_off_to_cpu(x)	elf32_to_cpu(x)
+#define elf_xword_to_cpu(x)	elf32_to_cpu(x)
+#endif
+
+static void read_ehdr(FILE *fp)
+{
+	if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+		die("Cannot read ELF header: %s\n", strerror(errno));
+
+	if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0)
+		die("No ELF magic\n");
+
+	if (ehdr.e_ident[EI_CLASS] != ELF_CLASS)
+		die("Not a %d bit executable\n", ELF_BITS);
+
+	if ((ehdr.e_ident[EI_DATA] != ELFDATA2LSB) &&
+	    (ehdr.e_ident[EI_DATA] != ELFDATA2MSB))
+		die("Unknown ELF Endianness\n");
+
+	if (ehdr.e_ident[EI_VERSION] != EV_CURRENT)
+		die("Unknown ELF version\n");
+
+	/* Convert the fields to native endian */
+	ehdr.e_type      = elf_half_to_cpu(ehdr.e_type);
+	ehdr.e_machine   = elf_half_to_cpu(ehdr.e_machine);
+	ehdr.e_version   = elf_word_to_cpu(ehdr.e_version);
+	ehdr.e_entry     = elf_addr_to_cpu(ehdr.e_entry);
+	ehdr.e_phoff     = elf_off_to_cpu(ehdr.e_phoff);
+	ehdr.e_shoff     = elf_off_to_cpu(ehdr.e_shoff);
+	ehdr.e_flags     = elf_word_to_cpu(ehdr.e_flags);
+	ehdr.e_ehsize    = elf_half_to_cpu(ehdr.e_ehsize);
+	ehdr.e_phentsize = elf_half_to_cpu(ehdr.e_phentsize);
+	ehdr.e_phnum     = elf_half_to_cpu(ehdr.e_phnum);
+	ehdr.e_shentsize = elf_half_to_cpu(ehdr.e_shentsize);
+	ehdr.e_shnum     = elf_half_to_cpu(ehdr.e_shnum);
+	ehdr.e_shstrndx  = elf_half_to_cpu(ehdr.e_shstrndx);
+
+	if ((ehdr.e_type != ET_EXEC) && (ehdr.e_type != ET_DYN))
+		die("Unsupported ELF header type\n");
+
+	if (ehdr.e_machine != ELF_MACHINE)
+		die("Not for %s\n", ELF_MACHINE_NAME);
+
+	if (ehdr.e_version != EV_CURRENT)
+		die("Unknown ELF version\n");
+
+	if (ehdr.e_ehsize != sizeof(Elf_Ehdr))
+		die("Bad Elf header size\n");
+
+	if (ehdr.e_phentsize != sizeof(Elf_Phdr))
+		die("Bad program header entry\n");
+
+	if (ehdr.e_shentsize != sizeof(Elf_Shdr))
+		die("Bad section header entry\n");
+
+	if (ehdr.e_shstrndx >= ehdr.e_shnum)
+		die("String table index out of bounds\n");
+}
+
+static void read_shdrs(FILE *fp)
+{
+	int i;
+	Elf_Shdr shdr;
+
+	secs = calloc(ehdr.e_shnum, sizeof(struct section));
+	if (!secs)
+		die("Unable to allocate %d section headers\n", ehdr.e_shnum);
+
+	if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0)
+		die("Seek to %d failed: %s\n", ehdr.e_shoff, strerror(errno));
+
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		struct section *sec = &secs[i];
+
+		sec->shdr_offset = ftell(fp);
+		if (fread(&shdr, sizeof(shdr), 1, fp) != 1)
+			die("Cannot read ELF section headers %d/%d: %s\n",
+			    i, ehdr.e_shnum, strerror(errno));
+		sec->shdr.sh_name      = elf_word_to_cpu(shdr.sh_name);
+		sec->shdr.sh_type      = elf_word_to_cpu(shdr.sh_type);
+		sec->shdr.sh_flags     = elf_xword_to_cpu(shdr.sh_flags);
+		sec->shdr.sh_addr      = elf_addr_to_cpu(shdr.sh_addr);
+		sec->shdr.sh_offset    = elf_off_to_cpu(shdr.sh_offset);
+		sec->shdr.sh_size      = elf_xword_to_cpu(shdr.sh_size);
+		sec->shdr.sh_link      = elf_word_to_cpu(shdr.sh_link);
+		sec->shdr.sh_info      = elf_word_to_cpu(shdr.sh_info);
+		sec->shdr.sh_addralign = elf_xword_to_cpu(shdr.sh_addralign);
+		sec->shdr.sh_entsize   = elf_xword_to_cpu(shdr.sh_entsize);
+		if (sec->shdr.sh_link < ehdr.e_shnum)
+			sec->link = &secs[sec->shdr.sh_link];
+	}
+}
+
+static void read_strtabs(FILE *fp)
+{
+	int i;
+
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		struct section *sec = &secs[i];
+
+		if (sec->shdr.sh_type != SHT_STRTAB)
+			continue;
+
+		sec->strtab = malloc(sec->shdr.sh_size);
+		if (!sec->strtab)
+			die("malloc of %d bytes for strtab failed\n",
+			    sec->shdr.sh_size);
+
+		if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+			die("Seek to %d failed: %s\n",
+			    sec->shdr.sh_offset, strerror(errno));
+
+		if (fread(sec->strtab, 1, sec->shdr.sh_size, fp) !=
+		    sec->shdr.sh_size)
+			die("Cannot read symbol table: %s\n", strerror(errno));
+	}
+}
+
+static void read_symtabs(FILE *fp)
+{
+	int i, j;
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		struct section *sec = &secs[i];
+		if (sec->shdr.sh_type != SHT_SYMTAB)
+			continue;
+
+		sec->symtab = malloc(sec->shdr.sh_size);
+		if (!sec->symtab)
+			die("malloc of %d bytes for symtab failed\n",
+			    sec->shdr.sh_size);
+
+		if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+			die("Seek to %d failed: %s\n",
+			    sec->shdr.sh_offset, strerror(errno));
+
+		if (fread(sec->symtab, 1, sec->shdr.sh_size, fp) !=
+		    sec->shdr.sh_size)
+			die("Cannot read symbol table: %s\n", strerror(errno));
+
+		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Sym); j++) {
+			Elf_Sym *sym = &sec->symtab[j];
+
+			sym->st_name  = elf_word_to_cpu(sym->st_name);
+			sym->st_value = elf_addr_to_cpu(sym->st_value);
+			sym->st_size  = elf_xword_to_cpu(sym->st_size);
+			sym->st_shndx = elf_half_to_cpu(sym->st_shndx);
+		}
+	}
+}
+
+static void read_relocs(FILE *fp)
+{
+	static unsigned long base = 0;
+	int i, j;
+
+	if (!base) {
+		struct section *sec = sec_lookup(".text");
+
+		if (!sec)
+			die("Could not find .text section\n");
+
+		base = sec->shdr.sh_addr;
+	}
+
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		struct section *sec = &secs[i];
+
+		if (sec->shdr.sh_type != SHT_REL_TYPE)
+			continue;
+
+		sec->reltab = malloc(sec->shdr.sh_size);
+		if (!sec->reltab)
+			die("malloc of %d bytes for relocs failed\n",
+			    sec->shdr.sh_size);
+
+		if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+			die("Seek to %d failed: %s\n",
+			    sec->shdr.sh_offset, strerror(errno));
+
+		if (fread(sec->reltab, 1, sec->shdr.sh_size, fp) !=
+		    sec->shdr.sh_size)
+			die("Cannot read symbol table: %s\n", strerror(errno));
+
+		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+			Elf_Rel *rel = &sec->reltab[j];
+
+			rel->r_offset = elf_addr_to_cpu(rel->r_offset);
+			/* Set offset into kernel image */
+			rel->r_offset -= base;
+#if (ELF_BITS == 32)
+			rel->r_info   = elf_xword_to_cpu(rel->r_info);
+#else
+			/* Convert MIPS64 RELA format - only the symbol
+			 * index needs converting to native endianness
+			 */
+			rel->r_info   = rel->r_info;
+			ELF_R_SYM(rel->r_info) = elf32_to_cpu(ELF_R_SYM(rel->r_info));
+#endif
+#if (SHT_REL_TYPE == SHT_RELA)
+			rel->r_addend = elf_xword_to_cpu(rel->r_addend);
+#endif
+		}
+	}
+}
+
+static void remove_relocs(FILE *fp)
+{
+	int i;
+	Elf_Shdr shdr;
+
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		struct section *sec = &secs[i];
+
+		if (sec->shdr.sh_type != SHT_REL_TYPE)
+			continue;
+
+		if (fseek(fp, sec->shdr_offset, SEEK_SET) < 0)
+			die("Seek to %d failed: %s\n",
+			    sec->shdr_offset, strerror(errno));
+
+		if (fread(&shdr, sizeof(shdr), 1, fp) != 1)
+			die("Cannot read ELF section headers %d/%d: %s\n",
+			    i, ehdr.e_shnum, strerror(errno));
+
+		/* Set relocation section size to 0, effectively removing it.
+		 * This is necessary due to lack of support for relocations
+		 * in objcopy when creating 32bit elf from 64bit elf.
+		 */
+		shdr.sh_size = 0;
+
+		if (fseek(fp, sec->shdr_offset, SEEK_SET) < 0)
+			die("Seek to %d failed: %s\n",
+			    sec->shdr_offset, strerror(errno));
+
+		if (fwrite(&shdr, sizeof(shdr), 1, fp) != 1)
+			die("Cannot write ELF section headers %d/%d: %s\n",
+			    i, ehdr.e_shnum, strerror(errno));
+	}
+}
+
+static void add_reloc(struct relocs *r, uint32_t offset, unsigned type)
+{
+	/* Relocation representation in binary table:
+	 * |76543210|76543210|76543210|76543210|
+	 * |  Type  |  offset from _text >> 2  |
+	 */
+	offset >>= 2;
+	if (offset > 0x00FFFFFF)
+		die("Kernel image exceeds maximum size for relocation!\n");
+
+	offset = (offset & 0x00FFFFFF) | ((type & 0xFF) << 24);
+
+	if (r->count == r->size) {
+		unsigned long newsize = r->size + 50000;
+		void *mem = realloc(r->offset, newsize * sizeof(r->offset[0]));
+
+		if (!mem)
+			die("realloc of %ld entries for relocs failed\n", newsize);
+
+		r->offset = mem;
+		r->size = newsize;
+	}
+	r->offset[r->count++] = offset;
+}
+
+static void walk_relocs(int (*process)(struct section *sec, Elf_Rel *rel,
+			Elf_Sym *sym, const char *symname))
+{
+	int i;
+
+	/* Walk through the relocations */
+	for (i = 0; i < ehdr.e_shnum; i++) {
+		char *sym_strtab;
+		Elf_Sym *sh_symtab;
+		struct section *sec_applies, *sec_symtab;
+		int j;
+		struct section *sec = &secs[i];
+
+		if (sec->shdr.sh_type != SHT_REL_TYPE)
+			continue;
+
+		sec_symtab  = sec->link;
+		sec_applies = &secs[sec->shdr.sh_info];
+		if (!(sec_applies->shdr.sh_flags & SHF_ALLOC))
+			continue;
+
+		sh_symtab = sec_symtab->symtab;
+		sym_strtab = sec_symtab->link->strtab;
+		for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+			Elf_Rel *rel = &sec->reltab[j];
+			Elf_Sym *sym = &sh_symtab[ELF_R_SYM(rel->r_info)];
+			const char *symname = sym_name(sym_strtab, sym);
+
+			process(sec, rel, sym, symname);
+		}
+	}
+}
+
+static int do_reloc(struct section *sec, Elf_Rel *rel, Elf_Sym *sym,
+		      const char *symname)
+{
+	unsigned r_type = ELF_R_TYPE(rel->r_info);
+	unsigned bind = ELF_ST_BIND(sym->st_info);
+
+	if ((bind == STB_WEAK) && (sym->st_value == 0)) {
+		/* Don't relocate weak symbols without a target */
+		return 0;
+	}
+
+	if (regex_skip_reloc(symname))
+		return 0;
+
+	switch (r_type) {
+	case R_MIPS_NONE:
+	case R_MIPS_REL32:
+	case R_MIPS_PC16:
+		/*
+		 * NONE can be ignored and PC relative relocations don't
+		 * need to be adjusted.
+		 */
+	case R_MIPS_HIGHEST:
+	case R_MIPS_HIGHER:
+		/* We support relocating within the same 4Gb segment only,
+		 * thus leaving the top 32bits unchanged
+		 */
+	case R_MIPS_LO16:
+		/* We support relocating by 64k jumps only
+		 * thus leaving the bottom 16bits unchanged
+		 */
+		break;
+
+	case R_MIPS_64:
+	case R_MIPS_32:
+	case R_MIPS_26:
+	case R_MIPS_HI16:
+		add_reloc(&relocs, rel->r_offset, r_type);
+		break;
+
+	default:
+		die("Unsupported relocation type: %s (%d)\n",
+		    rel_type(r_type), r_type);
+		break;
+	}
+
+	return 0;
+}
+
+static int write_reloc_as_bin(uint32_t v, FILE *f)
+{
+	unsigned char buf[4];
+
+	v = cpu_to_elf32(v);
+
+	memcpy(buf, &v, sizeof(uint32_t));
+	return fwrite(buf, 1, 4, f);
+}
+
+static int write_reloc_as_text(uint32_t v, FILE *f)
+{
+	int res;
+
+	res = fprintf(f, "\t.long 0x%08"PRIx32"\n", v);
+	if (res < 0)
+		return res;
+	else
+		return sizeof(uint32_t);
+}
+
+static void emit_relocs(int as_text, int as_bin, FILE *outf)
+{
+	int i;
+	int (*write_reloc)(uint32_t, FILE *) = write_reloc_as_bin;
+	int size = 0;
+	int size_reserved;
+	struct section *sec_reloc;
+
+	sec_reloc = sec_lookup(".data.reloc");
+	if (!sec_reloc)
+		die("Could not find relocation section\n");
+
+	size_reserved = sec_reloc->shdr.sh_size;
+
+	/* Collect up the relocations */
+	walk_relocs(do_reloc);
+
+	/* Print the relocations */
+	if (as_text) {
+		/* Print the relocations in a form suitable that
+		 * gas will like.
+		 */
+		printf(".section \".data.reloc\",\"a\"\n");
+		printf(".balign 4\n");
+		/* Output text to stdout */
+		write_reloc = write_reloc_as_text;
+		outf = stdout;
+	} else if (as_bin) {
+		/* Output raw binary to stdout */
+		outf = stdout;
+	} else {
+		/* Seek to offset of the relocation section.
+		* Each relocation is then written into the
+		* vmlinux kernel image.
+		*/
+		if (fseek(outf, sec_reloc->shdr.sh_offset, SEEK_SET) < 0) {
+			die("Seek to %d failed: %s\n",
+				sec_reloc->shdr.sh_offset, strerror(errno));
+		}
+	}
+
+	for (i = 0; i < relocs.count; i++)
+		size += write_reloc(relocs.offset[i], outf);
+
+	/* Print a stop, but only if we've actually written some relocs */
+	if (size)
+		size += write_reloc(0, outf);
+
+	if (size > size_reserved)
+		/* Die, but suggest a value for CONFIG_RELOCATION_TABLE_SIZE
+		 * which will fix this problem and allow a bit of headroom
+		 * if more kernel features are enabled
+		 */
+		die("Relocations overflow available space!\n" \
+		    "Please adjust CONFIG_RELOCATION_TABLE_SIZE " \
+		    "to at least 0x%08x\n", (size + 0x1000) & ~0xFFF);
+}
+
+/*
+ * As an aid to debugging problems with different linkers
+ * print summary information about the relocs.
+ * Since different linkers tend to emit the sections in
+ * different orders we use the section names in the output.
+ */
+static int do_reloc_info(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
+				const char *symname)
+{
+	printf("%16s  0x%08x  %16s  %40s  %16s\n",
+		sec_name(sec->shdr.sh_info),
+		(unsigned int)rel->r_offset,
+		rel_type(ELF_R_TYPE(rel->r_info)),
+		symname,
+		sec_name(sym->st_shndx));
+	return 0;
+}
+
+static void print_reloc_info(void)
+{
+	printf("%16s  %10s  %16s  %40s  %16s\n",
+		"reloc section",
+		"offset",
+		"reloc type",
+		"symbol",
+		"symbol section");
+	walk_relocs(do_reloc_info);
+}
+
+#if ELF_BITS == 64
+# define process process_64
+#else
+# define process process_32
+#endif
+
+void process(FILE *fp, int as_text, int as_bin,
+	     int show_reloc_info, int keep_relocs)
+{
+	regex_init();
+	read_ehdr(fp);
+	read_shdrs(fp);
+	read_strtabs(fp);
+	read_symtabs(fp);
+	read_relocs(fp);
+	if (show_reloc_info) {
+		print_reloc_info();
+		return;
+	}
+	emit_relocs(as_text, as_bin, fp);
+	if (!keep_relocs)
+		remove_relocs(fp);
+}
diff --git a/arch/mips/boot/tools/relocs.h b/arch/mips/boot/tools/relocs.h
new file mode 100644
index 000000000000..33faee617322
--- /dev/null
+++ b/arch/mips/boot/tools/relocs.h
@@ -0,0 +1,34 @@
+#ifndef RELOCS_H
+#define RELOCS_H
+
+#include <stdio.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <elf.h>
+#include <byteswap.h>
+#define USE_BSD
+#include <endian.h>
+#include <regex.h>
+
+void die(char *fmt, ...);
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+enum symtype {
+	S_ABS,
+	S_REL,
+	S_SEG,
+	S_LIN,
+	S_NSYMTYPES
+};
+
+void process_32(FILE *fp, int as_text, int as_bin,
+		int show_reloc_info, int keep_relocs);
+void process_64(FILE *fp, int as_text, int as_bin,
+		int show_reloc_info, int keep_relocs);
+#endif /* RELOCS_H */
diff --git a/arch/mips/boot/tools/relocs_32.c b/arch/mips/boot/tools/relocs_32.c
new file mode 100644
index 000000000000..915bdc07f5ed
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_32.c
@@ -0,0 +1,17 @@
+#include "relocs.h"
+
+#define ELF_BITS 32
+
+#define ELF_MACHINE		EM_MIPS
+#define ELF_MACHINE_NAME	"MIPS"
+#define SHT_REL_TYPE		SHT_REL
+#define Elf_Rel			ElfW(Rel)
+
+#define ELF_CLASS		ELFCLASS32
+#define ELF_R_SYM(val)		ELF32_R_SYM(val)
+#define ELF_R_TYPE(val)		ELF32_R_TYPE(val)
+#define ELF_ST_TYPE(o)		ELF32_ST_TYPE(o)
+#define ELF_ST_BIND(o)		ELF32_ST_BIND(o)
+#define ELF_ST_VISIBILITY(o)	ELF32_ST_VISIBILITY(o)
+
+#include "relocs.c"
diff --git a/arch/mips/boot/tools/relocs_64.c b/arch/mips/boot/tools/relocs_64.c
new file mode 100644
index 000000000000..b671b5e2dcd8
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_64.c
@@ -0,0 +1,27 @@
+#include "relocs.h"
+
+#define ELF_BITS 64
+
+#define ELF_MACHINE             EM_MIPS
+#define ELF_MACHINE_NAME        "MIPS64"
+#define SHT_REL_TYPE            SHT_RELA
+#define Elf_Rel                 Elf64_Rela
+
+typedef uint8_t Elf64_Byte;
+
+typedef struct {
+	Elf64_Word r_sym;	/* Symbol index.  */
+	Elf64_Byte r_ssym;	/* Special symbol.  */
+	Elf64_Byte r_type3;	/* Third relocation.  */
+	Elf64_Byte r_type2;	/* Second relocation.  */
+	Elf64_Byte r_type;	/* First relocation.  */
+} Elf64_Mips_Rela;
+
+#define ELF_CLASS               ELFCLASS64
+#define ELF_R_SYM(val)          (((Elf64_Mips_Rela *)(&val))->r_sym)
+#define ELF_R_TYPE(val)         (((Elf64_Mips_Rela *)(&val))->r_type)
+#define ELF_ST_TYPE(o)          ELF64_ST_TYPE(o)
+#define ELF_ST_BIND(o)          ELF64_ST_BIND(o)
+#define ELF_ST_VISIBILITY(o)    ELF64_ST_VISIBILITY(o)
+
+#include "relocs.c"
diff --git a/arch/mips/boot/tools/relocs_main.c b/arch/mips/boot/tools/relocs_main.c
new file mode 100644
index 000000000000..d8fe2343b8d0
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_main.c
@@ -0,0 +1,84 @@
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <endian.h>
+#include <elf.h>
+
+#include "relocs.h"
+
+void die(char *fmt, ...)
+{
+	va_list ap;
+
+	va_start(ap, fmt);
+	vfprintf(stderr, fmt, ap);
+	va_end(ap);
+	exit(1);
+}
+
+static void usage(void)
+{
+	die("relocs [--reloc-info|--text|--bin|--keep] vmlinux\n");
+}
+
+int main(int argc, char **argv)
+{
+	int show_reloc_info, as_text, as_bin, keep_relocs;
+	const char *fname;
+	FILE *fp;
+	int i;
+	unsigned char e_ident[EI_NIDENT];
+
+	show_reloc_info = 0;
+	as_text = 0;
+	as_bin = 0;
+	keep_relocs = 0;
+	fname = NULL;
+	for (i = 1; i < argc; i++) {
+		char *arg = argv[i];
+
+		if (*arg == '-') {
+			if (strcmp(arg, "--reloc-info") == 0) {
+				show_reloc_info = 1;
+				continue;
+			}
+			if (strcmp(arg, "--text") == 0) {
+				as_text = 1;
+				continue;
+			}
+			if (strcmp(arg, "--bin") == 0) {
+				as_bin = 1;
+				continue;
+			}
+			if (strcmp(arg, "--keep") == 0) {
+				keep_relocs = 1;
+				continue;
+			}
+		} else if (!fname) {
+			fname = arg;
+			continue;
+		}
+		usage();
+	}
+	if (!fname)
+		usage();
+
+	fp = fopen(fname, "r+");
+	if (!fp)
+		die("Cannot open %s: %s\n", fname, strerror(errno));
+
+	if (fread(&e_ident, 1, EI_NIDENT, fp) != EI_NIDENT)
+		die("Cannot read %s: %s", fname, strerror(errno));
+
+	rewind(fp);
+	if (e_ident[EI_CLASS] == ELFCLASS64)
+		process_64(fp, as_text,  as_bin, show_reloc_info, keep_relocs);
+	else
+		process_32(fp, as_text, as_bin, show_reloc_info, keep_relocs);
+	fclose(fp);
+	return 0;
+}
-- 
2.1.4

* [PATCH 2/9] MIPS: tools: Build relocs tool
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

Build the relocs tool as part of the kbuild process.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 252e347958f3..33fbfd276671 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -12,6 +12,9 @@
 # for "archclean" cleaning up for this architecture.
 #
 
+archscripts: scripts_basic
+	$(Q)$(MAKE) $(build)=arch/mips/boot/tools relocs
+
 KBUILD_DEFCONFIG := ip22_defconfig
 
 #
-- 
2.1.4

* [PATCH 3/9] MIPS: Reserve space for relocation table
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

When CONFIG_RELOCATABLE is enabled, add a new section in the memory map
to be filled with relocation data.

CONFIG_RELOCATION_TABLE_SIZE allows the amount of space reserved to be
adjusted if necessary.

The relocs tool will populate this reserved space with relocation
information. The space is reserved within the ELF by filling it with
zeros, and an invalid entry is left at the start of the space so that
kernel relocation will be aborted if the table is empty.
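
For illustration only (not part of this patch), a minimal sketch of how a
consumer of the table could detect that it was never populated, assuming
the 0xFFFFFFFF placeholder described above and the _relocation_start
symbol provided by the linker script:

	/* Illustrative sketch; the real table parsing is added in a later patch */
	extern long _relocation_start;	/* provided by the linker script */

	static int __init relocation_table_populated(void)
	{
		u32 first_entry = *(u32 *)&_relocation_start;

		/* 0xFFFFFFFF is the invalid entry placed at the start of the section */
		return first_entry != 0xFFFFFFFF;
	}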

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/Kconfig              | 14 ++++++++++++++
 arch/mips/kernel/vmlinux.lds.S | 20 ++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index e3aa5b0b4ef1..b8ed64dfaafc 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2431,6 +2431,20 @@ config NUMA
 config SYS_SUPPORTS_NUMA
 	bool
 
+config RELOCATION_TABLE_SIZE
+	hex "Relocation table size"
+	depends on RELOCATABLE
+	range 0x0 0x01000000
+	default "0x00100000"
+	---help---
+	  A table of relocation data will be appended to the kernel binary
+	  and parsed at boot to fix up the relocated kernel.
+
+	  This option allows the amount of space reserved for the table to be
+	  adjusted, although the default of 1Mb should be ok in most cases.
+
+	  If unsure, leave at the default value.
+
 config NODES_SHIFT
 	int
 	default "6"
diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
index 07d32a4aea60..27d70423f1dd 100644
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -128,6 +128,26 @@ SECTIONS
 #ifdef CONFIG_SMP
 	PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
 #endif
+
+#ifdef CONFIG_RELOCATABLE
+	. = ALIGN(4);
+
+	.data.reloc : {
+		_relocation_start = .;
+		/* Space for relocation table
+		 * This needs to be filled so that the
+		 * relocs tool can overwrite the content.
+		 * An invalid value is left at the start of the
+		 * section to abort relocation if the table
+		 * has not been filled in.
+		 */
+		LONG(0xFFFFFFFF);
+		FILL(0);
+		. += CONFIG_RELOCATION_TABLE_SIZE - 4;
+		_relocation_end = .;
+	}
+#endif
+
 #ifdef CONFIG_MIPS_RAW_APPENDED_DTB
 	__appended_dtb = .;
 	/* leave space for appended DTB */
-- 
2.1.4

* [PATCH 4/9] MIPS: Generate relocation table when CONFIG_RELOCATABLE
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

When CONFIG_RELOCATABLE is enabled (added in a later patch), add
--emit-relocs to the vmlinux LDFLAGS so that the fully linked vmlinux
contains relocation information.

Run the previously added relocs tool to fill in the .data.reloc section
of vmlinux with a table of relocations. The relocs tool will also remove
(mark as zero length) the relocation sections added to vmlinux.

When vmlinux is passed to the boot makefile for conversion into a boot
image, the now-empty relocation sections will be removed and the
populated relocation table will be included in the binary image.
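
For illustration only (not part of this patch), a minimal host-side sketch
of the packing each 32-bit table entry would use, assuming the layout that
relocate.c (added in a later patch) decodes - an 8-bit relocation type in
the top byte and a 24-bit word offset from the start of the kernel below
it:

	/* Illustrative sketch, not the relocs tool's actual code */
	#include <stdint.h>

	static uint32_t pack_reloc_entry(unsigned long r_offset,
					 unsigned long text_base,
					 unsigned char type)
	{
		/* the offset is stored in words, not bytes */
		uint32_t word_offset = (uint32_t)((r_offset - text_base) >> 2);

		return ((uint32_t)type << 24) | (word_offset & 0x00ffffff);
	}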

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/Makefile | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index 33fbfd276671..5a01a9e21274 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -96,6 +96,10 @@ LDFLAGS_vmlinux			+= -G 0 -static -n -nostdlib
 KBUILD_AFLAGS_MODULE		+= -mlong-calls
 KBUILD_CFLAGS_MODULE		+= -mlong-calls
 
+ifeq ($(CONFIG_RELOCATABLE),y)
+LDFLAGS_vmlinux			+= --emit-relocs
+endif
+
 #
 # pass -msoft-float to GAS if it supports it.  However on newer binutils
 # (specifically newer than 2.24.51.20140728) we then also need to explicitly
@@ -319,6 +323,10 @@ rom.bin rom.sw: vmlinux
 		$(bootvars-y) $@
 endif
 
+CMD_RELOCS = arch/mips/boot/tools/relocs
+quiet_cmd_relocs = RELOCS  $<
+      cmd_relocs = $(CMD_RELOCS) $<
+
 #
 # Some machines like the Indy need 32-bit ELF binaries for booting purposes.
 # Other need ECOFF, so we build a 32-bit ELF binary for them which we then
@@ -327,6 +335,11 @@ endif
 quiet_cmd_32 = OBJCOPY $@
 	cmd_32 = $(OBJCOPY) -O $(32bit-bfd) $(OBJCOPYFLAGS) $< $@
 vmlinux.32: vmlinux
+ifeq ($(CONFIG_RELOCATABLE)$(CONFIG_64BIT),yy)
+# Currently, objcopy fails to handle the relocations in the elf64
+# So the relocs tool must be run here to remove them first
+	$(call cmd,relocs)
+endif
 	$(call cmd,32)
 
 #
@@ -342,6 +355,9 @@ all:	$(all-y)
 
 # boot
 $(boot-y): $(vmlinux-32) FORCE
+ifeq ($(CONFIG_RELOCATABLE)$(CONFIG_32BIT),yy)
+	$(call cmd,relocs)
+endif
 	$(Q)$(MAKE) $(build)=arch/mips/boot VMLINUX=$(vmlinux-32) \
 		$(bootvars-y) arch/mips/boot/$@
 
-- 
2.1.4

* [PATCH 5/9] MIPS: Kernel: Add relocate.c
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

arch/mips/kernel/relocate.c contains the functions necessary to relocate
the kernel elsewhere in memory.

The kernel makes a copy of itself at the new address. It uses the
relocation table inserted by the relocs tool to fix symbol references
within the new image.

If the copy/relocation is successful, the entry point of the new kernel
is returned; otherwise we fall back to starting the kernel in place.
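
As a worked example (illustrative addresses only): with a 32-bit kernel
linked at 0x80100000 and copied to 0x81000000, offset = 0x00f00000, so a
word covered by an R_MIPS_32 relocation that originally held 0x80123456
is rewritten in the new image as 0x80123456 + 0x00f00000 = 0x81023456.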

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/kernel/Makefile   |   2 +
 arch/mips/kernel/relocate.c | 232 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 234 insertions(+)
 create mode 100644 arch/mips/kernel/relocate.c

diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile
index d982be1ea1c3..694d54b1e7bf 100644
--- a/arch/mips/kernel/Makefile
+++ b/arch/mips/kernel/Makefile
@@ -83,6 +83,8 @@ obj-$(CONFIG_I8253)		+= i8253.o
 
 obj-$(CONFIG_GPIO_TXX9)		+= gpio_txx9.o
 
+obj-$(CONFIG_RELOCATABLE)	+= relocate.o
+
 obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o crash.o
 obj-$(CONFIG_CRASH_DUMP)	+= crash_dump.o
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
new file mode 100644
index 000000000000..3cb97ab25a5f
--- /dev/null
+++ b/arch/mips/kernel/relocate.c
@@ -0,0 +1,232 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Support for Kernel relocation at boot time
+ *
+ * Copyright (C) 2015, Imagination Technologies Ltd.
+ * Authors: Matt Redfearn (matt.redfearn@imgtec.com)
+ */
+#include <asm/cacheflush.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+#include <asm/timex.h>
+#include <linux/elf.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/start_kernel.h>
+#include <linux/string.h>
+
+extern long _relocation_start;	/* End kernel image / start relocation table */
+extern long _relocation_end;	/* End relocation table */
+
+extern long __start___ex_table;	/* Start exception table */
+extern long __stop___ex_table;	/* End exception table */
+
+static inline u32 __init get_synci_step(void)
+{
+	u32 res;
+
+	__asm__ __volatile__("rdhwr  %0, $1" : "=r" (res));
+
+	return res;
+}
+
+static void __init sync_icache(void *kbase, unsigned long kernel_length)
+{
+	void *kend = kbase + kernel_length;
+	u32 step = get_synci_step();
+
+	do {
+		__asm__ __volatile__(
+			"synci  0(%0)"
+			: /* no output */
+			: "r" (kbase));
+
+		kbase += step;
+	} while (kbase < kend);
+
+	/* Completion barrier */
+	__sync();
+}
+
+static int __init apply_r_mips_64_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+	*(u64 *)loc_new += offset;
+
+	return 0;
+}
+
+static int __init apply_r_mips_32_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+	*loc_new += offset;
+
+	return 0;
+}
+
+static int __init apply_r_mips_26_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+	unsigned long target_addr = (*loc_orig) & 0x03ffffff;
+
+	if (offset % 4) {
+		pr_err("Dangerous R_MIPS_26 REL relocation\n");
+		return -ENOEXEC;
+	}
+
+	/* Original target address */
+	target_addr <<= 2;
+	target_addr += (unsigned long)loc_orig & ~0x03ffffff;
+
+	/* Get the new target address */
+	target_addr = (long)target_addr + offset;
+
+	if ((target_addr & 0xf0000000) != ((unsigned long)loc_new & 0xf0000000)) {
+		pr_err("R_MIPS_26 REL relocation overflow\n");
+		return -ENOEXEC;
+	}
+
+	target_addr -= (unsigned long)loc_new & ~0x03ffffff;
+	target_addr >>= 2;
+
+	*loc_new = (*loc_new & ~0x03ffffff) | (target_addr & 0x03ffffff);
+
+	return 0;
+}
+
+
+static int __init apply_r_mips_hi16_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+	unsigned long insn = *loc_orig;
+	unsigned long target = (insn & 0xffff) << 16; /* high 16bits of target */
+
+	target += offset;
+
+	*loc_new = (insn & ~0xffff) | ((target >> 16) & 0xffff);
+	return 0;
+}
+
+static int (*reloc_handlers_rel[]) (u32 *, u32 *, long) __initdata = {
+	[R_MIPS_64]		= apply_r_mips_64_rel,
+	[R_MIPS_32]		= apply_r_mips_32_rel,
+	[R_MIPS_26]		= apply_r_mips_26_rel,
+	[R_MIPS_HI16]		= apply_r_mips_hi16_rel,
+};
+
+int __init do_relocations(void *kbase_old, void *kbase_new, long offset)
+{
+	u32 *r;
+	u32 *loc_orig;
+	u32 *loc_new;
+	int type;
+	int res;
+
+	for (r = (u32 *)&_relocation_start; r < (u32 *)&_relocation_end; r++) {
+		/* Sentinel for last relocation */
+		if (*r == 0)
+			break;
+
+		type = (*r >> 24) & 0xff;
+		loc_orig = (void *)(kbase_old + ((*r & 0x00ffffff) << 2));
+		loc_new = (void *)((unsigned long)loc_orig + offset);
+
+		if (reloc_handlers_rel[type] == NULL)
+			/* Unsupported relocation */
+			return -ENOEXEC;
+
+		res = reloc_handlers_rel[type](loc_orig, loc_new, offset);
+		if (res)
+			return res;
+	}
+
+	return 0;
+}
+
+/*
+ * The exception table is filled in by a tool after vmlinux is linked.
+ * It must be relocated separately since there will not be any relocation
+ * information for it filled in by the linker.
+ */
+static int __init relocate_exception_table(long offset)
+{
+	unsigned long *etable_start, *etable_end, *e;
+
+	etable_start = (void *)((unsigned long)&__start___ex_table + offset);
+	etable_end = (void *)((unsigned long)&__stop___ex_table + offset);
+
+	for (e = etable_start; e < etable_end; e++)
+		*e += offset;
+
+	return 0;
+}
+
+static inline void __init *determine_relocation_address(void)
+{
+	/*
+	 * Choose a new address for the kernel
+	 * For now we'll hard code the destination
+	 */
+	return (void *)0xffffffff81000000;
+}
+
+static inline int __init relocation_addr_valid(void *loc_new)
+{
+	if ((unsigned long)loc_new & 0x0000ffff)
+		return 0; /* Inappropriately aligned new location */
+	if ((unsigned long)loc_new < (unsigned long)&_end)
+		return 0; /* New location overlaps original kernel */
+	return 1;
+}
+
+void __init *relocate_kernel(void)
+{
+	void *loc_new;
+	unsigned long kernel_length;
+	long offset = 0;
+	int res = 1;
+
+	kernel_length = (long)(&_relocation_start) - (long)(&_text);
+	loc_new = determine_relocation_address();
+
+	/* Sanity check relocation address */
+	if (relocation_addr_valid(loc_new))
+		offset = (unsigned long)loc_new - (unsigned long)(&_text);
+
+	if (offset) {
+		/* Copy the kernel to it's new location */
+		memcpy(loc_new, &_text, kernel_length);
+
+		/* Perform relocations on the new kernel */
+		res = do_relocations(&_text, loc_new, offset);
+
+		if (res == 0) {
+			/* Sync the caches ready for execution of new kernel */
+			sync_icache(loc_new, kernel_length);
+
+			res = relocate_exception_table(offset);
+		}
+	}
+
+	if (res == 0) {
+		void *bss_new = (void *)((long)&__bss_start + offset);
+		long bss_length = (long)&__bss_stop - (long)&__bss_start;
+		/*
+		 * The original .bss has already been cleared, and
+		 * some variables such as command line parameters
+		 * stored to it so make a copy in the new location.
+		 */
+		memcpy(bss_new, &__bss_start, bss_length);
+
+		/* The current thread is now within the relocated image */
+		__current_thread_info = (void *)((long)&init_thread_union + offset);
+
+		/* Return the new kernel's entry point */
+		return (void *)((long)start_kernel + offset);
+	} else {
+		/*
+		 * Something went wrong in the relocation process
+		 * Just boot the original kernel
+		 */
+		return start_kernel;
+	}
+}
-- 
2.1.4

* [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.

This function will return the entry point of the relocated kernel if the
copy/relocation is successful, or the original entry point if not. The
stack pointer must then be pointed into the new image.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/kernel/head.S | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index 4e4cc5b9a771..7dc043349d66 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -132,7 +132,27 @@ not_found:
 	set_saved_sp	sp, t0, t1
 	PTR_SUBU	sp, 4 * SZREG		# init stack pointer
 
+#ifdef CONFIG_RELOCATABLE
+	/* Copy kernel and apply the relocations */
+	jal		relocate_kernel
+
+	/* Repoint the sp into the new kernel image */
+	PTR_LI		sp, _THREAD_SIZE - 32 - PT_SIZE
+	PTR_ADDU	sp, $28
+	set_saved_sp	sp, t0, t1
+	PTR_SUBU	sp, 4 * SZREG		# init stack pointer
+
+	/*
+	 * relocate_kernel returns the entry point either
+	 * in the relocated kernel or the original if for
+	 * some reason relocation failed - jump there now
+	 * with instruction hazard barrier because of the
+	 * newly sync'd icache.
+	 */
+	jr.hb		v0
+#else
 	j		start_kernel
+#endif
 	END(kernel_entry)
 
 #ifdef CONFIG_SMP
-- 
2.1.4

* [PATCH 7/9] MIPS: bootmem: When relocatable, free memory below kernel
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

The kernel reserves all memory before the _end symbol as bootmem;
however, once the kernel can be relocated elsewhere in memory, this may
result in a large amount of wasted memory. The assumption is that the
memory between the link address and the relocated address of the kernel
may be released back to the available memory pool.

Memory statistics for a Malta board with the kernel relocated by
16MB, without the patch:
Memory: 105952K/131072K available (4604K kernel code, 242K rwdata,
892K rodata, 1280K init, 183K bss, 25120K reserved, 0K cma-reserved)
And with the patch:
Memory: 122336K/131072K available (4604K kernel code, 242K rwdata,
892K rodata, 1280K init, 183K bss, 8736K reserved, 0K cma-reserved)

The 16MB offset is removed from the reserved region and added back to
the available region.
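
As a cross-check on the figures above: 122336K - 105952K = 16384K = 16MB
of additional available memory, and 25120K - 8736K = 16384K less reserved,
both matching the relocation offset.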

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/kernel/setup.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 479515109e5b..15c3e5892ced 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -464,6 +464,19 @@ static void __init bootmem_init(void)
 	 */
 	reserve_bootmem(PFN_PHYS(mapstart), bootmap_size, BOOTMEM_DEFAULT);
 
+#ifdef CONFIG_RELOCATABLE
+	/* The kernel reserves all memory below it's _end symbol as bootmem,
+	 * but the kernel may now be at a much higher address. The memory
+	 * between the original and new locations may be returned to the system.
+	 */
+	if (__pa_symbol(_text) > __pa_symbol(VMLINUX_LOAD_ADDRESS)) {
+		unsigned long offset;
+
+		offset = __pa_symbol(_text) - __pa_symbol(VMLINUX_LOAD_ADDRESS);
+		free_bootmem(__pa_symbol(VMLINUX_LOAD_ADDRESS), offset);
+	}
+#endif
+
 	/*
 	 * Reserve initrd memory if needed.
 	 */
-- 
2.1.4

* [PATCH 8/9] MIPS: Add CONFIG_RELOCATABLE Kconfig option
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

Add an option to Kconfig to enable the kernel to relocate itself at
runtime.

Relocation is supported on R2 of the MIPS architecture, 32bit and 64bit.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/Kconfig | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index b8ed64dfaafc..5b0339c91a33 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2431,6 +2431,15 @@ config NUMA
 config SYS_SUPPORTS_NUMA
 	bool
 
+config RELOCATABLE
+	bool "Relocatable kernel"
+	depends on CPU_MIPS32_R2 || CPU_MIPS64_R2
+	help
+	  This builds a kernel image that retains relocation information
+	  so it can be loaded someplace besides the default 1MB.
+	  The relocations make the kernel binary about 15% larger,
+	  but are discarded at runtime
+
 config RELOCATION_TABLE_SIZE
 	hex "Relocation table size"
 	depends on RELOCATABLE
-- 
2.1.4

* [PATCH 9/9] MIPS: Kernel: Implement kASLR using CONFIG_RELOCATABLE
@ 2015-12-03 10:08   ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 10:08 UTC (permalink / raw)
  To: linux-mips; +Cc: Matt Redfearn

This patch adds kASLR to the MIPS kernel.

Entropy is derived from the kernel banner, which will change with every
build, and from random_get_entropy(), which should provide additional
runtime entropy.

The kernel is relocated by up to RANDOMIZE_BASE_MAX_OFFSET bytes from
its link address (PHYSICAL_START). Because relocation happens so early
in the kernel boot, the amount of physical memory has not yet been
determined, nor has the command line been parsed. This means the only
way to limit relocation within the available memory is via Kconfig.
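
To illustrate the resulting granularity (derived from the code below): the
boot-time hash is shifted left by 16 bits and masked with
RANDOMIZE_BASE_MAX_OFFSET - 1, so the kernel always lands at a 64KB-aligned
offset; with the default maximum of 0x01000000 (16MB) that gives at most
256 possible positions.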

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
---
 arch/mips/Kconfig           | 30 +++++++++++++++++++++
 arch/mips/kernel/relocate.c | 66 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 5b0339c91a33..0f8425e5414b 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2452,6 +2452,36 @@ config RELOCATION_TABLE_SIZE
 	  This option allows the amount of space reserved for the table to be
 	  adjusted, although the default of 1Mb should be ok in most cases.
 
+config RANDOMIZE_BASE
+	bool "Randomize the address of the kernel image"
+	depends on RELOCATABLE
+	---help---
+	   Randomizes the physical and virtual address at which the
+	   kernel image is loaded, as a security feature that
+	   deters exploit attempts relying on knowledge of the location
+	   of kernel internals.
+
+	   Entropy is generated using any coprocessor 0 registers available.
+
+	   The kernel will be offset by up to RANDOMIZE_BASE_MAX_OFFSET.
+
+	   If unsure, say N.
+
+config RANDOMIZE_BASE_MAX_OFFSET
+	hex "Maximum kASLR offset" if EXPERT
+	depends on RANDOMIZE_BASE
+	range 0x0 0x40000000 if EVA || 64BIT
+	range 0x0 0x08000000
+	default "0x01000000"
+	---help---
+	  When kASLR is active, this provides the maximum offset that will
+	  be applied to the kernel image. It should be set according to the
+	  amount of physical RAM available in the target system minus
+	  PHYSICAL_START and must be a power of 2.
+
+	  This is limited by the size of KSEG0, 256Mb on 32-bit or 1Gb with
+	  EVA or 64-bit. The default is 16Mb.
+
 	  If unsure, leave at the default value.
 
 config NODES_SHIFT
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
index 3cb97ab25a5f..965d0af28a37 100644
--- a/arch/mips/kernel/relocate.c
+++ b/arch/mips/kernel/relocate.c
@@ -17,6 +17,7 @@
 #include <linux/sched.h>
 #include <linux/start_kernel.h>
 #include <linux/string.h>
+#include <linux/printk.h>
 
 extern long _relocation_start;	/* End kernel image / start relocation table */
 extern long _relocation_end;	/* End relocation table */
@@ -160,6 +161,54 @@ static int __init relocate_exception_table(long offset)
 	return 0;
 }
 
+#ifdef CONFIG_RANDOMIZE_BASE
+
+static inline unsigned long rotate_xor(unsigned long hash, const void *area,
+				size_t size)
+{
+	size_t i;
+	unsigned long *ptr = (unsigned long *)area;
+
+	for (i = 0; i < size / sizeof(hash); i++) {
+		/* Rotate by odd number of bits and XOR. */
+		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+		hash ^= ptr[i];
+	}
+
+	return hash;
+}
+
+static inline unsigned long get_random_boot(void)
+{
+	unsigned long entropy = random_get_entropy();
+	unsigned long hash = 0;
+
+	/* Attempt to create a simple but unpredictable starting entropy. */
+	hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
+
+	/* Add in runtime entropy */
+	hash = rotate_xor(hash, &entropy, sizeof(entropy));
+
+	return hash;
+}
+
+static inline void __init *determine_relocation_address(void)
+{
+	/* Choose a new address for the kernel */
+	unsigned long dest = (unsigned long)_end;
+	unsigned long offset;
+
+	/* Round _end up to next 64k boundary */
+	dest = ALIGN(dest, 0xffff);
+
+	offset = get_random_boot() << 16;
+	offset &= (CONFIG_RANDOMIZE_BASE_MAX_OFFSET-1);
+
+	return (void *)(dest + offset);
+}
+
+#else
+
 static inline void __init *determine_relocation_address(void)
 {
 	/*
@@ -169,6 +218,8 @@ static inline void __init *determine_relocation_address(void)
 	return (void *)0xffffffff81000000;
 }
 
+#endif
+
 static inline int __init relocation_addr_valid(void *loc_new)
 {
 	if ((unsigned long)loc_new & 0x0000ffff)
@@ -210,10 +261,23 @@ void __init *relocate_kernel(void)
 	if (res == 0) {
 		void *bss_new = (void *)((long)&__bss_start + offset);
 		long bss_length = (long)&__bss_stop - (long)&__bss_start;
+#if (defined CONFIG_DEBUG_KERNEL) && (defined CONFIG_DEBUG_INFO)
+		/*
+		 * This information is necessary when debugging the kernel
+		 * But is a security vulnerability otherwise!
+		 */
+		void *data_new = (void *)((long)&_sdata + offset);
+
+		pr_info("Booting relocated kernel\n");
+		pr_info(" .text @ 0x%pK\n", loc_new);
+		pr_info(" .data @ 0x%pK\n", data_new);
+		pr_info(" .bss  @ 0x%pK\n", bss_new);
+#endif
 		/*
 		 * The original .bss has already been cleared, and
 		 * some variables such as command line parameters
-		 * stored to it so make a copy in the new location.
+		 * (and possibly the above printk's) stored to it
+		 * so make a copy in the new location.
 		 */
 		memcpy(bss_new, &__bss_start, bss_length);
 
-- 
2.1.4

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
  2015-12-03 10:08   ` Matt Redfearn
  (?)
@ 2015-12-03 14:24   ` Sergei Shtylyov
  2015-12-03 14:53       ` Matt Redfearn
  -1 siblings, 1 reply; 34+ messages in thread
From: Sergei Shtylyov @ 2015-12-03 14:24 UTC (permalink / raw)
  To: Matt Redfearn, linux-mips

Hello.

On 12/3/2015 1:08 PM, Matt Redfearn wrote:

> If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.
>
> This function will return the entry point of the relocated kernel if
> copy/relocate is sucessful or the original entry point if not. The stack
> pointer must then be pointed into the new image.
>
> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
> ---
>   arch/mips/kernel/head.S | 20 ++++++++++++++++++++
>   1 file changed, 20 insertions(+)
>
> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
> index 4e4cc5b9a771..7dc043349d66 100644
> --- a/arch/mips/kernel/head.S
> +++ b/arch/mips/kernel/head.S
> @@ -132,7 +132,27 @@ not_found:
>   	set_saved_sp	sp, t0, t1
>   	PTR_SUBU	sp, 4 * SZREG		# init stack pointer
>
> +#ifdef CONFIG_RELOCATABLE
> +	/* Copy kernel and apply the relocations */
> +	jal		relocate_kernel
> +
> +	/* Repoint the sp into the new kernel image */
> +	PTR_LI		sp, _THREAD_SIZE - 32 - PT_SIZE
> +	PTR_ADDU	sp, $28

    Can't you account for it in the previous PTR_LI?

> +	set_saved_sp	sp, t0, t1
> +	PTR_SUBU	sp, 4 * SZREG		# init stack pointer
[...]

MBR, Sergei

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
@ 2015-12-03 14:53       ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-03 14:53 UTC (permalink / raw)
  To: Sergei Shtylyov, linux-mips

Hi Sergei,

On 03/12/15 14:24, Sergei Shtylyov wrote:
> Hello.
>
> On 12/3/2015 1:08 PM, Matt Redfearn wrote:
>
>> If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.
>>
>> This function will return the entry point of the relocated kernel if
>> copy/relocate is sucessful or the original entry point if not. The stack
>> pointer must then be pointed into the new image.
>>
>> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
>> ---
>>   arch/mips/kernel/head.S | 20 ++++++++++++++++++++
>>   1 file changed, 20 insertions(+)
>>
>> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
>> index 4e4cc5b9a771..7dc043349d66 100644
>> --- a/arch/mips/kernel/head.S
>> +++ b/arch/mips/kernel/head.S
>> @@ -132,7 +132,27 @@ not_found:
>>       set_saved_sp    sp, t0, t1
>>       PTR_SUBU    sp, 4 * SZREG        # init stack pointer
>>
>> +#ifdef CONFIG_RELOCATABLE
>> +    /* Copy kernel and apply the relocations */
>> +    jal        relocate_kernel
>> +
>> +    /* Repoint the sp into the new kernel image */
>> +    PTR_LI        sp, _THREAD_SIZE - 32 - PT_SIZE
>> +    PTR_ADDU    sp, $28
>
>    Can't you account for it in the previous PTR_LI?
During relocate_kernel, $28, the pointer to the current thread, has been
moved by a number of bytes unknown here so that it points to the
init_thread_union within the new kernel. The stack pointer must now be
pointed there too. Since we don't know the offset from the original
kernel, it's easier to simply recalculate it.

Thanks,
Matt
>
>> +    set_saved_sp    sp, t0, t1
>> +    PTR_SUBU    sp, 4 * SZREG        # init stack pointer
> [...]
>
> MBR, Sergei
>

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
  2015-12-03 14:53       ` Matt Redfearn
  (?)
@ 2015-12-03 17:46       ` Sergei Shtylyov
  2015-12-03 18:54         ` James Hogan
  -1 siblings, 1 reply; 34+ messages in thread
From: Sergei Shtylyov @ 2015-12-03 17:46 UTC (permalink / raw)
  To: Matt Redfearn, linux-mips

On 12/03/2015 05:53 PM, Matt Redfearn wrote:

>>> If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.
>>>
>>> This function will return the entry point of the relocated kernel if
>>> copy/relocate is sucessful or the original entry point if not. The stack
>>> pointer must then be pointed into the new image.
>>>
>>> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
>>> ---
>>>   arch/mips/kernel/head.S | 20 ++++++++++++++++++++
>>>   1 file changed, 20 insertions(+)
>>>
>>> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
>>> index 4e4cc5b9a771..7dc043349d66 100644
>>> --- a/arch/mips/kernel/head.S
>>> +++ b/arch/mips/kernel/head.S
>>> @@ -132,7 +132,27 @@ not_found:
>>>       set_saved_sp    sp, t0, t1
>>>       PTR_SUBU    sp, 4 * SZREG        # init stack pointer
>>>
>>> +#ifdef CONFIG_RELOCATABLE
>>> +    /* Copy kernel and apply the relocations */
>>> +    jal        relocate_kernel
>>> +
>>> +    /* Repoint the sp into the new kernel image */
>>> +    PTR_LI        sp, _THREAD_SIZE - 32 - PT_SIZE
>>> +    PTR_ADDU    sp, $28
>>
>>    Can't you account for it in the previous PTR_LI?

> During relocate_kernel, $28, pointer to the current thread,

    Ah, it's a register! I thought it was an immediate. Nevermind then. :-)

[...]

MBR, Sergei

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
  2015-12-03 17:46       ` Sergei Shtylyov
@ 2015-12-03 18:54         ` James Hogan
  2015-12-04  8:20             ` Matt Redfearn
  0 siblings, 1 reply; 34+ messages in thread
From: James Hogan @ 2015-12-03 18:54 UTC (permalink / raw)
  To: Sergei Shtylyov, Matt Redfearn, linux-mips

On 3 December 2015 17:46:14 GMT+00:00, Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> wrote:
>On 12/03/2015 05:53 PM, Matt Redfearn wrote:
>
>>>> If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.
>>>>
>>>> This function will return the entry point of the relocated kernel
>if
>>>> copy/relocate is sucessful or the original entry point if not. The
>stack
>>>> pointer must then be pointed into the new image.
>>>>
>>>> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
>>>> ---
>>>>   arch/mips/kernel/head.S | 20 ++++++++++++++++++++
>>>>   1 file changed, 20 insertions(+)
>>>>
>>>> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
>>>> index 4e4cc5b9a771..7dc043349d66 100644
>>>> --- a/arch/mips/kernel/head.S
>>>> +++ b/arch/mips/kernel/head.S
>>>> @@ -132,7 +132,27 @@ not_found:
>>>>       set_saved_sp    sp, t0, t1
>>>>       PTR_SUBU    sp, 4 * SZREG        # init stack pointer
>>>>
>>>> +#ifdef CONFIG_RELOCATABLE
>>>> +    /* Copy kernel and apply the relocations */
>>>> +    jal        relocate_kernel
>>>> +
>>>> +    /* Repoint the sp into the new kernel image */
>>>> +    PTR_LI        sp, _THREAD_SIZE - 32 - PT_SIZE
>>>> +    PTR_ADDU    sp, $28
>>>
>>>    Can't you account for it in the previous PTR_LI?
>
>> During relocate_kernel, $28, pointer to the current thread,
>
>Ah, it's a register! I thought it was an immediate. Nevermind then. :-)

Although it could still be reduced to:
PTR_ADDU sp, gp, _THREAD_SIZE - 32 - PT_SIZE

Assuming the immediate is within the range of a signed 16-bit value.

Cheers
James

>
>[...]
>
>MBR, Sergei


-- 
James Hogan

* Re: [PATCH 0/9] MIPS Relocatable kernel & KASLR
  2015-12-03 10:08 ` Matt Redfearn
                   ` (9 preceding siblings ...)
  (?)
@ 2015-12-03 22:23 ` Joshua Kinard
  2015-12-04  8:14     ` Matt Redfearn
  -1 siblings, 1 reply; 34+ messages in thread
From: Joshua Kinard @ 2015-12-03 22:23 UTC (permalink / raw)
  To: Matt Redfearn; +Cc: linux-mips

On 12/03/2015 05:08, Matt Redfearn wrote:
> This series adds the ability for the MIPS kernel to relocate itself at
> runtime, optionally to an address determined at random each boot. This
> series is based on v4.3 and has been tested on the Malta platform.

[snip]

> * Relocation is currently supported on R2 of the MIPS architecture,
>   32bit and 64bit.

Out of curiosity, why is this capability restricted to MIPS R2 and higher?
IRIX kernels and the 'sash' tool were both relocatable on the older SGI
platforms.  Does the feature, as implemented, rely on R2-specific
instructions/capabilities, or is the restriction only due to a lack of
testing on pre-R2 hardware?

--J

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 0/9] MIPS Relocatable kernel & KASLR
@ 2015-12-04  8:14     ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-04  8:14 UTC (permalink / raw)
  To: Joshua Kinard; +Cc: linux-mips

Hi Joshua,
The patch as it stands uses a couple of MIPS R2 additional instructions 
to deal with synchronizing icache. Firstly, the synci instruction to 
ensure that icache is in sync with the dcache after the relocated kernel 
has been written, and the jr.hb instruction to resolve any hazards 
created by writing the new kernel before jumping to it.
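
Roughly, the sequence is along these lines (just a sketch rather than the
exact code from the series; the register choices and the 32-byte synci step
are assumptions):

	/* $a0 = start of relocated image, $a1 = end, $v0 = new entry point */
1:	synci	0($a0)			# write back dcache line, invalidate icache line
	addiu	$a0, $a0, 32		# assumed synci step size
	sltu	$t0, $a0, $a1
	bnez	$t0, 1b
	 nop				# branch delay slot
	sync				# wait for the cache ops to complete
	jr.hb	$v0			# jump to the new kernel with a hazard barrier
	 nop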

Thanks,
Matt

On 03/12/15 22:23, Joshua Kinard wrote:
> On 12/03/2015 05:08, Matt Redfearn wrote:
>> This series adds the ability for the MIPS kernel to relocate itself at
>> runtime, optionally to an address determined at random each boot. This
>> series is based on v4.3 and has been tested on the Malta platform.
> [snip]
>
>> * Relocation is currently supported on R2 of the MIPS architecture,
>>    32bit and 64bit.
> Out of curiosity, why is this capability restricted to MIPS R2 and higher?
> IRIX kernels and the 'sash' tool were both relocatable on the older SGI
> platforms.  Does the feature, as implemented, rely on R2-specific
> instructions/capabilities, or is it restricted only due to lack of testing on pre-R2 hardware?
>
> --J
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
@ 2015-12-04  8:20             ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-04  8:20 UTC (permalink / raw)
  To: James Hogan, Sergei Shtylyov, linux-mips

Hi James,

On 03/12/15 18:54, James Hogan wrote:
> On 3 December 2015 17:46:14 GMT+00:00, Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> wrote:
>> On 12/03/2015 05:53 PM, Matt Redfearn wrote:
>>
>>>>> If CONFIG_RELOCATABLE is enabled, jump to relocate_kernel.
>>>>>
>>>>> This function will return the entry point of the relocated kernel
>> if
>>>>> copy/relocate is successful or the original entry point if not. The
>> stack
>>>>> pointer must then be pointed into the new image.
>>>>>
>>>>> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
>>>>> ---
>>>>>    arch/mips/kernel/head.S | 20 ++++++++++++++++++++
>>>>>    1 file changed, 20 insertions(+)
>>>>>
>>>>> diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
>>>>> index 4e4cc5b9a771..7dc043349d66 100644
>>>>> --- a/arch/mips/kernel/head.S
>>>>> +++ b/arch/mips/kernel/head.S
>>>>> @@ -132,7 +132,27 @@ not_found:
>>>>>        set_saved_sp    sp, t0, t1
>>>>>        PTR_SUBU    sp, 4 * SZREG        # init stack pointer
>>>>>
>>>>> +#ifdef CONFIG_RELOCATABLE
>>>>> +    /* Copy kernel and apply the relocations */
>>>>> +    jal        relocate_kernel
>>>>> +
>>>>> +    /* Repoint the sp into the new kernel image */
>>>>> +    PTR_LI        sp, _THREAD_SIZE - 32 - PT_SIZE
>>>>> +    PTR_ADDU    sp, $28
>>>>     Can't you account for it in the previous PTR_LI?
>>> During relocate_kernel, $28, pointer to the current thread,
>> Ah, it's a register! I thought it was an immediate. Nevermind then. :-)
> Although, it could still be reduced:
> PTR_ADDU sp, gp, _THREAD_SIZE - 32 - PT_SIZE
>
> Assuming the immediate is in range of signed 16bit.

The immediate would be 32552, so in range of signed 16bit, but that 
would be brittle if either _THREAD_SIZE or PT_SIZE were to change in 
future....

Thanks,
Matt
>
> Cheers
> James
>
>> [...]
>>
>> MBR, Sergei
>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 0/9] MIPS Relocatable kernel & KASLR
  2015-12-04  8:14     ` Matt Redfearn
  (?)
@ 2015-12-04 13:14     ` Joshua Kinard
  -1 siblings, 0 replies; 34+ messages in thread
From: Joshua Kinard @ 2015-12-04 13:14 UTC (permalink / raw)
  To: Matt Redfearn; +Cc: linux-mips


Hazards shouldn't be an issue on the R10000-series of processors, as they
handle all hazards in hardware. So I guess that leaves just finding a
replacement for 'synci' on those CPUs, and then maybe relocs could be used on
at least IP27, IP28, and IP30 systems (and IP32, if we ever solve the coherency
issues there w/ R10K).
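
A pre-R2 replacement would presumably be the classic cache-op loop rather
than synci, something like this sketch (illustrative only; 0x15/0x10 are the
usual Hit_Writeback_Inv_D / Hit_Invalidate_I op codes, and the 32-byte line
size is just an assumption):

	/* $a0 = start of relocated image, $a1 = end */
1:	cache	0x15, 0($a0)		# Hit_Writeback_Inv_D: write back + invalidate dcache line
	cache	0x10, 0($a0)		# Hit_Invalidate_I: invalidate icache line
	addiu	$a0, $a0, 32		# assumed cache line size
	sltu	$t0, $a0, $a1
	bnez	$t0, 1b
	 nop
	sync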

Not sure what benefit there would be, though.

--J



On 12/04/2015 03:14, Matt Redfearn wrote:
> Hi Joshua,
> The patch as it stands uses a couple of MIPS R2 additional instructions to deal
> with synchronizing icache. Firstly, the synci instruction to ensure that icache
> is in sync with the dcache after the relocated kernel has been written, and the
> jr.hb instruction to resolve any hazards created by writing the new kernel
> before jumping to it.
> 
> Thanks,
> Matt
> 
> On 03/12/15 22:23, Joshua Kinard wrote:
>> On 12/03/2015 05:08, Matt Redfearn wrote:
>>> This series adds the ability for the MIPS kernel to relocate itself at
>>> runtime, optionally to an address determined at random each boot. This
>>> series is based on v4.3 and has been tested on the Malta platform.
>> [snip]
>>
>>> * Relocation is currently supported on R2 of the MIPS architecture,
>>>    32bit and 64bit.
>> Out of curiosity, why is this capability restricted to MIPS R2 and higher?
>> IRIX kernels and the 'sash' tool were both relocatable on the older SGI
>> platforms.  Does the feature, as implemented, rely on R2-specific
>> instructions/capabilities, or is it restricted only due to lack of testing on pre-R2 hardware?
>>
>> --J
>>
> 
> 


-- 
Joshua Kinard
Gentoo/MIPS
kumba@gentoo.org
6144R/F5C6C943 2015-04-27
177C 1972 1FB8 F254 BAD0 3E72 5C63 F4E3 F5C6 C943

"The past tempts us, the present confuses us, the future frightens us.  And our
lives slip away, moment by moment, lost in that vast, terrible in-between."

--Emperor Turhan, Centauri Republic

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
  2015-12-04  8:20             ` Matt Redfearn
  (?)
@ 2015-12-04 15:37             ` Ralf Baechle
  2015-12-04 15:45                 ` Matt Redfearn
  -1 siblings, 1 reply; 34+ messages in thread
From: Ralf Baechle @ 2015-12-04 15:37 UTC (permalink / raw)
  To: Matt Redfearn; +Cc: James Hogan, Sergei Shtylyov, linux-mips

On Fri, Dec 04, 2015 at 08:20:05AM +0000, Matt Redfearn wrote:

> >Although, it could still be reduced:
> >PTR_ADDU sp, gp, _THREAD_SIZE - 32 - PT_SIZE
> >
> >Assuming the immediate is in range of signed 16bit.
> 
> The immediate would be 32552, so in range of signed 16bit, but that would be
> brittle if either _THREAD_SIZE or PT_SIZE were to change in future....

The maximum value possible for _THREAD_SIZE would be with 64k pages for
which the expression will exceed the signed 16 bit range.  The good news
is that GAS is smart enough to cope with the situation by suitably
expanding the instruction as a macro, unless ".set noat" or ".set nomacro"
mode is enabled:

$ cat s.s 
	addu	$sp, $gp, 65536
[ralf@h7 tmp]$ mips-linux-as -O2 -als -o s.o s.s
GAS LISTING s.s 			page 1


   1 0000 3C010001 		addu	$sp, $gp, 65536
   1      0381E821 
   1      00000000 
   1      00000000 

GAS LISTING s.s 			page 2


NO DEFINED SYMBOLS

NO UNDEFINED SYMBOLS
[ralf@h7 tmp]$ mips-linux-objdump -d s.o 
s.o:     file format elf32-tradbigmips


Disassembly of section .text:

00000000 <.text>:
   0:	3c010001 	lui	at,0x1
   4:	0381e821 	addu	sp,gp,at
	...

And of course that macro should better not be expanded in a branch
delay slot ...
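
For example, a made-up fragment like this (not from the patch) shows why:
under ".set noreorder" only the first instruction of the expansion sits in
the delay slot, the rest follows the branch and so is skipped whenever the
branch is taken, and GAS also warns about a macro being expanded into
multiple instructions in a branch delay slot.

	.set	noreorder
	bnez	$t0, 1f
	 addu	$sp, $gp, 65536		# expands to lui $at / addu, so this breaks
1: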

  Ralf

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y
@ 2015-12-04 15:45                 ` Matt Redfearn
  0 siblings, 0 replies; 34+ messages in thread
From: Matt Redfearn @ 2015-12-04 15:45 UTC (permalink / raw)
  To: Ralf Baechle; +Cc: James Hogan, Sergei Shtylyov, linux-mips

On 04/12/15 15:37, Ralf Baechle wrote:
> On Fri, Dec 04, 2015 at 08:20:05AM +0000, Matt Redfearn wrote:
>
>>> Although, it could still be reduced:
>>> PTR_ADDU sp, gp, _THREAD_SIZE - 32 - PT_SIZE
>>>
>>> Assuming the immediate is in range of signed 16bit.
>> The immediate would be 32552, so in range of signed 16bit, but that would be
>> brittle if either _THREAD_SIZE or PT_SIZE were to change in future....
> The maximum value possible for _THREAD_SIZE would be with 64k pages for
> which the expression will exceed the signed 16 bit range.  The good news
> is that GAS is smart enough to cope with the situation by suitably
> expanding the instruction as a macro, unless ".set noat" or ".set nomacro"
> mode is enabled:
>
> $ cat s.s
> 	addu	$sp, $gp, 65536
> [ralf@h7 tmp]$ mips-linux-as -O2 -als -o s.o s.s
> GAS LISTING s.s 			page 1
>
>
>     1 0000 3C010001 		addu	$sp, $gp, 65536
>     1      0381E821
>     1      00000000
>     1      00000000
>
> GAS LISTING s.s 			page 2
>
>
> NO DEFINED SYMBOLS
>
> NO UNDEFINED SYMBOLS
> [ralf@h7 tmp]$ mips-linux-objdump -d s.o
> s.o:     file format elf32-tradbigmips
>
>
> Disassembly of section .text:
>
> 00000000 <.text>:
>     0:	3c010001 	lui	at,0x1
>     4:	0381e821 	addu	sp,gp,at
> 	...
>
> And of course that macro should better not be expanded in a branch
> delay slot ...
>
>    Ralf
Cool, then it would be neater to do this (and perhaps also the other instance
of this, for setting up the original kernel stack pointer). Would you prefer
to see that in this series?

Thanks,
Matt

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2015-12-04 15:45 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-03 10:08 [PATCH 0/9] MIPS Relocatable kernel & KASLR Matt Redfearn
2015-12-03 10:08 ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 1/9] MIPS: tools: Add relocs tool Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 2/9] MIPS: tools: Build " Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 3/9] MIPS: Reserve space for relocation table Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 4/9] MIPS: Generate relocation table when CONFIG_RELOCATABLE Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 5/9] MIPS: Kernel: Add relocate.c Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 6/9] MIPS: Call relocate_kernel if CONFIG_RELOCATABLE=y Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 14:24   ` Sergei Shtylyov
2015-12-03 14:53     ` Matt Redfearn
2015-12-03 14:53       ` Matt Redfearn
2015-12-03 17:46       ` Sergei Shtylyov
2015-12-03 18:54         ` James Hogan
2015-12-04  8:20           ` Matt Redfearn
2015-12-04  8:20             ` Matt Redfearn
2015-12-04 15:37             ` Ralf Baechle
2015-12-04 15:45               ` Matt Redfearn
2015-12-04 15:45                 ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 7/9] MIPS: bootmem: When relocatable, free memory below kernel Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 8/9] MIPS: Add CONFIG_RELOCATABLE Kconfig option Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 10:08 ` [PATCH 9/9] MIPS: Kernel: Implement kASLR using CONFIG_RELOCATABLE Matt Redfearn
2015-12-03 10:08   ` Matt Redfearn
2015-12-03 22:23 ` [PATCH 0/9] MIPS Relocatable kernel & KASLR Joshua Kinard
2015-12-04  8:14   ` Matt Redfearn
2015-12-04  8:14     ` Matt Redfearn
2015-12-04 13:14     ` Joshua Kinard
