* [PATCH v3 00/17] LPAE fixes and extensions for Keystone
@ 2012-09-11 17:38 Cyril Chemparathy
2012-09-11 17:38 ` [PATCH v3 01/17] ARM: add mechanism for late code patching Cyril Chemparathy
` (16 more replies)
0 siblings, 17 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:38 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy
This series is a refreshed subset of the Keystone platform patches posted
earlier (see [1] and [2]). In this series, we've dropped the Keystone
sub-architecture patches, which have remained largely unchanged from before,
and were being provided only for reference. The focus with this series is
to get these LPAE-related changes queued up.
These patches have been rebased and verified against linux-next-20120910.
These patches are also available in git:
git://git.kernel.org/pub/scm/linux/kernel/git/cchemparathy/linux-keystone.git upstream/keystone-lpae-v3
[1] http://comments.gmane.org/gmane.linux.kernel/1341497
[2] http://comments.gmane.org/gmane.linux.kernel/1332069
Series changelog:
(01/22) ARM: add mechanism for late code patching
(v3) ability to patch multiple sequential instructions with the IMM8 patch
type
(v3) error handling at module patch time
(v3) reuse __patch_text() from kprobes code
(v2) pulled runtime patching code into separate source files
(v2) reordered arguments to patch macros for consistency with assembly
"Rd, Rt, imm" ordering
(v2) added support for mov immediate patching
(v2) cache flush patched instructions instead of entire kernel code
(v2) pack patch table to reduce table volume
(v2) add to module vermagic to reflect abi change
(v2) misc. cleanups in naming and structure
(02/22) ARM: add self test for runtime patch mechanism
(v3) added tests for both even and odd shifts of immediate values
(v2) added init-time tests to verify instruction encoding
(03/22) ARM: use late patch framework for phys-virt patching
(v3) fixed commit description for unconditional init of __pv_* symbols
(v2) move __pv_offset and __pv_phys_offset to C code
(v2) restore conditional init of __pv_offset and __pv_phys_offset
(04/22) ARM: LPAE: use phys_addr_t on virt <--> phys conversion
(v3) unchanged from v2
(v2) fix patched __phys_to_virt() to use 32-bit operand
(v2) convert non-patch __phys_to_virt and __virt_to_phys to inlines to retain
type checking
(05/22) ARM: LPAE: support 64-bit virt_to_phys patching
(v3) added explicit patch stub for 64-bit to clean up compiler generated
code, both 64-bit and 32-bit cases generate optimal code with this
(v2) use phys_addr_t instead of split high/low phys_offsets
(v2) use mov immediate instead of add to zero when patching in high order
physical address bits
(v2) fix __pv_phys_offset handling for big-endian
(v2) remove set_phys_offset()
(06/22) ARM: LPAE: use signed arithmetic for mask definitions
(07/22) ARM: LPAE: use phys_addr_t in alloc_init_pud()
(08/22) ARM: LPAE: use phys_addr_t in free_memmap()
(v3) unchanged from v2
(v2) unchanged from v1
(09/22) ARM: LPAE: use phys_addr_t for initrd location and size
(v3) unchanged from v2
(v2) revert to unsigned long for initrd size
(10/22) ARM: LPAE: use phys_addr_t in switch_mm()
(v3) remove unnecessary handling for !LPAE in proc-v7-3level
(v2) use phys_addr_t instead of u64 in switch_mm()
(v2) revert on changes to v6 and v7-2level
(v2) fix register mapping for big-endian in v7-3level
(11/22) ARM: LPAE: use 64-bit accessors for TTBR registers
(v3) remove unnecessary condition code clobber
(v2) restore comment in cpu_set_reserved_ttbr0()
(12/22) ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem
(13/22) ARM: LPAE: factor out T1SZ and TTBR1 computations
(v3) unchanged from v2
(v2) unchanged from v1
(14/22) ARM: LPAE: accommodate >32-bit addresses for page table base
(v3) unchanged from v2
(v2) apply arch_pgd_shift only on lpae
(v2) move arch_pgd_shift definition to asm/memory.h
(v2) revert on changes to non-lpae procs
(v2) add check to ensure that the pgd physical address is aligned at an
ARCH_PGD_SHIFT boundary
(15/22) ARM: mm: use physical addresses in highmem sanity checks
(16/22) ARM: mm: cleanup checks for membank overlap with vmalloc area
(17/22) ARM: mm: clean up membank size limit checks
(v3) unchanged from v2
(v2) unchanged from v1
Cyril Chemparathy (14):
ARM: add mechanism for late code patching
ARM: add self test for runtime patch mechanism
ARM: use late patch framework for phys-virt patching
ARM: LPAE: use phys_addr_t on virt <--> phys conversion
ARM: LPAE: support 64-bit virt_to_phys patching
ARM: LPAE: use signed arithmetic for mask definitions
ARM: LPAE: use phys_addr_t in switch_mm()
ARM: LPAE: use 64-bit accessors for TTBR registers
ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem
ARM: LPAE: factor out T1SZ and TTBR1 computations
ARM: LPAE: accommodate >32-bit addresses for page table base
ARM: mm: use physical addresses in highmem sanity checks
ARM: mm: cleanup checks for membank overlap with vmalloc area
ARM: mm: clean up membank size limit checks
Vitaly Andrianov (3):
ARM: LPAE: use phys_addr_t in alloc_init_pud()
ARM: LPAE: use phys_addr_t in free_memmap()
ARM: LPAE: use phys_addr_t for initrd location and size
arch/arm/Kconfig | 16 ++
arch/arm/include/asm/memory.h | 101 +++++++---
arch/arm/include/asm/module.h | 7 +
arch/arm/include/asm/page.h | 2 +-
arch/arm/include/asm/pgtable-3level-hwdef.h | 10 +
arch/arm/include/asm/pgtable-3level.h | 6 +-
arch/arm/include/asm/proc-fns.h | 28 ++-
arch/arm/include/asm/runtime-patch.h | 208 +++++++++++++++++++++
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/armksyms.c | 4 -
arch/arm/kernel/head.S | 107 +++--------
arch/arm/kernel/module.c | 14 +-
arch/arm/kernel/runtime-patch.c | 268 +++++++++++++++++++++++++++
arch/arm/kernel/setup.c | 15 ++
arch/arm/kernel/smp.c | 11 +-
arch/arm/kernel/vmlinux.lds.S | 13 +-
arch/arm/mm/context.c | 9 +-
arch/arm/mm/init.c | 19 +-
arch/arm/mm/mmu.c | 49 ++---
arch/arm/mm/proc-v7-3level.S | 41 ++--
20 files changed, 728 insertions(+), 201 deletions(-)
create mode 100644 arch/arm/include/asm/runtime-patch.h
create mode 100644 arch/arm/kernel/runtime-patch.c
--
1.7.9.5
* [PATCH v3 01/17] ARM: add mechanism for late code patching
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
@ 2012-09-11 17:38 ` Cyril Chemparathy
2012-09-21 18:09 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 02/17] ARM: add self test for runtime patch mechanism Cyril Chemparathy
` (15 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:38 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy
The original phys_to_virt/virt_to_phys patching implementation relied on early
patching prior to MMU initialization. On PAE systems running from physical
addresses above 4G, this would have entailed an additional round of patching
after switching over to the high address space.
The approach implemented here conceptually extends the original PHYS_OFFSET
patching implementation with the introduction of "early" patch stubs. Early
patch code is required to be functional out of the box, even before the patch
is applied. This is implemented by inserting functional (but inefficient)
load code into the .runtime.patch.code init section. Having functional code
out of the box then allows us to defer the init time patch application until
later in the init sequence.
In addition to fitting better with our need for physical address-space
switch-over, this implementation should be somewhat more extensible by virtue
of its more readable (and hackable) C implementation. This should prove
useful for other similar init time specialization needs, especially in light
of our multi-platform kernel initiative.
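To illustrate the intended usage (a sketch, not part of the patch itself), a
helper converted to the early stub form ends up looking roughly like the code
below; patch 03 in this series rewrites __virt_to_phys() into essentially this
shape:
    static inline unsigned long example_virt_to_phys(unsigned long x)
    {
            unsigned long t;
            /* emits the patchable "add t, x, #imm8" stub plus an out-of-line
             * fallback that loads __pv_offset from memory, so the helper is
             * usable even before the patch has been applied */
            early_patch_imm8("add", t, x, __pv_offset, 0);
            return t;
    }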
This code has been boot tested in both ARM and Thumb-2 modes on an ARMv7
(Cortex-A8) device.
Note: the obtuse use of stringified symbols in patch_stub() and
early_patch_stub() is intentional. Theoretically this should have been
accomplished with formal operands passed into the asm block, but this requires
the use of the 'c' modifier for instantiating the long (e.g. .long %c0).
However, the 'c' modifier has been found to ICE certain versions of GCC, and
therefore we resort to stringified symbols here.
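For reference (illustrative fragments only, not taken from the patch), the two
alternatives look roughly as follows; the first is the formal-operand form
that was found to ICE some GCC versions:
    /* formal operand: needs the 'c' modifier to drop the '#' punctuation */
    asm volatile(".long %c0" : : "i" (&__pv_offset));
    /* stringified symbol, as used by patch_stub()/early_patch_stub() */
    asm volatile(".long " __stringify(__pv_offset));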
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/Kconfig | 3 +
arch/arm/include/asm/module.h | 7 ++
arch/arm/include/asm/runtime-patch.h | 208 ++++++++++++++++++++++++++++++++++
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/module.c | 9 +-
arch/arm/kernel/runtime-patch.c | 193 +++++++++++++++++++++++++++++++
arch/arm/kernel/setup.c | 3 +
arch/arm/kernel/vmlinux.lds.S | 10 ++
8 files changed, 433 insertions(+), 1 deletion(-)
create mode 100644 arch/arm/include/asm/runtime-patch.h
create mode 100644 arch/arm/kernel/runtime-patch.c
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 63504f1..36de4ea 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -62,6 +62,9 @@ config ARM
config ARM_HAS_SG_CHAIN
bool
+config ARM_RUNTIME_PATCH
+ bool
+
config NEED_SG_DMA_LENGTH
bool
diff --git a/arch/arm/include/asm/module.h b/arch/arm/include/asm/module.h
index 0d3a28d..c4ebe52 100644
--- a/arch/arm/include/asm/module.h
+++ b/arch/arm/include/asm/module.h
@@ -39,9 +39,16 @@ struct mod_arch_specific {
#define MODULE_ARCH_VERMAGIC_ARMTHUMB ""
#endif
+#ifdef CONFIG_ARM_RUNTIME_PATCH
+#define MODULE_ARCH_VERMAGIC_RT_PATCH "rt-patch "
+#else
+#define MODULE_ARCH_VERMAGIC_RT_PATCH ""
+#endif
+
#define MODULE_ARCH_VERMAGIC \
MODULE_ARCH_VERMAGIC_ARMVSN \
MODULE_ARCH_VERMAGIC_ARMTHUMB \
+ MODULE_ARCH_VERMAGIC_RT_PATCH \
MODULE_ARCH_VERMAGIC_P2V
#endif /* _ASM_ARM_MODULE_H */
diff --git a/arch/arm/include/asm/runtime-patch.h b/arch/arm/include/asm/runtime-patch.h
new file mode 100644
index 0000000..366444d
--- /dev/null
+++ b/arch/arm/include/asm/runtime-patch.h
@@ -0,0 +1,208 @@
+/*
+ * arch/arm/include/asm/runtime-patch.h
+ * Note: this file should not be included by non-asm/.h files
+ *
+ * Copyright 2012 Texas Instruments, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef __ASM_ARM_RUNTIME_PATCH_H
+#define __ASM_ARM_RUNTIME_PATCH_H
+
+#include <linux/stringify.h>
+
+#ifndef __ASSEMBLY__
+
+#ifdef CONFIG_ARM_RUNTIME_PATCH
+
+struct patch_info {
+ void *insn;
+ u16 type;
+ u8 insn_size;
+ u8 data_size;
+ u32 data[0];
+};
+
+#define PATCH_IMM8 0x0001
+struct patch_info_imm8 {
+ u32 *imm;
+ u32 insn;
+};
+
+#define patch_next(p) ((void *)(p) + sizeof(*(p)) + (p)->data_size)
+#define patch_data(p) ((void *)&(p)->data[0])
+
+#define patch_stub(type, code, patch_data, ...) \
+ __asm__("@ patch stub\n" \
+ "1:\n" \
+ code \
+ "2:\n" \
+ " .pushsection .runtime.patch.table, \"a\"\n" \
+ "3:\n" \
+ " .word 1b\n" \
+ " .hword (" __stringify(type) ")\n" \
+ " .byte (2b-1b)\n" \
+ " .byte (5f-4f)\n" \
+ "4:\n" \
+ patch_data \
+ " .align\n" \
+ "5:\n" \
+ " .popsection\n" \
+ __VA_ARGS__)
+
+#define early_patch_stub(type, code, pad, patch_data, ...) \
+ __asm__("@ patch stub\n" \
+ "1:\n" \
+ " b 6f\n" \
+ " .fill " __stringify(pad) ", 1, 0\n" \
+ "2:\n" \
+ " .pushsection .runtime.patch.table, \"a\"\n" \
+ "3:\n" \
+ " .word 1b\n" \
+ " .hword (" __stringify(type) ")\n" \
+ " .byte (2b-1b)\n" \
+ " .byte (5f-4f)\n" \
+ "4:\n" \
+ patch_data \
+ " .align\n" \
+ "5:\n" \
+ " .popsection\n" \
+ " .pushsection .runtime.patch.code, \"ax\"\n" \
+ "6:\n" \
+ code \
+ " b 2b\n" \
+ " .popsection\n" \
+ __VA_ARGS__)
+
+/* constant used to force encoding */
+#define __IMM8 (0x81 << 24)
+
+/*
+ * patch_imm8() - init-time specialized binary operation (imm8 operand)
+ * This effectively does: to = from "insn" sym,
+ * where the value of sym is fixed at init-time, and is patched
+ * in as an immediate operand. This value must be
+ * representable as an 8-bit quantity with an optional
+ * rotation.
+ *
+ * The stub code produced by this variant is non-functional
+ * prior to patching. Use early_patch_imm8() if you need the
+ * code to be functional early on in the init sequence.
+ */
+#define patch_imm8(_insn, _to, _from, _sym, _ofs) \
+ patch_stub( \
+ /* type */ \
+ PATCH_IMM8, \
+ /* code */ \
+ _insn " %[to], %[from], %[imm]\n", \
+ /* patch_data */ \
+ ".long " __stringify(_sym + _ofs) "\n" \
+ _insn " %[to], %[from], %[imm]\n", \
+ /* operands */ \
+ : [to] "=r" (_to) \
+ : [from] "r" (_from), \
+ [imm] "I" (__IMM8), \
+ "i" (&(_sym)) \
+ : "cc")
+
+/*
+ * patch_imm8_mov() - same as patch_imm8(), but for mov/mvn instructions
+ */
+#define patch_imm8_mov(_insn, _to, _sym, _ofs) \
+ patch_stub( \
+ /* type */ \
+ PATCH_IMM8, \
+ /* code */ \
+ _insn " %[to], %[imm]\n", \
+ /* patch_data */ \
+ ".long " __stringify(_sym + _ofs) "\n" \
+ _insn " %[to], %[imm]\n", \
+ /* operands */ \
+ : [to] "=r" (_to) \
+ : [imm] "I" (__IMM8), \
+ "i" (&(_sym)) \
+ : "cc")
+
+/*
+ * early_patch_imm8() - early functional variant of patch_imm8() above. The
+ * same restrictions on the constant apply here. This
+ * version emits workable (albeit inefficient) code at
+ * compile-time, and therefore functions even prior to
+ * patch application.
+ */
+#define early_patch_imm8(_insn, _to, _from, _sym, _ofs) \
+do { \
+ unsigned long __tmp; \
+ early_patch_stub( \
+ /* type */ \
+ PATCH_IMM8, \
+ /* code */ \
+ "ldr %[tmp], =" __stringify(_sym + _ofs) "\n"\
+ "ldr %[tmp], [%[tmp]]\n" \
+ _insn " %[to], %[from], %[tmp]\n", \
+ /* pad */ \
+ 0, \
+ /* patch_data */ \
+ ".long " __stringify(_sym + _ofs) "\n" \
+ _insn " %[to], %[from], %[imm]\n", \
+ /* operands */ \
+ : [to] "=r" (_to), \
+ [tmp] "=&r" (__tmp) \
+ : [from] "r" (_from), \
+ [imm] "I" (__IMM8), \
+ "i" (&(_sym)) \
+ : "cc"); \
+} while (0)
+
+#define early_patch_imm8_mov(_insn, _to, _sym, _ofs) \
+do { \
+ unsigned long __tmp; \
+ early_patch_stub( \
+ /* type */ \
+ PATCH_IMM8, \
+ /* code */ \
+ "ldr %[tmp], =" __stringify(_sym + _ofs) "\n"\
+ "ldr %[tmp], [%[tmp]]\n" \
+ _insn " %[to], %[tmp]\n", \
+ /* pad */ \
+ 0, \
+ /* patch_data */ \
+ ".long " __stringify(_sym + _ofs) "\n" \
+ _insn " %[to], %[imm]\n", \
+ /* operands */ \
+ : [to] "=r" (_to), \
+ [tmp] "=&r" (__tmp) \
+ : [imm] "I" (__IMM8), \
+ "i" (&(_sym)) \
+ : "cc"); \
+} while (0)
+
+int runtime_patch(const void *table, unsigned size);
+void runtime_patch_kernel(void);
+
+#else
+
+static inline int runtime_patch(const void *table, unsigned size)
+{
+ return 0;
+}
+
+static inline void runtime_patch_kernel(void)
+{
+}
+
+#endif /* CONFIG_ARM_RUNTIME_PATCH */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_ARM_RUNTIME_PATCH_H */
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 5dfef9d..c9d5a30 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -80,5 +80,6 @@ endif
head-y := head$(MMUEXT).o
obj-$(CONFIG_DEBUG_LL) += debug.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_ARM_RUNTIME_PATCH) += runtime-patch.o patch.o
extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index 1e9be5d..10a2922 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -24,6 +24,7 @@
#include <asm/sections.h>
#include <asm/smp_plat.h>
#include <asm/unwind.h>
+#include <asm/runtime-patch.h>
#ifdef CONFIG_XIP_KERNEL
/*
@@ -276,7 +277,7 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
const char *secstrs = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
const Elf_Shdr *sechdrs_end = sechdrs + hdr->e_shnum;
struct mod_unwind_map maps[ARM_SEC_MAX];
- int i;
+ int i, err;
memset(maps, 0, sizeof(maps));
@@ -321,6 +322,12 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
if (s)
fixup_pv_table((void *)s->sh_addr, s->sh_size);
#endif
+ s = find_mod_section(hdr, sechdrs, ".runtime.patch.table");
+ if (s) {
+ err = runtime_patch((void *)s->sh_addr, s->sh_size);
+ if (err)
+ return err;
+ }
s = find_mod_section(hdr, sechdrs, ".alt.smp.init");
if (s && !is_smp())
#ifdef CONFIG_SMP_ON_UP
diff --git a/arch/arm/kernel/runtime-patch.c b/arch/arm/kernel/runtime-patch.c
new file mode 100644
index 0000000..28a6367
--- /dev/null
+++ b/arch/arm/kernel/runtime-patch.c
@@ -0,0 +1,193 @@
+/*
+ * arch/arm/kernel/runtime-patch.c
+ *
+ * Copyright 2012 Texas Instruments, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kernel.h>
+#include <linux/sched.h>
+
+#include <asm/opcodes.h>
+#include <asm/cacheflush.h>
+#include <asm/runtime-patch.h>
+
+#include "patch.h"
+
+static inline void flush_icache_insn(void *insn_ptr, int bytes)
+{
+ unsigned long insn_addr = (unsigned long)insn_ptr;
+ flush_icache_range(insn_addr, insn_addr + bytes - 1);
+}
+
+#ifdef CONFIG_THUMB2_KERNEL
+
+static int do_patch_imm8(u32 insn, u32 imm, u32 *ninsn)
+{
+ u32 op, rot, val;
+ const u32 supported_ops = (BIT(0) | /* and */
+ BIT(1) | /* bic */
+ BIT(2) | /* orr/mov */
+ BIT(3) | /* orn/mvn */
+ BIT(4) | /* eor */
+ BIT(8) | /* add */
+ BIT(10) | /* adc */
+ BIT(11) | /* sbc */
+ BIT(12) | /* sub */
+ BIT(13)); /* rsb */
+
+ insn = __mem_to_opcode_thumb32(insn);
+
+ if (!__opcode_is_thumb32(insn)) {
+ pr_err("patch: invalid thumb2 insn %08x\n", insn);
+ return -EINVAL;
+ }
+
+ /* allow only data processing (immediate)
+ * 1111 0x0x xxx0 xxxx 0xxx xxxx xxxx xxxx */
+ if ((insn & 0xfa008000) != 0xf0000000) {
+ pr_err("patch: unknown insn %08x\n", insn);
+ return -EINVAL;
+ }
+
+ /* extract op code */
+ op = (insn >> 21) & 0xf;
+
+ /* disallow unsupported opcodes */
+ if ((supported_ops & BIT(op)) == 0) {
+ pr_err("patch: unsupported opcode %x\n", op);
+ return -EINVAL;
+ }
+
+ if (imm <= 0xff) {
+ rot = 0;
+ val = imm;
+ } else {
+ rot = 32 - fls(imm); /* clz */
+ if (imm & ~(0xff000000 >> rot)) {
+ pr_err("patch: constant overflow %08x\n", imm);
+ return -EINVAL;
+ }
+ val = (imm >> (24 - rot)) & 0x7f;
+ rot += 8; /* encoded i:imm3:a */
+
+ /* pack least-sig rot bit into most-sig val bit */
+ val |= (rot & 1) << 7;
+ rot >>= 1;
+ }
+
+ *ninsn = insn & ~(BIT(26) | 0x7 << 12 | 0xff);
+ *ninsn |= (rot >> 3) << 26; /* field "i" */
+ *ninsn |= (rot & 0x7) << 12; /* field "imm3" */
+ *ninsn |= val;
+
+ return 0;
+}
+
+#else
+
+static int do_patch_imm8(u32 insn, u32 imm, u32 *ninsn)
+{
+ u32 rot, val, op;
+
+ insn = __mem_to_opcode_arm(insn);
+
+ /* disallow special unconditional instructions
+ * 1111 xxxx xxxx xxxx xxxx xxxx xxxx xxxx */
+ if ((insn >> 24) == 0xf) {
+ pr_err("patch: unconditional insn %08x\n", insn);
+ return -EINVAL;
+ }
+
+ /* allow only data processing (immediate)
+ * xxxx 001x xxxx xxxx xxxx xxxx xxxx xxxx */
+ if (((insn >> 25) & 0x3) != 1) {
+ pr_err("patch: unknown insn %08x\n", insn);
+ return -EINVAL;
+ }
+
+ /* extract op code */
+ op = (insn >> 20) & 0x1f;
+
+ /* disallow unsupported 10xxx op codes */
+ if (((op >> 3) & 0x3) == 2) {
+ pr_err("patch: unsupported opcode %08x\n", insn);
+ return -EINVAL;
+ }
+
+ rot = imm ? __ffs(imm) / 2 : 0;
+ val = imm >> (rot * 2);
+ rot = (-rot) & 0xf;
+
+ /* does this fit in 8-bit? */
+ if (val > 0xff) {
+ pr_err("patch: constant overflow %08x\n", imm);
+ return -EINVAL;
+ }
+
+ /* patch in new immediate and rotation */
+ *ninsn = (insn & ~0xfff) | (rot << 8) | val;
+
+ return 0;
+}
+
+#endif /* CONFIG_THUMB2_KERNEL */
+
+static int apply_patch_imm8(const struct patch_info *p)
+{
+ u32 *insn_ptr = p->insn, ninsn;
+ int count = p->insn_size / sizeof(u32);
+ const struct patch_info_imm8 *info;
+ int err;
+
+
+ if (count <= 0 || p->data_size != count * sizeof(*info)) {
+ pr_err("patch: bad patch, insn size %d, data size %d\n",
+ p->insn_size, p->data_size);
+ return -EINVAL;
+ }
+
+ for (info = patch_data(p); count; count--, info++, insn_ptr++) {
+ err = do_patch_imm8(info->insn, *info->imm, &ninsn);
+ if (err)
+ return err;
+ __patch_text(insn_ptr, ninsn);
+ }
+
+
+ return 0;
+}
+
+int runtime_patch(const void *table, unsigned size)
+{
+ const struct patch_info *p = table, *end = (table + size);
+
+ for (p = table; p < end; p = patch_next(p)) {
+ int err = -EINVAL;
+
+ if (p->type == PATCH_IMM8)
+ err = apply_patch_imm8(p);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+void __init runtime_patch_kernel(void)
+{
+ extern unsigned __runtime_patch_table_begin, __runtime_patch_table_end;
+ const void *start = &__runtime_patch_table_begin;
+ const void *end = &__runtime_patch_table_end;
+
+ BUG_ON(runtime_patch(start, end - start));
+}
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 0785472..6fa7dbf 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -53,6 +53,7 @@
#include <asm/traps.h>
#include <asm/unwind.h>
#include <asm/memblock.h>
+#include <asm/runtime-patch.h>
#include "atags.h"
#include "tcm.h"
@@ -764,6 +765,8 @@ void __init setup_arch(char **cmdline_p)
if (mdesc->init_early)
mdesc->init_early();
+
+ runtime_patch_kernel();
}
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 36ff15b..ea35ca0 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -167,6 +167,16 @@ SECTIONS
*(.pv_table)
__pv_table_end = .;
}
+ .init.runtime_patch_table : {
+ __runtime_patch_table_begin = .;
+ *(.runtime.patch.table)
+ __runtime_patch_table_end = .;
+ }
+ .init.runtime_patch_code : {
+ __runtime_patch_code_begin = .;
+ *(.runtime.patch.code)
+ __runtime_patch_code_end = .;
+ }
.init.data : {
#ifndef CONFIG_XIP_KERNEL
INIT_DATA
--
1.7.9.5
* [PATCH v3 02/17] ARM: add self test for runtime patch mechanism
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
2012-09-11 17:38 ` [PATCH v3 01/17] ARM: add mechanism for late code patching Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-21 17:40 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching Cyril Chemparathy
` (14 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy
This patch adds basic sanity tests to ensure that the instruction patching
results in valid instruction encodings. This is done by verifying the output
of the patch process against a vector of assembler-generated instructions at
init time.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
---
arch/arm/Kconfig | 12 +++++++
arch/arm/kernel/runtime-patch.c | 75 +++++++++++++++++++++++++++++++++++++++
2 files changed, 87 insertions(+)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 36de4ea..bfcd29d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -207,6 +207,18 @@ config ARM_PATCH_PHYS_VIRT
this feature (eg, building a kernel for a single machine) and
you need to shrink the kernel to the minimal size.
+config ARM_RUNTIME_PATCH_TEST
+ bool "Self test runtime patching mechanism" if ARM_RUNTIME_PATCH
+ default y
+ help
+ Select this to enable init time self checking for the runtime kernel
+ patching mechanism. This enables an ISA specific set of tests that
+ ensure that the instructions generated by the patch process are
+ consistent with those generated by the assembler at compile time.
+
+ Only disable this option if you need to shrink the kernel to the
+ minimal size.
+
config NEED_MACH_IO_H
bool
help
diff --git a/arch/arm/kernel/runtime-patch.c b/arch/arm/kernel/runtime-patch.c
index 28a6367..0be9ef3 100644
--- a/arch/arm/kernel/runtime-patch.c
+++ b/arch/arm/kernel/runtime-patch.c
@@ -168,6 +168,78 @@ static int apply_patch_imm8(const struct patch_info *p)
return 0;
}
+#ifdef CONFIG_ARM_RUNTIME_PATCH_TEST
+
+struct patch_test_imm8 {
+ u16 imm;
+ u16 shift;
+ u32 insn;
+};
+
+static void __init __used __naked __patch_test_code_imm8(void)
+{
+ __asm__ __volatile__ (
+
+ /* a single test case */
+ " .macro test_one, imm, sft\n"
+ " .hword \\imm\n"
+ " .hword \\sft\n"
+ " add r1, r2, #(\\imm << \\sft)\n"
+ " .endm\n"
+
+ /* a sequence of tests at 'inc' increments of shift */
+ " .macro test_seq, imm, sft, max, inc\n"
+ " test_one \\imm, \\sft\n"
+ " .if \\sft < \\max\n"
+ " test_seq \\imm, (\\sft + \\inc), \\max, \\inc\n"
+ " .endif\n"
+ " .endm\n"
+
+ /* an empty record to mark the end */
+ " .macro test_end\n"
+ " .hword 0, 0\n"
+ " .word 0\n"
+ " .endm\n"
+
+ /* finally generate the test sequences */
+ " test_seq 0x41, 0, 24, 1\n"
+ " test_seq 0x81, 0, 24, 2\n"
+ " test_end\n"
+ : : : "r1", "r2", "cc");
+}
+
+static void __init test_patch_imm8(void)
+{
+ u32 test_code_addr = (u32)(&__patch_test_code_imm8);
+ struct patch_test_imm8 *test = (void *)(test_code_addr & ~1);
+ u32 ninsn, insn, patched_insn;
+ int i, err;
+
+ insn = test[0].insn;
+ for (i = 0; test[i].insn; i++) {
+ err = do_patch_imm8(insn, test[i].imm << test[i].shift, &ninsn);
+ __patch_text(&patched_insn, ninsn);
+
+ if (err) {
+ pr_err("rtpatch imm8: failed at imm %x, shift %d\n",
+ test[i].imm, test[i].shift);
+ } else if (patched_insn != test[i].insn) {
+ pr_err("rtpatch imm8: failed, need %x got %x\n",
+ test[i].insn, patched_insn);
+ } else {
+ pr_debug("rtpatch imm8: imm %x, shift %d, %x -> %x\n",
+ test[i].imm, test[i].shift, insn,
+ patched_insn);
+ }
+ }
+}
+
+static void __init runtime_patch_test(void)
+{
+ test_patch_imm8();
+}
+#endif
+
int runtime_patch(const void *table, unsigned size)
{
const struct patch_info *p = table, *end = (table + size);
@@ -189,5 +261,8 @@ void __init runtime_patch_kernel(void)
const void *start = &__runtime_patch_table_begin;
const void *end = &__runtime_patch_table_end;
+#ifdef CONFIG_ARM_RUNTIME_PATCH_TEST
+ runtime_patch_test();
+#endif
BUG_ON(runtime_patch(start, end - start));
}
--
1.7.9.5
* [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
2012-09-11 17:38 ` [PATCH v3 01/17] ARM: add mechanism for late code patching Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 02/17] ARM: add self test for runtime patch mechanism Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-21 18:15 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 04/17] ARM: LPAE: use phys_addr_t on virt <--> phys conversion Cyril Chemparathy
` (13 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy
This patch replaces the original physical offset patching implementation
with one that uses the newly added patching framework.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
---
arch/arm/Kconfig | 1 +
arch/arm/include/asm/memory.h | 26 +++--------
arch/arm/kernel/armksyms.c | 4 --
arch/arm/kernel/head.S | 95 +++++++----------------------------------
arch/arm/kernel/module.c | 5 ---
arch/arm/kernel/setup.c | 12 ++++++
arch/arm/kernel/vmlinux.lds.S | 5 ---
7 files changed, 36 insertions(+), 112 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index bfcd29d..d0ac8ea 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -195,6 +195,7 @@ config ARM_PATCH_PHYS_VIRT
default y
depends on !XIP_KERNEL && MMU
depends on !ARCH_REALVIEW || !SPARSEMEM
+ select ARM_RUNTIME_PATCH
help
Patch phys-to-virt and virt-to-phys translation functions at
boot and module load time according to the position of the
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 73cf03a..512d234 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,8 @@
#include <linux/types.h>
#include <linux/sizes.h>
+#include <asm/runtime-patch.h>
+
#ifdef CONFIG_NEED_MACH_MEMORY_H
#include <mach/memory.h>
#endif
@@ -151,35 +153,21 @@
#ifndef __virt_to_phys
#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
-/*
- * Constants used to force the right instruction encodings and shifts
- * so that all we need to do is modify the 8-bit constant field.
- */
-#define __PV_BITS_31_24 0x81000000
-
-extern unsigned long __pv_phys_offset;
-#define PHYS_OFFSET __pv_phys_offset
-
-#define __pv_stub(from,to,instr,type) \
- __asm__("@ __pv_stub\n" \
- "1: " instr " %0, %1, %2\n" \
- " .pushsection .pv_table,\"a\"\n" \
- " .long 1b\n" \
- " .popsection\n" \
- : "=r" (to) \
- : "r" (from), "I" (type))
+extern unsigned long __pv_offset;
+extern unsigned long __pv_phys_offset;
+#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
static inline unsigned long __virt_to_phys(unsigned long x)
{
unsigned long t;
- __pv_stub(x, t, "add", __PV_BITS_31_24);
+ early_patch_imm8("add", t, x, __pv_offset, 0);
return t;
}
static inline unsigned long __phys_to_virt(unsigned long x)
{
unsigned long t;
- __pv_stub(x, t, "sub", __PV_BITS_31_24);
+ early_patch_imm8("sub", t, x, __pv_offset, 0);
return t;
}
#else
diff --git a/arch/arm/kernel/armksyms.c b/arch/arm/kernel/armksyms.c
index 60d3b73..6b388f8 100644
--- a/arch/arm/kernel/armksyms.c
+++ b/arch/arm/kernel/armksyms.c
@@ -152,7 +152,3 @@ EXPORT_SYMBOL(mcount);
#endif
EXPORT_SYMBOL(__gnu_mcount_nc);
#endif
-
-#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
-EXPORT_SYMBOL(__pv_phys_offset);
-#endif
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 3db960e..69a3c09 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -117,7 +117,7 @@ ENTRY(stext)
bl __fixup_smp
#endif
#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
- bl __fixup_pv_table
+ bl __fixup_pv_offsets
#endif
bl __create_page_tables
@@ -511,92 +511,29 @@ ENDPROC(fixup_smp)
#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
-/* __fixup_pv_table - patch the stub instructions with the delta between
- * PHYS_OFFSET and PAGE_OFFSET, which is assumed to be 16MiB aligned and
- * can be expressed by an immediate shifter operand. The stub instruction
- * has a form of '(add|sub) rd, rn, #imm'.
+/*
+ * __fixup_pv_offsets - update __pv_offset and __pv_phys_offset based on the
+ * runtime location of the kernel.
*/
__HEAD
-__fixup_pv_table:
+__fixup_pv_offsets:
adr r0, 1f
- ldmia r0, {r3-r5, r7}
+ ldmia r0, {r3-r6}
sub r3, r0, r3 @ PHYS_OFFSET - PAGE_OFFSET
- add r4, r4, r3 @ adjust table start address
- add r5, r5, r3 @ adjust table end address
- add r7, r7, r3 @ adjust __pv_phys_offset address
- str r8, [r7] @ save computed PHYS_OFFSET to __pv_phys_offset
- mov r6, r3, lsr #24 @ constant for add/sub instructions
- teq r3, r6, lsl #24 @ must be 16MiB aligned
-THUMB( it ne @ cross section branch )
- bne __error
- str r6, [r7, #4] @ save to __pv_offset
- b __fixup_a_pv_table
-ENDPROC(__fixup_pv_table)
+ add r4, r4, r3 @ virt_to_phys(__pv_phys_offset)
+ add r5, r5, r3 @ virt_to_phys(__pv_offset)
+ add r6, r6, r3 @ virt_to_phys(PAGE_OFFSET) = PHYS_OFFSET
+ str r6, [r4] @ save __pv_phys_offset
+ str r3, [r5] @ save __pv_offset
+ mov pc, lr
+ENDPROC(__fixup_pv_offsets)
.align
1: .long .
- .long __pv_table_begin
- .long __pv_table_end
-2: .long __pv_phys_offset
-
- .text
-__fixup_a_pv_table:
-#ifdef CONFIG_THUMB2_KERNEL
- lsls r6, #24
- beq 2f
- clz r7, r6
- lsr r6, #24
- lsl r6, r7
- bic r6, #0x0080
- lsrs r7, #1
- orrcs r6, #0x0080
- orr r6, r6, r7, lsl #12
- orr r6, #0x4000
- b 2f
-1: add r7, r3
- ldrh ip, [r7, #2]
- and ip, 0x8f00
- orr ip, r6 @ mask in offset bits 31-24
- strh ip, [r7, #2]
-2: cmp r4, r5
- ldrcc r7, [r4], #4 @ use branch for delay slot
- bcc 1b
- bx lr
-#else
- b 2f
-1: ldr ip, [r7, r3]
- bic ip, ip, #0x000000ff
- orr ip, ip, r6 @ mask in offset bits 31-24
- str ip, [r7, r3]
-2: cmp r4, r5
- ldrcc r7, [r4], #4 @ use branch for delay slot
- bcc 1b
- mov pc, lr
+ .long __pv_phys_offset
+ .long __pv_offset
+ .long PAGE_OFFSET
#endif
-ENDPROC(__fixup_a_pv_table)
-
-ENTRY(fixup_pv_table)
- stmfd sp!, {r4 - r7, lr}
- ldr r2, 2f @ get address of __pv_phys_offset
- mov r3, #0 @ no offset
- mov r4, r0 @ r0 = table start
- add r5, r0, r1 @ r1 = table size
- ldr r6, [r2, #4] @ get __pv_offset
- bl __fixup_a_pv_table
- ldmfd sp!, {r4 - r7, pc}
-ENDPROC(fixup_pv_table)
- .align
-2: .long __pv_phys_offset
-
- .data
- .globl __pv_phys_offset
- .type __pv_phys_offset, %object
-__pv_phys_offset:
- .long 0
- .size __pv_phys_offset, . - __pv_phys_offset
-__pv_offset:
- .long 0
-#endif
#include "head-common.S"
diff --git a/arch/arm/kernel/module.c b/arch/arm/kernel/module.c
index 10a2922..021a940 100644
--- a/arch/arm/kernel/module.c
+++ b/arch/arm/kernel/module.c
@@ -317,11 +317,6 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
maps[i].txt_sec->sh_addr,
maps[i].txt_sec->sh_size);
#endif
-#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
- s = find_mod_section(hdr, sechdrs, ".pv_table");
- if (s)
- fixup_pv_table((void *)s->sh_addr, s->sh_size);
-#endif
s = find_mod_section(hdr, sechdrs, ".runtime.patch.table");
if (s) {
err = runtime_patch((void *)s->sh_addr, s->sh_size);
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 6fa7dbf..ba649ac 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -143,6 +143,18 @@ static union { char c[4]; unsigned long l; } endian_test __initdata = { { 'l', '
DEFINE_PER_CPU(struct cpuinfo_arm, cpu_data);
+#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
+
+/*
+ * These are initialized in head.S code prior to BSS getting cleared out.
+ * The initializers here prevent these from landing in the BSS section.
+ */
+unsigned long __pv_offset = 0xdeadbeef;
+unsigned long __pv_phys_offset = 0xdeadbeef;
+EXPORT_SYMBOL(__pv_phys_offset);
+
+#endif
+
/*
* Standard memory resources
*/
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index ea35ca0..2080111 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -162,11 +162,6 @@ SECTIONS
__smpalt_end = .;
}
#endif
- .init.pv_table : {
- __pv_table_begin = .;
- *(.pv_table)
- __pv_table_end = .;
- }
.init.runtime_patch_table : {
__runtime_patch_table_begin = .;
*(.runtime.patch.table)
--
1.7.9.5
* [PATCH v3 04/17] ARM: LPAE: use phys_addr_t on virt <--> phys conversion
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (2 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 05/17] ARM: LPAE: support 64-bit virt_to_phys patching Cyril Chemparathy
` (12 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch fixes up the types used when converting back and forth between
physical and virtual addresses.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/include/asm/memory.h | 26 ++++++++++++++++++--------
1 file changed, 18 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 512d234..a4fc01e 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -157,22 +157,32 @@ extern unsigned long __pv_offset;
extern unsigned long __pv_phys_offset;
#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
-static inline unsigned long __virt_to_phys(unsigned long x)
+static inline phys_addr_t __virt_to_phys(unsigned long x)
{
unsigned long t;
early_patch_imm8("add", t, x, __pv_offset, 0);
return t;
}
-static inline unsigned long __phys_to_virt(unsigned long x)
+static inline unsigned long __phys_to_virt(phys_addr_t x)
{
- unsigned long t;
- early_patch_imm8("sub", t, x, __pv_offset, 0);
+ unsigned long t, xlo = x;
+ early_patch_imm8("sub", t, xlo, __pv_offset, 0);
return t;
}
+
#else
-#define __virt_to_phys(x) ((x) - PAGE_OFFSET + PHYS_OFFSET)
-#define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET)
+
+static inline phys_addr_t __virt_to_phys(unsigned long x)
+{
+ return (phys_addr_t)x - PAGE_OFFSET + PHYS_OFFSET;
+}
+
+static inline unsigned long __phys_to_virt(phys_addr_t x)
+{
+ return x - PHYS_OFFSET + PAGE_OFFSET;
+}
+
#endif
#endif
#endif /* __ASSEMBLY__ */
@@ -210,14 +220,14 @@ static inline phys_addr_t virt_to_phys(const volatile void *x)
static inline void *phys_to_virt(phys_addr_t x)
{
- return (void *)(__phys_to_virt((unsigned long)(x)));
+ return (void *)__phys_to_virt(x);
}
/*
* Drivers should NOT use these either.
*/
#define __pa(x) __virt_to_phys((unsigned long)(x))
-#define __va(x) ((void *)__phys_to_virt((unsigned long)(x)))
+#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
/*
--
1.7.9.5
* [PATCH v3 05/17] ARM: LPAE: support 64-bit virt_to_phys patching
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (3 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 04/17] ARM: LPAE: use phys_addr_t on virt <--> phys conversion Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 06/17] ARM: LPAE: use signed arithmetic for mask definitions Cyril Chemparathy
` (11 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy
This patch adds support for 64-bit physical addresses in virt_to_phys()
patching. This does not do real 64-bit add/sub, but instead patches in the
upper 32-bits of the phys_offset directly into the output of virt_to_phys.
There is no corresponding change on the phys_to_virt() side, because
computations on the upper 32-bits would be discarded anyway.
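As a rough illustration (hypothetical constants, not part of the patch), the
patched sequence effectively computes the following, with the low word
produced by the patched add and the high word patched in directly as a mov:
    #include <stdint.h>
    static uint64_t patched_virt_to_phys(uint32_t x)
    {
            uint32_t pv_offset = 0x40000000; /* low 32-bit delta (patched "add") */
            uint32_t phys_high = 0x8;        /* upper 32 bits (patched "mov") */
            uint32_t lo = x + pv_offset;     /* 32-bit add, any carry discarded */
            return ((uint64_t)phys_high << 32) | lo;
    }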
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
---
arch/arm/include/asm/memory.h | 38 ++++++++++++++++++++++++++++++++++++--
arch/arm/kernel/head.S | 4 ++++
arch/arm/kernel/setup.c | 2 +-
3 files changed, 41 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index a4fc01e..0643454 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -154,13 +154,47 @@
#ifdef CONFIG_ARM_PATCH_PHYS_VIRT
extern unsigned long __pv_offset;
-extern unsigned long __pv_phys_offset;
+extern phys_addr_t __pv_phys_offset;
#define PHYS_OFFSET __virt_to_phys(PAGE_OFFSET)
static inline phys_addr_t __virt_to_phys(unsigned long x)
{
- unsigned long t;
+ phys_addr_t t;
+
+#ifndef CONFIG_ARM_LPAE
early_patch_imm8("add", t, x, __pv_offset, 0);
+#else
+ unsigned long __tmp;
+
+#ifndef __ARMEB__
+#define PV_PHYS_HIGH "(__pv_phys_offset + 4)"
+#else
+#define PV_PHYS_HIGH "__pv_phys_offset"
+#endif
+
+ early_patch_stub(
+ /* type */ PATCH_IMM8,
+ /* code */
+ "ldr %[tmp], =__pv_offset\n"
+ "ldr %[tmp], [%[tmp]]\n"
+ "add %Q[to], %[from], %[tmp]\n"
+ "ldr %[tmp], =" PV_PHYS_HIGH "\n"
+ "ldr %[tmp], [%[tmp]]\n"
+ "mov %R[to], %[tmp]\n",
+ /* pad */ 4,
+ /* patch_data */
+ ".long __pv_offset\n"
+ "add %Q[to], %[from], %[imm]\n"
+ ".long " PV_PHYS_HIGH "\n"
+ "mov %R[to], %[imm]\n",
+ /* operands */
+ : [to] "=r" (t),
+ [tmp] "=&r" (__tmp)
+ : [from] "r" (x),
+ [imm] "I" (__IMM8),
+ "i" (&__pv_offset),
+ "i" (&__pv_phys_offset));
+#endif
return t;
}
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 69a3c09..61fb8df 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -530,7 +530,11 @@ ENDPROC(__fixup_pv_offsets)
.align
1: .long .
+#if defined(CONFIG_ARM_LPAE) && defined(__ARMEB__)
+ .long __pv_phys_offset + 4
+#else
.long __pv_phys_offset
+#endif
.long __pv_offset
.long PAGE_OFFSET
#endif
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index ba649ac..94d9853 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -150,7 +150,7 @@ DEFINE_PER_CPU(struct cpuinfo_arm, cpu_data);
* The initializers here prevent these from landing in the BSS section.
*/
unsigned long __pv_offset = 0xdeadbeef;
-unsigned long __pv_phys_offset = 0xdeadbeef;
+phys_addr_t __pv_phys_offset = 0xdeadbeef;
EXPORT_SYMBOL(__pv_phys_offset);
#endif
--
1.7.9.5
* [PATCH v3 06/17] ARM: LPAE: use signed arithmetic for mask definitions
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (4 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 05/17] ARM: LPAE: support 64-bit virt_to_phys patching Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 07/17] ARM: LPAE: use phys_addr_t in alloc_init_pud() Cyril Chemparathy
` (10 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing
unsigned long math truncates these masks to 32 bits. This clearly does bad
things on PAE systems.
This patch fixes this problem by defining these masks as signed quantities.
We then rely on sign extension to do the right thing.
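A standalone sketch of the truncation being fixed (illustrative only; it
assumes a 32-bit unsigned long, as on ARM, and PAGE_SHIFT = 12):
    #include <stdio.h>
    #include <stdint.h>
    int main(void)
    {
            uint64_t phys  = 0x880001234ULL;        /* LPAE address above 4G */
            uint32_t umask = ~(uint32_t)(4096 - 1); /* old mask: 32-bit unsigned */
            int32_t  smask = ~((1 << 12) - 1);      /* new mask: signed, -4096 */
            /* zero-extended mask drops the high bits: prints 0x80001000 */
            printf("unsigned: %#llx\n", (unsigned long long)(phys & umask));
            /* sign-extended mask keeps them: prints 0x880001000 */
            printf("signed:   %#llx\n", (unsigned long long)(phys & smask));
            return 0;
    }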
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/include/asm/page.h | 2 +-
arch/arm/include/asm/pgtable-3level.h | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index ecf9019..1e0fe08 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -13,7 +13,7 @@
/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT)
-#define PAGE_MASK (~(PAGE_SIZE-1))
+#define PAGE_MASK (~((1 << PAGE_SHIFT) - 1))
#ifndef __ASSEMBLY__
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index b249035..ae39d11 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -48,16 +48,16 @@
#define PMD_SHIFT 21
#define PMD_SIZE (1UL << PMD_SHIFT)
-#define PMD_MASK (~(PMD_SIZE-1))
+#define PMD_MASK (~((1 << PMD_SHIFT) - 1))
#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
-#define PGDIR_MASK (~(PGDIR_SIZE-1))
+#define PGDIR_MASK (~((1 << PGDIR_SHIFT) - 1))
/*
* section address mask and size definitions.
*/
#define SECTION_SHIFT 21
#define SECTION_SIZE (1UL << SECTION_SHIFT)
-#define SECTION_MASK (~(SECTION_SIZE-1))
+#define SECTION_MASK (~((1 << SECTION_SHIFT) - 1))
#define USER_PTRS_PER_PGD (PAGE_OFFSET / PGDIR_SIZE)
--
1.7.9.5
* [PATCH v3 07/17] ARM: LPAE: use phys_addr_t in alloc_init_pud()
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (5 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 06/17] ARM: LPAE: use signed arithmetic for mask definitions Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 08/17] ARM: LPAE: use phys_addr_t in free_memmap() Cyril Chemparathy
` (9 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Vitaly Andrianov, Cyril Chemparathy
From: Vitaly Andrianov <vitalya@ti.com>
This patch fixes the alloc_init_pud() function to use phys_addr_t instead of
unsigned long when passing in the phys argument.
This is an extension to commit 97092e0c56830457af0639f6bd904537a150ea4a (ARM:
pgtable: use phys_addr_t for physical addresses), which applied similar changes
elsewhere in the ARM memory management code.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/mm/mmu.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 941dfb9..95daa67 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -612,7 +612,8 @@ static void __init alloc_init_section(pud_t *pud, unsigned long addr,
}
static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
- unsigned long end, unsigned long phys, const struct mem_type *type)
+ unsigned long end, phys_addr_t phys,
+ const struct mem_type *type)
{
pud_t *pud = pud_offset(pgd, addr);
unsigned long next;
--
1.7.9.5
* [PATCH v3 08/17] ARM: LPAE: use phys_addr_t in free_memmap()
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (6 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 07/17] ARM: LPAE: use phys_addr_t in alloc_init_pud() Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size Cyril Chemparathy
` (8 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Vitaly Andrianov, Cyril Chemparathy
From: Vitaly Andrianov <vitalya@ti.com>
The free_memmap() function was mistakenly using unsigned long to represent
physical addresses. This breaks on PAE systems where memory could be placed
above the 32-bit addressable limit.
This patch fixes this function to properly use phys_addr_t instead.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/mm/init.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad722f1..1c5151a 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -457,7 +457,7 @@ static inline void
free_memmap(unsigned long start_pfn, unsigned long end_pfn)
{
struct page *start_pg, *end_pg;
- unsigned long pg, pgend;
+ phys_addr_t pg, pgend;
/*
* Convert start_pfn/end_pfn to a struct page pointer.
@@ -469,8 +469,8 @@ free_memmap(unsigned long start_pfn, unsigned long end_pfn)
* Convert to physical addresses, and
* round start upwards and end downwards.
*/
- pg = (unsigned long)PAGE_ALIGN(__pa(start_pg));
- pgend = (unsigned long)__pa(end_pg) & PAGE_MASK;
+ pg = PAGE_ALIGN(__pa(start_pg));
+ pgend = __pa(end_pg) & PAGE_MASK;
/*
* If there are free pages between these,
--
1.7.9.5
* [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (7 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 08/17] ARM: LPAE: use phys_addr_t in free_memmap() Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-21 18:30 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm() Cyril Chemparathy
` (7 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Vitaly Andrianov, Cyril Chemparathy
From: Vitaly Andrianov <vitalya@ti.com>
This patch fixes the initrd setup code to use phys_addr_t instead of assuming
32-bit addressing. Without this we cannot boot on systems where initrd is
located above the 4G physical address limit.
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/mm/init.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 1c5151a..87ee0ec 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -36,12 +36,13 @@
#include "mm.h"
-static unsigned long phys_initrd_start __initdata = 0;
+static phys_addr_t phys_initrd_start __initdata = 0;
static unsigned long phys_initrd_size __initdata = 0;
static int __init early_initrd(char *p)
{
- unsigned long start, size;
+ phys_addr_t start;
+ unsigned long size;
char *endp;
start = memparse(p, &endp);
@@ -347,14 +348,14 @@ void __init arm_memblock_init(struct meminfo *mi, struct machine_desc *mdesc)
#ifdef CONFIG_BLK_DEV_INITRD
if (phys_initrd_size &&
!memblock_is_region_memory(phys_initrd_start, phys_initrd_size)) {
- pr_err("INITRD: 0x%08lx+0x%08lx is not a memory region - disabling initrd\n",
- phys_initrd_start, phys_initrd_size);
+ pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region - disabling initrd\n",
+ (u64)phys_initrd_start, phys_initrd_size);
phys_initrd_start = phys_initrd_size = 0;
}
if (phys_initrd_size &&
memblock_is_region_reserved(phys_initrd_start, phys_initrd_size)) {
- pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region - disabling initrd\n",
- phys_initrd_start, phys_initrd_size);
+ pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region - disabling initrd\n",
+ (u64)phys_initrd_start, phys_initrd_size);
phys_initrd_start = phys_initrd_size = 0;
}
if (phys_initrd_size) {
--
1.7.9.5
* [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm()
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (8 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-21 18:33 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 11/17] ARM: LPAE: use 64-bit accessors for TTBR registers Cyril Chemparathy
` (6 subsequent siblings)
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch modifies the switch_mm() processor functions to use phys_addr_t.
On LPAE systems, we now honor the upper 32-bits of the physical address that
is being passed in, and program these into TTBR as expected.
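As a sketch of the resulting behaviour (illustrative C, not from the patch),
the value written to TTBR0 by the mcrr now carries both the wider pgd address
and the ASID:
    #include <stdint.h>
    static uint64_t ttbr0_value(uint64_t pgd_phys, uint32_t context_id)
    {
            /* the 8-bit ASID from mm->context.id lands in TTBR0 bits [55:48];
             * the pgd physical address is no longer truncated to 32 bits */
            return pgd_phys | ((uint64_t)(context_id & 0xff) << 48);
    }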
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
---
arch/arm/include/asm/proc-fns.h | 4 ++--
arch/arm/mm/proc-v7-3level.S | 17 +++++++++++++----
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index f3628fb..75b5f14 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -60,7 +60,7 @@ extern struct processor {
/*
* Set the page table
*/
- void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *mm);
+ void (*switch_mm)(phys_addr_t pgd_phys, struct mm_struct *mm);
/*
* Set a possibly extended PTE. Non-extended PTEs should
* ignore 'ext'.
@@ -82,7 +82,7 @@ extern void cpu_proc_init(void);
extern void cpu_proc_fin(void);
extern int cpu_do_idle(void);
extern void cpu_dcache_clean_area(void *, int);
-extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
+extern void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
#ifdef CONFIG_ARM_LPAE
extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte);
#else
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 8de0f1d..c4f4251 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -39,6 +39,14 @@
#define TTB_FLAGS_SMP (TTB_IRGN_WBWA|TTB_S|TTB_RGN_OC_WBWA)
#define PMD_FLAGS_SMP (PMD_SECT_WBWA|PMD_SECT_S)
+#ifndef __ARMEB__
+# define rpgdl r0
+# define rpgdh r1
+#else
+# define rpgdl r1
+# define rpgdh r0
+#endif
+
/*
* cpu_v7_switch_mm(pgd_phys, tsk)
*
@@ -47,10 +55,11 @@
*/
ENTRY(cpu_v7_switch_mm)
#ifdef CONFIG_MMU
- ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id
- and r3, r1, #0xff
- mov r3, r3, lsl #(48 - 32) @ ASID
- mcrr p15, 0, r0, r3, c2 @ set TTB 0
+ ldr r2, [r2, #MM_CONTEXT_ID] @ get mm->context.id
+ and r2, r2, #0xff
+ mov r2, r2, lsl #(48 - 32) @ ASID
+ orr rpgdh, rpgdh, r2 @ upper 32-bits of pgd phys
+ mcrr p15, 0, rpgdl, rpgdh, c2 @ set TTB 0
isb
#endif
mov pc, lr
--
1.7.9.5
* [PATCH v3 11/17] ARM: LPAE: use 64-bit accessors for TTBR registers
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (9 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm() Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 12/17] ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem Cyril Chemparathy
` (5 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch adds TTBR accessor macros, and modifies cpu_get_pgd() and
the LPAE version of cpu_set_reserved_ttbr0() to use these instead.
In the process, we also fix these functions to correctly handle cases
where the physical address lies beyond the 4G limit of 32-bit addressing.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/include/asm/proc-fns.h | 24 +++++++++++++++++++-----
arch/arm/mm/context.c | 9 ++-------
2 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
index 75b5f14..2d270b8 100644
--- a/arch/arm/include/asm/proc-fns.h
+++ b/arch/arm/include/asm/proc-fns.h
@@ -116,13 +116,27 @@ extern void cpu_resume(void);
#define cpu_switch_mm(pgd,mm) cpu_do_switch_mm(virt_to_phys(pgd),mm)
#ifdef CONFIG_ARM_LPAE
+
+#define cpu_get_ttbr(nr) \
+ ({ \
+ u64 ttbr; \
+ __asm__("mrrc p15, " #nr ", %Q0, %R0, c2" \
+ : "=r" (ttbr) \
+ : : ); \
+ ttbr; \
+ })
+
+#define cpu_set_ttbr(nr, val) \
+ do { \
+ u64 ttbr = val; \
+ __asm__("mcrr p15, " #nr ", %Q0, %R0, c2" \
+ : : "r" (ttbr) \
+ : "cc"); \
+ } while (0)
+
#define cpu_get_pgd() \
({ \
- unsigned long pg, pg2; \
- __asm__("mrrc p15, 0, %0, %1, c2" \
- : "=r" (pg), "=r" (pg2) \
- : \
- : "cc"); \
+ u64 pg = cpu_get_ttbr(0); \
pg &= ~(PTRS_PER_PGD*sizeof(pgd_t)-1); \
(pgd_t *)phys_to_virt(pg); \
})
diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 4e07eec..f03437e 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -16,6 +16,7 @@
#include <asm/mmu_context.h>
#include <asm/thread_notify.h>
#include <asm/tlbflush.h>
+#include <asm/proc-fns.h>
static DEFINE_RAW_SPINLOCK(cpu_asid_lock);
unsigned int cpu_last_asid = ASID_FIRST_VERSION;
@@ -23,17 +24,11 @@ unsigned int cpu_last_asid = ASID_FIRST_VERSION;
#ifdef CONFIG_ARM_LPAE
void cpu_set_reserved_ttbr0(void)
{
- unsigned long ttbl = __pa(swapper_pg_dir);
- unsigned long ttbh = 0;
-
/*
* Set TTBR0 to swapper_pg_dir which contains only global entries. The
* ASID is set to 0.
*/
- asm volatile(
- " mcrr p15, 0, %0, %1, c2 @ set TTBR0\n"
- :
- : "r" (ttbl), "r" (ttbh));
+ cpu_set_ttbr(0, __pa(swapper_pg_dir));
isb();
}
#else
--
1.7.9.5
* [PATCH v3 12/17] ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (10 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 11/17] ARM: LPAE: use 64-bit accessors for TTBR registers Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 13/17] ARM: LPAE: factor out T1SZ and TTBR1 computations Cyril Chemparathy
` (4 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch adds an architecture-defined override for ARCH_LOW_ADDRESS_LIMIT.
On PAE systems, the absence of this override causes bootmem to incorrectly
limit itself to 32-bit addressable physical memory.
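For context, the generic bootmem code falls back to a 32-bit limit when the
architecture does not define one; roughly (paraphrased from
include/linux/bootmem.h of this vintage):
	#ifndef ARCH_LOW_ADDRESS_LIMIT
	#define ARCH_LOW_ADDRESS_LIMIT	0xffffffffUL
	#endif
With all of Keystone's memory above 4G, that default leaves bootmem nothing
to allocate from; overriding it to PHYS_MASK lifts the limit to the full
LPAE-addressable range.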
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
---
arch/arm/include/asm/memory.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 0643454..73caaeb 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -307,6 +307,8 @@ static inline __deprecated void *bus_to_virt(unsigned long x)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory)
+#define ARCH_LOW_ADDRESS_LIMIT PHYS_MASK
+
#endif
#include <asm-generic/memory_model.h>
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v3 13/17] ARM: LPAE: factor out T1SZ and TTBR1 computations
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (11 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 12/17] ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 14/17] ARM: LPAE: accomodate >32-bit addresses for page table base Cyril Chemparathy
` (3 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch moves the TTBR1 offset calculation and the T1SZ calculation out
of the TTB setup assembly code.  This should not affect functionality in
any way, but improves the readability of this code, as well as that of
subsequent patches in this series.
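As a sanity check, the factored-out constants work out as follows (a worked
example based on the VMSPLIT configurations handled by the patch, not part of
the patch itself):
	/*
	 *   CONFIG_VMSPLIT_3G: PAGE_OFFSET = 0xc0000000
	 *     TTBR1_SIZE = ((0xc0000000 >> 30) - 1) << 16 = 2 << 16   (T1SZ = 2)
	 *   CONFIG_VMSPLIT_2G: PAGE_OFFSET = 0x80000000
	 *     TTBR1_SIZE = ((0x80000000 >> 30) - 1) << 16 = 1 << 16   (T1SZ = 1)
	 *
	 * T1SZ lives in bits [18:16] of TTBCR, hence the << 16.
	 */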
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/include/asm/pgtable-3level-hwdef.h | 10 ++++++++++
arch/arm/mm/proc-v7-3level.S | 16 ++++------------
2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h
index d795282..b501650 100644
--- a/arch/arm/include/asm/pgtable-3level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-3level-hwdef.h
@@ -74,4 +74,14 @@
#define PHYS_MASK_SHIFT (40)
#define PHYS_MASK ((1ULL << PHYS_MASK_SHIFT) - 1)
+#if defined CONFIG_VMSPLIT_2G
+#define TTBR1_OFFSET (1 << 4) /* skip two L1 entries */
+#elif defined CONFIG_VMSPLIT_3G
+#define TTBR1_OFFSET (4096 * (1 + 3)) /* only L2, skip pgd + 3*pmd */
+#else
+#define TTBR1_OFFSET 0
+#endif
+
+#define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16)
+
#endif
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index c4f4251..5d93f00 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -128,18 +128,10 @@ ENDPROC(cpu_v7_set_pte_ext)
* booting secondary CPUs would end up using TTBR1 for the identity
* mapping set up in TTBR0.
*/
- bhi 9001f @ PHYS_OFFSET > PAGE_OFFSET?
- orr \tmp, \tmp, #(((PAGE_OFFSET >> 30) - 1) << 16) @ TTBCR.T1SZ
-#if defined CONFIG_VMSPLIT_2G
- /* PAGE_OFFSET == 0x80000000, T1SZ == 1 */
- add \ttbr1, \ttbr1, #1 << 4 @ skip two L1 entries
-#elif defined CONFIG_VMSPLIT_3G
- /* PAGE_OFFSET == 0xc0000000, T1SZ == 2 */
- add \ttbr1, \ttbr1, #4096 * (1 + 3) @ only L2 used, skip pgd+3*pmd
-#endif
- /* CONFIG_VMSPLIT_1G does not need TTBR1 adjustment */
-9001: mcr p15, 0, \tmp, c2, c0, 2 @ TTB control register
- mcrr p15, 1, \ttbr1, \zero, c2 @ load TTBR1
+ orrls \tmp, \tmp, #TTBR1_SIZE @ TTBCR.T1SZ
+ mcr p15, 0, \tmp, c2, c0, 2 @ TTBCR
+ addls \ttbr1, \ttbr1, #TTBR1_OFFSET
+ mcrr p15, 1, \ttbr1, \zero, c2 @ load TTBR1
.endm
__CPUINIT
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v3 14/17] ARM: LPAE: accomodate >32-bit addresses for page table base
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (12 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 13/17] ARM: LPAE: factor out T1SZ and TTBR1 computations Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 15/17] ARM: mm: use physical addresses in highmem sanity checks Cyril Chemparathy
` (2 subsequent siblings)
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch redefines the early boot-time use of the r4 register to steal a few
low-order bits (ARCH_PGD_SHIFT bits) on LPAE systems.  This allows for up to
38-bit physical addresses.
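As a worked example (assuming L1_CACHE_SHIFT == 6, i.e. 64-byte cache lines on
ARMv7):
	/*
	 * value placed in r4 = pgd_phys >> 6
	 *   32 register bits + 6 shifted-out bits = 38 usable address bits
	 *
	 * pgd_phys recovered = (r4 value) << 6
	 *   nothing is lost because the pgd is at least cache-line aligned,
	 *   i.e. (pgd_phys & ARCH_PGD_MASK) == 0
	 */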
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
---
arch/arm/include/asm/memory.h | 15 +++++++++++++++
arch/arm/kernel/head.S | 10 ++++------
arch/arm/kernel/smp.c | 11 +++++++++--
arch/arm/mm/proc-v7-3level.S | 8 ++++++++
4 files changed, 36 insertions(+), 8 deletions(-)
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 73caaeb..edf540b 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -18,6 +18,7 @@
#include <linux/types.h>
#include <linux/sizes.h>
+#include <asm/cache.h>
#include <asm/runtime-patch.h>
#ifdef CONFIG_NEED_MACH_MEMORY_H
@@ -143,6 +144,20 @@
#define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page)))
#define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys)))
+/*
+ * Minimum guaranteed alignment in pgd_alloc(). The page table pointers passed
+ * around in head.S and proc-*.S are shifted by this amount, in order to
+ * leave spare high bits for systems with physical address extension. This
+ * does not fully accommodate the 40-bit addressing capability of ARM LPAE, but
+ * gives us about 38-bits or so.
+ */
+#ifdef CONFIG_ARM_LPAE
+#define ARCH_PGD_SHIFT L1_CACHE_SHIFT
+#else
+#define ARCH_PGD_SHIFT 0
+#endif
+#define ARCH_PGD_MASK ((1 << ARCH_PGD_SHIFT) - 1)
+
#ifndef __ASSEMBLY__
/*
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 61fb8df..9664db0 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -152,7 +152,7 @@ ENDPROC(stext)
*
* Returns:
* r0, r3, r5-r7 corrupted
- * r4 = physical page table address
+ * r4 = page table (see ARCH_PGD_SHIFT in asm/memory.h)
*/
__create_page_tables:
pgtbl r4, r8 @ page table address
@@ -306,6 +306,7 @@ __create_page_tables:
#endif
#ifdef CONFIG_ARM_LPAE
sub r4, r4, #0x1000 @ point to the PGD table
+ mov r4, r4, lsr #ARCH_PGD_SHIFT
#endif
mov pc, lr
ENDPROC(__create_page_tables)
@@ -379,7 +380,7 @@ __secondary_data:
* r0 = cp#15 control register
* r1 = machine ID
* r2 = atags or dtb pointer
- * r4 = page table pointer
+ * r4 = page table (see ARCH_PGD_SHIFT in asm/memory.h)
* r9 = processor ID
* r13 = *virtual* address to jump to upon completion
*/
@@ -398,10 +399,7 @@ __enable_mmu:
#ifdef CONFIG_CPU_ICACHE_DISABLE
bic r0, r0, #CR_I
#endif
-#ifdef CONFIG_ARM_LPAE
- mov r5, #0
- mcrr p15, 0, r4, r5, c2 @ load TTBR0
-#else
+#ifndef CONFIG_ARM_LPAE
mov r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \
domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \
domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 8e03567..349fdf3 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -61,6 +61,13 @@ enum ipi_msg_type {
static DECLARE_COMPLETION(cpu_running);
+static unsigned long get_arch_pgd(pgd_t *pgd)
+{
+ phys_addr_t pgdir = virt_to_phys(pgd);
+ BUG_ON(pgdir & ARCH_PGD_MASK);
+ return pgdir >> ARCH_PGD_SHIFT;
+}
+
int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *idle)
{
int ret;
@@ -70,8 +77,8 @@ int __cpuinit __cpu_up(unsigned int cpu, struct task_struct *idle)
* its stack and the page tables.
*/
secondary_data.stack = task_stack_page(idle) + THREAD_START_SP;
- secondary_data.pgdir = virt_to_phys(idmap_pgd);
- secondary_data.swapper_pg_dir = virt_to_phys(swapper_pg_dir);
+ secondary_data.pgdir = get_arch_pgd(idmap_pgd);
+ secondary_data.swapper_pg_dir = get_arch_pgd(swapper_pg_dir);
__cpuc_flush_dcache_area(&secondary_data, sizeof(secondary_data));
outer_clean_range(__pa(&secondary_data), __pa(&secondary_data + 1));
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 5d93f00..e920763 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -111,6 +111,7 @@ ENDPROC(cpu_v7_set_pte_ext)
*/
.macro v7_ttb_setup, zero, ttbr0, ttbr1, tmp
ldr \tmp, =swapper_pg_dir @ swapper_pg_dir virtual address
+ mov \tmp, \tmp, lsr #ARCH_PGD_SHIFT
cmp \ttbr1, \tmp @ PHYS_OFFSET > PAGE_OFFSET? (branch below)
mrc p15, 0, \tmp, c2, c0, 2 @ TTB control register
orr \tmp, \tmp, #TTB_EAE
@@ -130,8 +131,15 @@ ENDPROC(cpu_v7_set_pte_ext)
*/
orrls \tmp, \tmp, #TTBR1_SIZE @ TTBCR.T1SZ
mcr p15, 0, \tmp, c2, c0, 2 @ TTBCR
+ mov \tmp, \ttbr1, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits
+ mov \ttbr1, \ttbr1, lsl #ARCH_PGD_SHIFT @ lower bits
addls \ttbr1, \ttbr1, #TTBR1_OFFSET
mcrr p15, 1, \ttbr1, \zero, c2 @ load TTBR1
+ mov \tmp, \ttbr0, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits
+ mov \ttbr0, \ttbr0, lsl #ARCH_PGD_SHIFT @ lower bits
+ mcrr p15, 0, \ttbr0, \zero, c2 @ load TTBR0
+ mcrr p15, 1, \ttbr1, \zero, c2 @ load TTBR1
+ mcrr p15, 0, \ttbr0, \zero, c2 @ load TTBR0
.endm
__CPUINIT
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v3 15/17] ARM: mm: use physical addresses in highmem sanity checks
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (13 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 14/17] ARM: LPAE: accomodate >32-bit addresses for page table base Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 16/17] ARM: mm: cleanup checks for membank overlap with vmalloc area Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 17/17] ARM: mm: clean up membank size limit checks Cyril Chemparathy
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch modifies the highmem sanity checking code to use physical addresses
instead of virtual addresses.  This change eliminates the wrap-around problems
associated with the original virtual-address-based checks, and simplifies the
code a bit.
The one constraint imposed here is that low physical memory must be mapped in
a monotonically increasing fashion if there are multiple banks of memory,
i.e., for virtual addresses x < y, pa(x) < pa(y) must hold.
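To illustrate the wrap-around problem with the old virtual checks (hypothetical
numbers, purely for illustration):
	/*
	 *   PAGE_OFFSET  = 0xc0000000
	 *   PHYS_OFFSET  = 0x8_0000_0000   (assumed LPAE platform, RAM above 4G)
	 *   bank->start  = 0x9_0000_0000   (4G beyond the start of lowmem)
	 *
	 *   __va(bank->start) = 0x9_0000_0000 - 0x8_0000_0000 + 0xc0000000
	 *                     = 0x1_c000_0000, truncated to 0xc0000000 in 32 bits,
	 *   which passes the old (__va(start) >= vmalloc_min || < PAGE_OFFSET)
	 *   test as if the bank were lowmem.  Comparing physical addresses
	 *   against vmalloc_limit avoids the truncation entirely.
	 */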
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/mm/mmu.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 95daa67..ac4e7ae 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -913,6 +913,7 @@ phys_addr_t arm_lowmem_limit __initdata = 0;
void __init sanity_check_meminfo(void)
{
int i, j, highmem = 0;
+ phys_addr_t vmalloc_limit = __pa(vmalloc_min - 1) + 1;
for (i = 0, j = 0; i < meminfo.nr_banks; i++) {
struct membank *bank = &meminfo.bank[j];
@@ -922,8 +923,7 @@ void __init sanity_check_meminfo(void)
highmem = 1;
#ifdef CONFIG_HIGHMEM
- if (__va(bank->start) >= vmalloc_min ||
- __va(bank->start) < (void *)PAGE_OFFSET)
+ if (bank->start >= vmalloc_limit)
highmem = 1;
bank->highmem = highmem;
@@ -932,8 +932,8 @@ void __init sanity_check_meminfo(void)
* Split those memory banks which are partially overlapping
* the vmalloc area greatly simplifying things later.
*/
- if (!highmem && __va(bank->start) < vmalloc_min &&
- bank->size > vmalloc_min - __va(bank->start)) {
+ if (!highmem && bank->start < vmalloc_limit &&
+ bank->size > vmalloc_limit - bank->start) {
if (meminfo.nr_banks >= NR_BANKS) {
printk(KERN_CRIT "NR_BANKS too low, "
"ignoring high memory\n");
@@ -942,12 +942,12 @@ void __init sanity_check_meminfo(void)
(meminfo.nr_banks - i) * sizeof(*bank));
meminfo.nr_banks++;
i++;
- bank[1].size -= vmalloc_min - __va(bank->start);
- bank[1].start = __pa(vmalloc_min - 1) + 1;
+ bank[1].size -= vmalloc_limit - bank->start;
+ bank[1].start = vmalloc_limit;
bank[1].highmem = highmem = 1;
j++;
}
- bank->size = vmalloc_min - __va(bank->start);
+ bank->size = vmalloc_limit - bank->start;
}
#else
bank->highmem = highmem;
@@ -967,8 +967,7 @@ void __init sanity_check_meminfo(void)
* Check whether this memory bank would entirely overlap
* the vmalloc area.
*/
- if (__va(bank->start) >= vmalloc_min ||
- __va(bank->start) < (void *)PAGE_OFFSET) {
+ if (bank->start >= vmalloc_limit) {
printk(KERN_NOTICE "Ignoring RAM at %.8llx-%.8llx "
"(vmalloc region overlap).\n",
(unsigned long long)bank->start,
@@ -980,9 +979,8 @@ void __init sanity_check_meminfo(void)
* Check whether this memory bank would partially overlap
* the vmalloc area.
*/
- if (__va(bank->start + bank->size - 1) >= vmalloc_min ||
- __va(bank->start + bank->size - 1) <= __va(bank->start)) {
- unsigned long newsize = vmalloc_min - __va(bank->start);
+ if (bank->start + bank->size > vmalloc_limit) {
+ unsigned long newsize = vmalloc_limit - bank->start;
printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx "
"to -%.8llx (vmalloc region overlap).\n",
(unsigned long long)bank->start,
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v3 16/17] ARM: mm: cleanup checks for membank overlap with vmalloc area
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (14 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 15/17] ARM: mm: use physical addresses in highmem sanity checks Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 17/17] ARM: mm: clean up membank size limit checks Cyril Chemparathy
16 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
On Keystone platforms, physical memory is entirely outside the 32-bit
addressable range. Therefore, the (bank->start > ULONG_MAX) check below marks
the entire system memory as highmem, and this causes unpleasantness all over.
This patch eliminates the extra bank start check (against ULONG_MAX) by
checking bank->start against the physical address corresponding to vmalloc_min
instead.
In the process, this patch also cleans up parts of the highmem sanity check
code by removing what has now become a redundant check for banks that entirely
overlap with the vmalloc range.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
---
arch/arm/mm/mmu.c | 19 +------------------
1 file changed, 1 insertion(+), 18 deletions(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ac4e7ae..6c35483 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -919,15 +919,12 @@ void __init sanity_check_meminfo(void)
struct membank *bank = &meminfo.bank[j];
*bank = meminfo.bank[i];
- if (bank->start > ULONG_MAX)
- highmem = 1;
-
-#ifdef CONFIG_HIGHMEM
if (bank->start >= vmalloc_limit)
highmem = 1;
bank->highmem = highmem;
+#ifdef CONFIG_HIGHMEM
/*
* Split those memory banks which are partially overlapping
* the vmalloc area greatly simplifying things later.
@@ -950,8 +947,6 @@ void __init sanity_check_meminfo(void)
bank->size = vmalloc_limit - bank->start;
}
#else
- bank->highmem = highmem;
-
/*
* Highmem banks not allowed with !CONFIG_HIGHMEM.
*/
@@ -964,18 +959,6 @@ void __init sanity_check_meminfo(void)
}
/*
- * Check whether this memory bank would entirely overlap
- * the vmalloc area.
- */
- if (bank->start >= vmalloc_limit) {
- printk(KERN_NOTICE "Ignoring RAM at %.8llx-%.8llx "
- "(vmalloc region overlap).\n",
- (unsigned long long)bank->start,
- (unsigned long long)bank->start + bank->size - 1);
- continue;
- }
-
- /*
* Check whether this memory bank would partially overlap
* the vmalloc area.
*/
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* [PATCH v3 17/17] ARM: mm: clean up membank size limit checks
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
` (15 preceding siblings ...)
2012-09-11 17:39 ` [PATCH v3 16/17] ARM: mm: cleanup checks for membank overlap with vmalloc area Cyril Chemparathy
@ 2012-09-11 17:39 ` Cyril Chemparathy
2012-09-21 18:42 ` Nicolas Pitre
16 siblings, 1 reply; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-11 17:39 UTC (permalink / raw)
To: linux-kernel, linux-arm-kernel
Cc: arnd, catalin.marinas, grant.likely, nico, linux, will.deacon,
Cyril Chemparathy, Vitaly Andrianov
This patch cleans up the highmem sanity check code by simplifying the range
checks with a pre-calculated size_limit. This patch should otherwise have no
functional impact on behavior.
This patch also removes a redundant (bank->start < vmalloc_limit) check, since
this is already covered by the !highmem condition.
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
---
arch/arm/mm/mmu.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 6c35483..50d9df5 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -917,10 +917,15 @@ void __init sanity_check_meminfo(void)
for (i = 0, j = 0; i < meminfo.nr_banks; i++) {
struct membank *bank = &meminfo.bank[j];
+ phys_addr_t size_limit;
+
*bank = meminfo.bank[i];
+ size_limit = bank->size;
if (bank->start >= vmalloc_limit)
highmem = 1;
+ else
+ size_limit = vmalloc_limit - bank->start;
bank->highmem = highmem;
@@ -929,8 +934,7 @@ void __init sanity_check_meminfo(void)
* Split those memory banks which are partially overlapping
* the vmalloc area greatly simplifying things later.
*/
- if (!highmem && bank->start < vmalloc_limit &&
- bank->size > vmalloc_limit - bank->start) {
+ if (!highmem && bank->size > size_limit) {
if (meminfo.nr_banks >= NR_BANKS) {
printk(KERN_CRIT "NR_BANKS too low, "
"ignoring high memory\n");
@@ -939,12 +943,12 @@ void __init sanity_check_meminfo(void)
(meminfo.nr_banks - i) * sizeof(*bank));
meminfo.nr_banks++;
i++;
- bank[1].size -= vmalloc_limit - bank->start;
+ bank[1].size -= size_limit;
bank[1].start = vmalloc_limit;
bank[1].highmem = highmem = 1;
j++;
}
- bank->size = vmalloc_limit - bank->start;
+ bank->size = size_limit;
}
#else
/*
@@ -962,14 +966,13 @@ void __init sanity_check_meminfo(void)
* Check whether this memory bank would partially overlap
* the vmalloc area.
*/
- if (bank->start + bank->size > vmalloc_limit) {
- unsigned long newsize = vmalloc_limit - bank->start;
+ if (bank->size > size_limit) {
printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx "
"to -%.8llx (vmalloc region overlap).\n",
(unsigned long long)bank->start,
(unsigned long long)bank->start + bank->size - 1,
- (unsigned long long)bank->start + newsize - 1);
- bank->size = newsize;
+ (unsigned long long)bank->start + size_limit - 1);
+ bank->size = size_limit;
}
#endif
if (!bank->highmem && bank->start + bank->size > arm_lowmem_limit)
--
1.7.9.5
^ permalink raw reply related [flat|nested] 29+ messages in thread
* Re: [PATCH v3 02/17] ARM: add self test for runtime patch mechanism
2012-09-11 17:39 ` [PATCH v3 02/17] ARM: add self test for runtime patch mechanism Cyril Chemparathy
@ 2012-09-21 17:40 ` Nicolas Pitre
2012-09-21 22:25 ` Cyril Chemparathy
0 siblings, 1 reply; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 17:40 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> This patch adds basic sanity tests to ensure that the instruction patching
> results in valid instruction encodings. This is done by verifying the output
> of the patch process against a vector of assembler generated instructions at
> init time.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> ---
> arch/arm/Kconfig | 12 +++++++
> arch/arm/kernel/runtime-patch.c | 75 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 87 insertions(+)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 36de4ea..bfcd29d 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -207,6 +207,18 @@ config ARM_PATCH_PHYS_VIRT
> this feature (eg, building a kernel for a single machine) and
> you need to shrink the kernel to the minimal size.
>
> +config ARM_RUNTIME_PATCH_TEST
> + bool "Self test runtime patching mechanism" if ARM_RUNTIME_PATCH
> + default y
Here you probably want this instead:
bool "Self test runtime patching mechanism"
default y
depends on ARM_RUNTIME_PATCH
Otherwise ARM_RUNTIME_PATCH_TEST will be forced to y whenever
ARM_RUNTIME_PATCH is unset.  That doesn't currently affect the build,
since the containing .c file is only compiled when ARM_RUNTIME_PATCH is
set, but that is still not strictly right.
[...]
> @@ -189,5 +261,8 @@ void __init runtime_patch_kernel(void)
> const void *start = &__runtime_patch_table_begin;
> const void *end = &__runtime_patch_table_end;
>
> +#ifdef CONFIG_ARM_RUNTIME_PATCH_TEST
> + runtime_patch_test();
> +#endif
> BUG_ON(runtime_patch(start, end - start));
I think you should have runtime_patch_test() return a possible error
code and use BUG_ON() with it as well.
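Something along these lines, presumably (a sketch of the suggested call site,
assuming runtime_patch_test() is changed to return an int):
	#ifdef CONFIG_ARM_RUNTIME_PATCH_TEST
		BUG_ON(runtime_patch_test());
	#endif
		BUG_ON(runtime_patch(start, end - start));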
With those minor changes you can add...
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Nicolas
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 01/17] ARM: add mechanism for late code patching
2012-09-11 17:38 ` [PATCH v3 01/17] ARM: add mechanism for late code patching Cyril Chemparathy
@ 2012-09-21 18:09 ` Nicolas Pitre
2012-09-21 22:30 ` Cyril Chemparathy
0 siblings, 1 reply; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:09 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> The original phys_to_virt/virt_to_phys patching implementation relied on early
> patching prior to MMU initialization. On PAE systems running out of >4G
> address space, this would have entailed an additional round of patching after
> switching over to the high address space.
>
> The approach implemented here conceptually extends the original PHYS_OFFSET
> patching implementation with the introduction of "early" patch stubs. Early
> patch code is required to be functional out of the box, even before the patch
> is applied. This is implemented by inserting functional (but inefficient)
> load code into the .runtime.patch.code init section. Having functional code
> out of the box then allows us to defer the init time patch application until
> later in the init sequence.
>
> In addition to fitting better with our need for physical address-space
> switch-over, this implementation should be somewhat more extensible by virtue
> of its more readable (and hackable) C implementation. This should prove
> useful for other similar init time specialization needs, especially in light
> of our multi-platform kernel initiative.
>
> This code has been boot tested in both ARM and Thumb-2 modes on an ARMv7
> (Cortex-A8) device.
>
> Note: the obtuse use of stringified symbols in patch_stub() and
> early_patch_stub() is intentional. Theoretically this should have been
> accomplished with formal operands passed into the asm block, but this requires
> the use of the 'c' modifier for instantiating the long (e.g. .long %c0).
> However, the 'c' modifier has been found to ICE certain versions of GCC, and
> therefore we resort to stringified symbols here.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> Reviewed-by: Nicolas Pitre <nico@linaro.org>
I know I provided review before, but here's another nit I'd like fixed:
> --- /dev/null
> +++ b/arch/arm/include/asm/runtime-patch.h
> @@ -0,0 +1,208 @@
> +/*
> + * arch/arm/include/asm/runtime-patch.h
> + * Note: this file should not be included by non-asm/.h files
> + *
> + * Copyright 2012 Texas Instruments, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program. If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef __ASM_ARM_RUNTIME_PATCH_H
> +#define __ASM_ARM_RUNTIME_PATCH_H
> +
> +#include <linux/stringify.h>
> +
> +#ifndef __ASSEMBLY__
> +
> +#ifdef CONFIG_ARM_RUNTIME_PATCH
[...]
> +#else
> +
> +static inline int runtime_patch(const void *table, unsigned size)
> +{
> + return 0;
> +}
This is wrong.  If runtime_patch() is ever called when
CONFIG_ARM_RUNTIME_PATCH is not set, then it had better return an error and
not pretend it performed the requested action.  Returning -ENOSYS would
be appropriate.
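That is, a sketch of the suggested stub:
	static inline int runtime_patch(const void *table, unsigned size)
	{
		return -ENOSYS;
	}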
This is especially important in the following context:
> --- a/arch/arm/kernel/module.c
> +++ b/arch/arm/kernel/module.c
> @@ -321,6 +322,12 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
> if (s)
> fixup_pv_table((void *)s->sh_addr, s->sh_size);
> #endif
> + s = find_mod_section(hdr, sechdrs, ".runtime.patch.table");
> + if (s) {
> + err = runtime_patch((void *)s->sh_addr, s->sh_size);
> + if (err)
> + return err;
> + }
Despite the vermagic check, if a .runtime.patch.table section is ever
found in a module being loaded into a kernel with no support for it, then it
is better to return an error than to see the kernel crash later on when
the fallback stubs have been discarded.
Nicolas
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching
2012-09-11 17:39 ` [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching Cyril Chemparathy
@ 2012-09-21 18:15 ` Nicolas Pitre
0 siblings, 0 replies; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:15 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> This patch replaces the original physical offset patching implementation
> with one that uses the newly added patching framework.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Please also remove the MODULE_ARCH_VERMAGIC_P2V entirely from module.h
in this patch. This corresponds to the old patch table format which is
no longer supported once this patch is applied. The new mechanism is
covered by MODULE_ARCH_VERMAGIC_RT_PATCH already.
Once that is done, you may add...
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Nicolas
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size
2012-09-11 17:39 ` [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size Cyril Chemparathy
@ 2012-09-21 18:30 ` Nicolas Pitre
0 siblings, 0 replies; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:30 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon, Vitaly Andrianov
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> From: Vitaly Andrianov <vitalya@ti.com>
>
> This patch fixes the initrd setup code to use phys_addr_t instead of assuming
> 32-bit addressing. Without this we cannot boot on systems where initrd is
> located above the 4G physical address limit.
>
> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> Acked-by: Nicolas Pitre <nico@linaro.org>
Nit: please adjust the patch title. No need for phys_addr_t on the size.
> ---
> arch/arm/mm/init.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> index 1c5151a..87ee0ec 100644
> --- a/arch/arm/mm/init.c
> +++ b/arch/arm/mm/init.c
> @@ -36,12 +36,13 @@
>
> #include "mm.h"
>
> -static unsigned long phys_initrd_start __initdata = 0;
> +static phys_addr_t phys_initrd_start __initdata = 0;
> static unsigned long phys_initrd_size __initdata = 0;
>
> static int __init early_initrd(char *p)
> {
> - unsigned long start, size;
> + phys_addr_t start;
> + unsigned long size;
> char *endp;
>
> start = memparse(p, &endp);
> @@ -347,14 +348,14 @@ void __init arm_memblock_init(struct meminfo *mi, struct machine_desc *mdesc)
> #ifdef CONFIG_BLK_DEV_INITRD
> if (phys_initrd_size &&
> !memblock_is_region_memory(phys_initrd_start, phys_initrd_size)) {
> - pr_err("INITRD: 0x%08lx+0x%08lx is not a memory region - disabling initrd\n",
> - phys_initrd_start, phys_initrd_size);
> + pr_err("INITRD: 0x%08llx+0x%08lx is not a memory region - disabling initrd\n",
> + (u64)phys_initrd_start, phys_initrd_size);
> phys_initrd_start = phys_initrd_size = 0;
> }
> if (phys_initrd_size &&
> memblock_is_region_reserved(phys_initrd_start, phys_initrd_size)) {
> - pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region - disabling initrd\n",
> - phys_initrd_start, phys_initrd_size);
> + pr_err("INITRD: 0x%08llx+0x%08lx overlaps in-use memory region - disabling initrd\n",
> + (u64)phys_initrd_start, phys_initrd_size);
> phys_initrd_start = phys_initrd_size = 0;
> }
> if (phys_initrd_size) {
> --
> 1.7.9.5
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm()
2012-09-11 17:39 ` [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm() Cyril Chemparathy
@ 2012-09-21 18:33 ` Nicolas Pitre
2012-09-21 18:41 ` Russell King - ARM Linux
0 siblings, 1 reply; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:33 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon, Vitaly Andrianov
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> This patch modifies the switch_mm() processor functions to use phys_addr_t.
> On LPAE systems, we now honor the upper 32-bits of the physical address that
> is being passed in, and program these into TTBR as expected.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
> ---
> arch/arm/include/asm/proc-fns.h | 4 ++--
> arch/arm/mm/proc-v7-3level.S | 17 +++++++++++++----
> 2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> index f3628fb..75b5f14 100644
> --- a/arch/arm/include/asm/proc-fns.h
> +++ b/arch/arm/include/asm/proc-fns.h
> @@ -60,7 +60,7 @@ extern struct processor {
> /*
> * Set the page table
> */
> - void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *mm);
> + void (*switch_mm)(phys_addr_t pgd_phys, struct mm_struct *mm);
> /*
> * Set a possibly extended PTE. Non-extended PTEs should
> * ignore 'ext'.
> @@ -82,7 +82,7 @@ extern void cpu_proc_init(void);
> extern void cpu_proc_fin(void);
> extern int cpu_do_idle(void);
> extern void cpu_dcache_clean_area(void *, int);
> -extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> +extern void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
> #ifdef CONFIG_ARM_LPAE
> extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte);
> #else
> diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
> index 8de0f1d..c4f4251 100644
> --- a/arch/arm/mm/proc-v7-3level.S
> +++ b/arch/arm/mm/proc-v7-3level.S
> @@ -39,6 +39,14 @@
> #define TTB_FLAGS_SMP (TTB_IRGN_WBWA|TTB_S|TTB_RGN_OC_WBWA)
> #define PMD_FLAGS_SMP (PMD_SECT_WBWA|PMD_SECT_S)
>
> +#ifndef __ARMEB__
> +# define rpgdl r0
> +# define rpgdh r1
> +#else
> +# define rpgdl r1
> +# define rpgdh r0
> +#endif
> +
> /*
> * cpu_v7_switch_mm(pgd_phys, tsk)
> *
> @@ -47,10 +55,11 @@
> */
> ENTRY(cpu_v7_switch_mm)
> #ifdef CONFIG_MMU
> - ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id
> - and r3, r1, #0xff
> - mov r3, r3, lsl #(48 - 32) @ ASID
> - mcrr p15, 0, r0, r3, c2 @ set TTB 0
> + ldr r2, [r2, #MM_CONTEXT_ID] @ get mm->context.id
> + and r2, r2, #0xff
> + mov r2, r2, lsl #(48 - 32) @ ASID
> + orr rpgdh, rpgdh, r2 @ upper 32-bits of pgd phys
> + mcrr p15, 0, rpgdl, rpgdh, c2 @ set TTB 0
> isb
> #endif
> mov pc, lr
> --
> 1.7.9.5
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm()
2012-09-21 18:33 ` Nicolas Pitre
@ 2012-09-21 18:41 ` Russell King - ARM Linux
2012-09-21 18:53 ` Nicolas Pitre
0 siblings, 1 reply; 29+ messages in thread
From: Russell King - ARM Linux @ 2012-09-21 18:41 UTC (permalink / raw)
To: Nicolas Pitre
Cc: Cyril Chemparathy, linux-kernel, linux-arm-kernel, arnd,
catalin.marinas, grant.likely, will.deacon, Vitaly Andrianov
On Fri, Sep 21, 2012 at 02:33:43PM -0400, Nicolas Pitre wrote:
> On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
>
> > This patch modifies the switch_mm() processor functions to use phys_addr_t.
> > On LPAE systems, we now honor the upper 32-bits of the physical address that
> > is being passed in, and program these into TTBR as expected.
> >
> > Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> > Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
>
> Reviewed-by: Nicolas Pitre <nico@linaro.org>
Err... you may have reviewed it but did you read it?
> > diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> > index f3628fb..75b5f14 100644
> > --- a/arch/arm/include/asm/proc-fns.h
> > +++ b/arch/arm/include/asm/proc-fns.h
> > @@ -60,7 +60,7 @@ extern struct processor {
> > /*
> > * Set the page table
> > */
> > - void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *mm);
> > + void (*switch_mm)(phys_addr_t pgd_phys, struct mm_struct *mm);
> > /*
> > * Set a possibly extended PTE. Non-extended PTEs should
> > * ignore 'ext'.
> > @@ -82,7 +82,7 @@ extern void cpu_proc_init(void);
> > extern void cpu_proc_fin(void);
> > extern int cpu_do_idle(void);
> > extern void cpu_dcache_clean_area(void *, int);
> > -extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> > +extern void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
phys_addr_t can be either 64-bit or 32-bit.  Which it ends up being depends on
a configuration option.  If it's 32-bit, then mm is in r1, otherwise it
is in r2...
> > #ifdef CONFIG_MMU
> > - ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id
> > - and r3, r1, #0xff
> > - mov r3, r3, lsl #(48 - 32) @ ASID
> > - mcrr p15, 0, r0, r3, c2 @ set TTB 0
> > + ldr r2, [r2, #MM_CONTEXT_ID] @ get mm->context.id
which breaks this when phys_addr_t is 32-bit.
Doing it this way means we have to have similar conditionals in other
files which make use of the 'mm' argument.
Moving the 'mm' argument into arg0 would mean that it stays in r0, but
then the pgd_phys argument is passed in either r1 when 32-bit, or
r2,r3 when 64-bit on EABI and r1,r2 on OABI.  That's hardly desirable
behaviour either, because it all too easily allows bugs to creep in.
The easiest solution would be to just change pgd_phys to be uint64_t
and be done with it.  Then you always know in assembly what register
values are going to be passed in (except for the LE/BE issue with r0/r1).
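A minimal sketch of that suggestion (assuming the switch_mm member of struct
processor is changed the same way):
	/*
	 * With a fixed-width 64-bit pgd_phys, the register assignment no
	 * longer depends on CONFIG_ARM_LPAE: pgd_phys always arrives in r0/r1
	 * and mm in r2 (modulo the r0/r1 ordering difference on big-endian).
	 */
	extern void cpu_do_switch_mm(u64 pgd_phys, struct mm_struct *mm);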
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 17/17] ARM: mm: clean up membank size limit checks
2012-09-11 17:39 ` [PATCH v3 17/17] ARM: mm: clean up membank size limit checks Cyril Chemparathy
@ 2012-09-21 18:42 ` Nicolas Pitre
2012-09-21 22:49 ` Cyril Chemparathy
0 siblings, 1 reply; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:42 UTC (permalink / raw)
To: Cyril Chemparathy
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon, Vitaly Andrianov
On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> This patch cleans up the highmem sanity check code by simplifying the range
> checks with a pre-calculated size_limit. This patch should otherwise have no
> functional impact on behavior.
>
> This patch also removes a redundant (bank->start < vmalloc_limit) check, since
> this is already covered by the !highmem condition.
>
> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
> ---
> arch/arm/mm/mmu.c | 19 +++++++++++--------
> 1 file changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
> index 6c35483..50d9df5 100644
> --- a/arch/arm/mm/mmu.c
> +++ b/arch/arm/mm/mmu.c
> @@ -917,10 +917,15 @@ void __init sanity_check_meminfo(void)
>
> for (i = 0, j = 0; i < meminfo.nr_banks; i++) {
> struct membank *bank = &meminfo.bank[j];
> + phys_addr_t size_limit;
> +
> *bank = meminfo.bank[i];
> + size_limit = bank->size;
>
> if (bank->start >= vmalloc_limit)
> highmem = 1;
> + else
> + size_limit = vmalloc_limit - bank->start;
>
> bank->highmem = highmem;
>
> @@ -929,8 +934,7 @@ void __init sanity_check_meminfo(void)
> * Split those memory banks which are partially overlapping
> * the vmalloc area greatly simplifying things later.
> */
> - if (!highmem && bank->start < vmalloc_limit &&
> - bank->size > vmalloc_limit - bank->start) {
> + if (!highmem && bank->size > size_limit) {
> if (meminfo.nr_banks >= NR_BANKS) {
> printk(KERN_CRIT "NR_BANKS too low, "
> "ignoring high memory\n");
> @@ -939,12 +943,12 @@ void __init sanity_check_meminfo(void)
> (meminfo.nr_banks - i) * sizeof(*bank));
> meminfo.nr_banks++;
> i++;
> - bank[1].size -= vmalloc_limit - bank->start;
> + bank[1].size -= size_limit;
> bank[1].start = vmalloc_limit;
> bank[1].highmem = highmem = 1;
> j++;
> }
> - bank->size = vmalloc_limit - bank->start;
> + bank->size = size_limit;
> }
> #else
> /*
> @@ -962,14 +966,13 @@ void __init sanity_check_meminfo(void)
> * Check whether this memory bank would partially overlap
> * the vmalloc area.
> */
> - if (bank->start + bank->size > vmalloc_limit)
> - unsigned long newsize = vmalloc_limit - bank->start;
> + if (bank->size > size_limit) {
> printk(KERN_NOTICE "Truncating RAM at %.8llx-%.8llx "
> "to -%.8llx (vmalloc region overlap).\n",
> (unsigned long long)bank->start,
> (unsigned long long)bank->start + bank->size - 1,
> - (unsigned long long)bank->start + newsize - 1);
> - bank->size = newsize;
> + (unsigned long long)bank->start + size_limit - 1);
> + bank->size = size_limit;
> }
> #endif
> if (!bank->highmem && bank->start + bank->size > arm_lowmem_limit)
> --
> 1.7.9.5
>
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm()
2012-09-21 18:41 ` Russell King - ARM Linux
@ 2012-09-21 18:53 ` Nicolas Pitre
0 siblings, 0 replies; 29+ messages in thread
From: Nicolas Pitre @ 2012-09-21 18:53 UTC (permalink / raw)
To: Russell King - ARM Linux
Cc: Cyril Chemparathy, linux-kernel, linux-arm-kernel, arnd,
catalin.marinas, grant.likely, will.deacon, Vitaly Andrianov
On Fri, 21 Sep 2012, Russell King - ARM Linux wrote:
> On Fri, Sep 21, 2012 at 02:33:43PM -0400, Nicolas Pitre wrote:
> > On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
> >
> > > This patch modifies the switch_mm() processor functions to use phys_addr_t.
> > > On LPAE systems, we now honor the upper 32-bits of the physical address that
> > > is being passed in, and program these into TTBR as expected.
> > >
> > > Signed-off-by: Cyril Chemparathy <cyril@ti.com>
> > > Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
> >
> > Reviewed-by: Nicolas Pitre <nico@linaro.org>
>
> Err... you may have reviewed it but did you read it?
Sure I did.
> > > diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h
> > > index f3628fb..75b5f14 100644
> > > --- a/arch/arm/include/asm/proc-fns.h
> > > +++ b/arch/arm/include/asm/proc-fns.h
> > > @@ -60,7 +60,7 @@ extern struct processor {
> > > /*
> > > * Set the page table
> > > */
> > > - void (*switch_mm)(unsigned long pgd_phys, struct mm_struct *mm);
> > > + void (*switch_mm)(phys_addr_t pgd_phys, struct mm_struct *mm);
> > > /*
> > > * Set a possibly extended PTE. Non-extended PTEs should
> > > * ignore 'ext'.
> > > @@ -82,7 +82,7 @@ extern void cpu_proc_init(void);
> > > extern void cpu_proc_fin(void);
> > > extern int cpu_do_idle(void);
> > > extern void cpu_dcache_clean_area(void *, int);
> > > -extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm);
> > > +extern void cpu_do_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm);
>
> phys_addr_t can be either 64-bit or 32-bit. Which it ends up depends on
> a configuration option. If it's 32-bit, then mm is in r1, otherwise it
> is in r2...
Right. And that configuration option is CONFIG_ARM_LPAE.
> > > #ifdef CONFIG_MMU
> > > - ldr r1, [r1, #MM_CONTEXT_ID] @ get mm->context.id
> > > - and r3, r1, #0xff
> > > - mov r3, r3, lsl #(48 - 32) @ ASID
> > > - mcrr p15, 0, r0, r3, c2 @ set TTB 0
> > > + ldr r2, [r2, #MM_CONTEXT_ID] @ get mm->context.id
>
> which breaks this when phys_addr_t is 32-bit.
... which can't happen in this case because this code is only compiled
when CONFIG_ARM_LPAE=y.
> Doing it this way means we have to have similar conditionals in other
> files which make use of the 'mm' argument.
No, because none of the other files may ever be used when
CONFIG_ARM_LPAE=y.
Nicolas
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 02/17] ARM: add self test for runtime patch mechanism
2012-09-21 17:40 ` Nicolas Pitre
@ 2012-09-21 22:25 ` Cyril Chemparathy
0 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-21 22:25 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon
On 9/21/2012 1:40 PM, Nicolas Pitre wrote:
> On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
>
>> This patch adds basic sanity tests to ensure that the instruction patching
>> results in valid instruction encodings. This is done by verifying the output
>> of the patch process against a vector of assembler generated instructions at
>> init time.
>>
>> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
>> ---
>> arch/arm/Kconfig | 12 +++++++
>> arch/arm/kernel/runtime-patch.c | 75 +++++++++++++++++++++++++++++++++++++++
>> 2 files changed, 87 insertions(+)
>>
>> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
>> index 36de4ea..bfcd29d 100644
>> --- a/arch/arm/Kconfig
>> +++ b/arch/arm/Kconfig
>> @@ -207,6 +207,18 @@ config ARM_PATCH_PHYS_VIRT
>> this feature (eg, building a kernel for a single machine) and
>> you need to shrink the kernel to the minimal size.
>>
>> +config ARM_RUNTIME_PATCH_TEST
>> + bool "Self test runtime patching mechanism" if ARM_RUNTIME_PATCH
>> + default y
>
> Here you probably want this instead:
>
> bool "Self test runtime patching mechanism"
> default y
> depends on ARM_RUNTIME_PATCH
>
> Otherwise ARM_RUNTIME_PATCH_TEST will be forced to y whenever
> ARM_RUNTIME_PATCH is unset. That doesn't currently affect the build
> since the containing .c file is only compiled when ARM_RUNTIME_PATCH is
> set but that is still not strictly right.
>
Indeed. Excellent. Thanks.
> [...]
>> @@ -189,5 +261,8 @@ void __init runtime_patch_kernel(void)
>> const void *start = &__runtime_patch_table_begin;
>> const void *end = &__runtime_patch_table_end;
>>
>> +#ifdef CONFIG_ARM_RUNTIME_PATCH_TEST
>> + runtime_patch_test();
>> +#endif
>> BUG_ON(runtime_patch(start, end - start));
>
> I think you shoulld have runtime_patch_test() return a possible error
> code and use BUG_ON() with it as well.
>
Sure. Will do in v4.
> With those minor changes you can add...
>
> Reviewed-by: Nicolas Pitre <nico@linaro.org>
>
Thanks.
--
- Cyril
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 01/17] ARM: add mechanism for late code patching
2012-09-21 18:09 ` Nicolas Pitre
@ 2012-09-21 22:30 ` Cyril Chemparathy
0 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-21 22:30 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon
On 9/21/2012 2:09 PM, Nicolas Pitre wrote:
> On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
>
>> The original phys_to_virt/virt_to_phys patching implementation relied on early
>> patching prior to MMU initialization. On PAE systems running out of >4G
>> address space, this would have entailed an additional round of patching after
>> switching over to the high address space.
>>
>> The approach implemented here conceptually extends the original PHYS_OFFSET
>> patching implementation with the introduction of "early" patch stubs. Early
>> patch code is required to be functional out of the box, even before the patch
>> is applied. This is implemented by inserting functional (but inefficient)
>> load code into the .runtime.patch.code init section. Having functional code
>> out of the box then allows us to defer the init time patch application until
>> later in the init sequence.
>>
>> In addition to fitting better with our need for physical address-space
>> switch-over, this implementation should be somewhat more extensible by virtue
>> of its more readable (and hackable) C implementation. This should prove
>> useful for other similar init time specialization needs, especially in light
>> of our multi-platform kernel initiative.
>>
>> This code has been boot tested in both ARM and Thumb-2 modes on an ARMv7
>> (Cortex-A8) device.
>>
>> Note: the obtuse use of stringified symbols in patch_stub() and
>> early_patch_stub() is intentional. Theoretically this should have been
>> accomplished with formal operands passed into the asm block, but this requires
>> the use of the 'c' modifier for instantiating the long (e.g. .long %c0).
>> However, the 'c' modifier has been found to ICE certain versions of GCC, and
>> therefore we resort to stringified symbols here.
>>
>> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
>> Reviewed-by: Nicolas Pitre <nico@linaro.org>
>
> I know I provided review before, but here's another nit I'd like fixed:
>
Fixed for v4. Thanks.
>> --- /dev/null
>> +++ b/arch/arm/include/asm/runtime-patch.h
>> @@ -0,0 +1,208 @@
>> +/*
>> + * arch/arm/include/asm/runtime-patch.h
>> + * Note: this file should not be included by non-asm/.h files
>> + *
>> + * Copyright 2012 Texas Instruments, Inc.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program. If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +#ifndef __ASM_ARM_RUNTIME_PATCH_H
>> +#define __ASM_ARM_RUNTIME_PATCH_H
>> +
>> +#include <linux/stringify.h>
>> +
>> +#ifndef __ASSEMBLY__
>> +
>> +#ifdef CONFIG_ARM_RUNTIME_PATCH
>
> [...]
>
>> +#else
>> +
>> +static inline int runtime_patch(const void *table, unsigned size)
>> +{
>> + return 0;
>> +}
>
> This is wrong. If runtime_patch() is ever called when
> CONFIG_ARM_RUNTIME_PATCH is not set, then i'd better return an error and
> not pretend it performed the requested action. Returning -ENOSYS would
> be appropriate.
>
> This is especially important in the following context:
>
>> --- a/arch/arm/kernel/module.c
>> +++ b/arch/arm/kernel/module.c
>> @@ -321,6 +322,12 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
>> if (s)
>> fixup_pv_table((void *)s->sh_addr, s->sh_size);
>> #endif
>> + s = find_mod_section(hdr, sechdrs, ".runtime.patch.table");
>> + if (s) {
>> + err = runtime_patch((void *)s->sh_addr, s->sh_size);
>> + if (err)
>> + return err;
>> + }
>
> Despite the vermagic check, If ever a .runtime.patch.table section is
> found in a module to be loaded in a kernel with no support for it then
> it is best to return an error than see the kernel crashing later on when
> the fallback stubs have been discarded.
>
>
> Nicolas
>
--
Thanks
- Cyril
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: [PATCH v3 17/17] ARM: mm: clean up membank size limit checks
2012-09-21 18:42 ` Nicolas Pitre
@ 2012-09-21 22:49 ` Cyril Chemparathy
0 siblings, 0 replies; 29+ messages in thread
From: Cyril Chemparathy @ 2012-09-21 22:49 UTC (permalink / raw)
To: Nicolas Pitre
Cc: linux-kernel, linux-arm-kernel, arnd, catalin.marinas,
grant.likely, linux, will.deacon, Vitaly Andrianov
On 9/21/2012 2:42 PM, Nicolas Pitre wrote:
> On Tue, 11 Sep 2012, Cyril Chemparathy wrote:
>
>> This patch cleans up the highmem sanity check code by simplifying the range
>> checks with a pre-calculated size_limit. This patch should otherwise have no
>> functional impact on behavior.
>>
>> This patch also removes a redundant (bank->start < vmalloc_limit) check, since
>> this is already covered by the !highmem condition.
>>
>> Signed-off-by: Cyril Chemparathy <cyril@ti.com>
>> Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
>
> Acked-by: Nicolas Pitre <nico@linaro.org>
>
Thanks, Nico.
Could you please take another peek at patch 05/17 (support 64-bit
virt_to_phys patching)? You had reviewed it in an earlier posting, but
I've had to tweak the code to optimize the compiler generated inline
expansion code.
Patch 14/17 (accomodate >32-bit addresses for page table base) could use
some attention as well.  The same goes for 12/17 (define
ARCH_LOW_ADDRESS_LIMIT for bootmem), if you could.
--
Thanks
- Cyril
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread, other threads:[~2012-09-21 22:49 UTC | newest]
Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-09-11 17:38 [PATCH v3 00/17] LPAE fixes and extensions for Keystone Cyril Chemparathy
2012-09-11 17:38 ` [PATCH v3 01/17] ARM: add mechanism for late code patching Cyril Chemparathy
2012-09-21 18:09 ` Nicolas Pitre
2012-09-21 22:30 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 02/17] ARM: add self test for runtime patch mechanism Cyril Chemparathy
2012-09-21 17:40 ` Nicolas Pitre
2012-09-21 22:25 ` Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 03/17] ARM: use late patch framework for phys-virt patching Cyril Chemparathy
2012-09-21 18:15 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 04/17] ARM: LPAE: use phys_addr_t on virt <--> phys conversion Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 05/17] ARM: LPAE: support 64-bit virt_to_phys patching Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 06/17] ARM: LPAE: use signed arithmetic for mask definitions Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 07/17] ARM: LPAE: use phys_addr_t in alloc_init_pud() Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 08/17] ARM: LPAE: use phys_addr_t in free_memmap() Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 09/17] ARM: LPAE: use phys_addr_t for initrd location and size Cyril Chemparathy
2012-09-21 18:30 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 10/17] ARM: LPAE: use phys_addr_t in switch_mm() Cyril Chemparathy
2012-09-21 18:33 ` Nicolas Pitre
2012-09-21 18:41 ` Russell King - ARM Linux
2012-09-21 18:53 ` Nicolas Pitre
2012-09-11 17:39 ` [PATCH v3 11/17] ARM: LPAE: use 64-bit accessors for TTBR registers Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 12/17] ARM: LPAE: define ARCH_LOW_ADDRESS_LIMIT for bootmem Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 13/17] ARM: LPAE: factor out T1SZ and TTBR1 computations Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 14/17] ARM: LPAE: accomodate >32-bit addresses for page table base Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 15/17] ARM: mm: use physical addresses in highmem sanity checks Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 16/17] ARM: mm: cleanup checks for membank overlap with vmalloc area Cyril Chemparathy
2012-09-11 17:39 ` [PATCH v3 17/17] ARM: mm: clean up membank size limit checks Cyril Chemparathy
2012-09-21 18:42 ` Nicolas Pitre
2012-09-21 22:49 ` Cyril Chemparathy