linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/5] arm64: Patching branches for fun and profit
@ 2015-03-19 13:59 Marc Zyngier
  2015-03-19 13:59 ` [PATCH 1/5] arm64: insn: Add aarch64_insn_decode_immediate Marc Zyngier
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

The current alternative instruction framework is not kind to branches,
potentially leading to all kinds of hacks in the code that uses
alternatives. This series expands it to deal with immediate branches
(for a start), and applies it to the VGIC world switch.

Patch #1 adds the required infrastructure to extract the immediate
from an instruction.

Patch #2 allows the use of an immediate b or bl instruction as an
alternative, computing the branch target as the instruction is being
patched in.

Patch #3 defines a feature framework that works exactly like the CPU
errata infrastructure (and shares a lot with it).

Patch #4 adds detection of the system register GICv3 CPU interface.

Patch #5 enables dynamic patching of the KVM code.

This has been tested with GICv3 on a FastModel.

Marc Zyngier (5):
  arm64: insn: Add aarch64_insn_decode_immediate
  arm64: alternative: Allow immediate branch as alternative instruction
  arm64: Extract feature parsing code from cpu_errata.c
  arm64: alternative: Introduce feature for GICv3 CPU interface
  arm64: KVM: Switch vgic save/restore to alternative_insn

 arch/arm/include/asm/kvm_host.h     |  5 ---
 arch/arm64/include/asm/cpufeature.h | 23 ++++++++++-
 arch/arm64/include/asm/insn.h       |  1 +
 arch/arm64/include/asm/kvm_host.h   | 23 -----------
 arch/arm64/kernel/Makefile          |  2 +-
 arch/arm64/kernel/alternative.c     | 55 ++++++++++++++++++++++++-
 arch/arm64/kernel/asm-offsets.c     |  1 -
 arch/arm64/kernel/cpu_errata.c      | 36 ++---------------
 arch/arm64/kernel/cpufeature.c      | 63 +++++++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c         |  1 +
 arch/arm64/kernel/insn.c            | 81 +++++++++++++++++++++++++++++--------
 arch/arm64/kvm/hyp.S                | 18 ++-------
 virt/kvm/arm/vgic.c                 |  3 --
 13 files changed, 214 insertions(+), 98 deletions(-)
 create mode 100644 arch/arm64/kernel/cpufeature.c

-- 
2.1.4

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/5] arm64: insn: Add aarch64_insn_decode_immediate
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
@ 2015-03-19 13:59 ` Marc Zyngier
  2015-03-19 13:59 ` [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction Marc Zyngier
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

Patching an instruction sometimes requires extracting the immediate
field from this instruction. To facilitate this, and avoid
potential duplication of code, add aarch64_insn_decode_immediate
as the reciprocal to aarch64_insn_encode_immediate.
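
As a rough illustration of that reciprocity, here is a user-space sketch
(helper names are local to this example, not the kernel API) of an
encode/decode pair for a 26-bit immediate field such as the one used by
b/bl, which should round-trip any in-range value:

```c
#include <stdint.h>

/* Toy model of a 26-bit immediate field at shift 0, as used by b/bl.
 * encode_imm26()/decode_imm26() are names invented for this sketch. */
static uint32_t encode_imm26(uint32_t insn, uint64_t imm)
{
	const uint32_t mask = (1u << 26) - 1;

	/* clear the field, then insert the (truncated) immediate */
	return (insn & ~mask) | ((uint32_t)imm & mask);
}

static uint64_t decode_imm26(uint32_t insn)
{
	const uint32_t mask = (1u << 26) - 1;

	return insn & mask;
}
```

With these, `decode_imm26(encode_imm26(insn, imm)) == imm` for any
26-bit immediate, which is the property the new kernel helper provides
for each supported field type.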

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/insn.h |  1 +
 arch/arm64/kernel/insn.c      | 81 ++++++++++++++++++++++++++++++++++---------
 2 files changed, 66 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index d2f4942..f81b328 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -285,6 +285,7 @@ bool aarch64_insn_is_nop(u32 insn);
 int aarch64_insn_read(void *addr, u32 *insnp);
 int aarch64_insn_write(void *addr, u32 insn);
 enum aarch64_insn_encoding_class aarch64_get_insn_class(u32 insn);
+u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 				  u32 insn, u64 imm);
 u32 aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr,
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index c8eca88..9249020 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -265,23 +265,13 @@ int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
 	return aarch64_insn_patch_text_sync(addrs, insns, cnt);
 }
 
-u32 __kprobes aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
-				  u32 insn, u64 imm)
+static int __kprobes aarch64_get_imm_shift_mask(enum aarch64_insn_imm_type type,
+						u32 *maskp, int *shiftp)
 {
-	u32 immlo, immhi, lomask, himask, mask;
+	u32 mask;
 	int shift;
 
 	switch (type) {
-	case AARCH64_INSN_IMM_ADR:
-		lomask = 0x3;
-		himask = 0x7ffff;
-		immlo = imm & lomask;
-		imm >>= 2;
-		immhi = imm & himask;
-		imm = (immlo << 24) | (immhi);
-		mask = (lomask << 24) | (himask);
-		shift = 5;
-		break;
 	case AARCH64_INSN_IMM_26:
 		mask = BIT(26) - 1;
 		shift = 0;
@@ -320,9 +310,68 @@ u32 __kprobes aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 		shift = 16;
 		break;
 	default:
-		pr_err("aarch64_insn_encode_immediate: unknown immediate encoding %d\n",
-			type);
-		return 0;
+		return -EINVAL;
+	}
+
+	*maskp = mask;
+	*shiftp = shift;
+
+	return 0;
+}
+
+#define ADR_IMM_HILOSPLIT	2
+#define ADR_IMM_SIZE		SZ_2M
+#define ADR_IMM_LOMASK		((1 << ADR_IMM_HILOSPLIT) - 1)
+#define ADR_IMM_HIMASK		((ADR_IMM_SIZE >> ADR_IMM_HILOSPLIT) - 1)
+#define ADR_IMM_LOSHIFT		29
+#define ADR_IMM_HISHIFT		5
+
+u64 aarch64_insn_decode_immediate(enum aarch64_insn_imm_type type, u32 insn)
+{
+	u32 immlo, immhi, mask;
+	int shift;
+
+	switch (type) {
+	case AARCH64_INSN_IMM_ADR:
+		shift = 0;
+		immlo = (insn >> ADR_IMM_LOSHIFT) & ADR_IMM_LOMASK;
+		immhi = (insn >> ADR_IMM_HISHIFT) & ADR_IMM_HIMASK;
+		insn = (immhi << ADR_IMM_HILOSPLIT) | immlo;
+		mask = ADR_IMM_SIZE - 1;
+		break;
+	default:
+		if (aarch64_get_imm_shift_mask(type, &mask, &shift) < 0) {
+			pr_err("aarch64_insn_decode_immediate: unknown immediate encoding %d\n",
+			       type);
+			return 0;
+		}
+	}
+
+	return (insn >> shift) & mask;
+}
+
+u32 __kprobes aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
+				  u32 insn, u64 imm)
+{
+	u32 immlo, immhi, mask;
+	int shift;
+
+	switch (type) {
+	case AARCH64_INSN_IMM_ADR:
+		shift = 0;
+		immlo = (imm & ADR_IMM_LOMASK) << ADR_IMM_LOSHIFT;
+		imm >>= ADR_IMM_HILOSPLIT;
+		immhi = (imm & ADR_IMM_HIMASK) << ADR_IMM_HISHIFT;
+		imm = immlo | immhi;
+		mask = ((ADR_IMM_LOMASK << ADR_IMM_LOSHIFT) |
+			(ADR_IMM_HIMASK << ADR_IMM_HISHIFT));
+		break;
+	default:
+		if (aarch64_get_imm_shift_mask(type, &mask, &shift) < 0) {
+			pr_err("aarch64_insn_encode_immediate: unknown immediate encoding %d\n",
+			       type);
+			return 0;
+		}
 	}
 
 	/* Update the immediate field. */
-- 
2.1.4


* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
  2015-03-19 13:59 ` [PATCH 1/5] arm64: insn: Add aarch64_insn_decode_immediate Marc Zyngier
@ 2015-03-19 13:59 ` Marc Zyngier
  2015-03-26 22:03   ` Will Deacon
  2015-03-19 13:59 ` [PATCH 3/5] arm64: Extract feature parsing code from cpu_errata.c Marc Zyngier
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

Since all immediate branches are PC-relative on AArch64, these
instructions cannot be used as an alternative with the simplistic
approach we currently have (the immediate has been computed from
the .altinstr_replacement section, and ends up being completely off
if we insert it directly).

This patch handles the b and bl instructions in a different way,
using the insn framework to recompute the immediate, and generate
the right displacement.
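
The displacement arithmetic can be sketched in user space as follows
(assuming a b/bl whose target stays in range; helper names are local to
this example, not the kernel's):

```c
#include <stdint.h>

/* Decode the imm26 field of a b/bl as a signed byte offset. */
static int32_t branch_byte_offset(uint32_t insn)
{
	uint32_t imm = insn & ((1u << 26) - 1);

	/* sign-extend the 26-bit word offset, then scale words to bytes */
	return ((int32_t)(imm << 6)) >> 4;
}

/* Recompute the branch so that, executed at new_pc instead of alt_pc,
 * it still reaches the same absolute target. */
static uint32_t retarget_branch(uint32_t insn, uint64_t alt_pc,
				uint64_t new_pc)
{
	uint64_t target = alt_pc + branch_byte_offset(insn);
	uint64_t new_imm = ((target - new_pc) >> 2) & ((1u << 26) - 1);

	return (insn & ~((1u << 26) - 1)) | (uint32_t)new_imm;
}
```

For example, a `b` with offset +8 assembled at 0x1000 and patched in at
0x2000 comes out with offset -0xff8, so both encodings branch to 0x1008.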

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/kernel/alternative.c | 55 +++++++++++++++++++++++++++++++++++++++--
 1 file changed, 53 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index ad7821d..ac13c58 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -24,6 +24,7 @@
 #include <asm/cacheflush.h>
 #include <asm/alternative.h>
 #include <asm/cpufeature.h>
+#include <asm/insn.h>
 #include <linux/stop_machine.h>
 
 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
@@ -33,6 +34,48 @@ struct alt_region {
 	struct alt_instr *end;
 };
 
+/*
+ * Decode the imm field of a b/bl instruction, and return the byte
+ * offset as a signed value (so it can be used when computing a new
+ * branch target).
+ */
+static s32 get_branch_offset(u32 insn)
+{
+	s32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);
+
+	/* sign-extend the immediate before turning it into a byte offset */
+	return (imm << 6) >> 4;
+}
+
+static u32 get_alt_insn(u8 *insnptr, u8 *altinsnptr)
+{
+	u32 insn;
+
+	aarch64_insn_read(altinsnptr, &insn);
+
+	/* Stop the world on instructions we don't support... */
+	BUG_ON(aarch64_insn_is_cbz(insn));
+	BUG_ON(aarch64_insn_is_cbnz(insn));
+	BUG_ON(aarch64_insn_is_bcond(insn));
+	/* ... and there is probably more. */
+
+	if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) {
+		enum aarch64_insn_branch_type type;
+		unsigned long target;
+
+		if (aarch64_insn_is_b(insn))
+			type = AARCH64_INSN_BRANCH_NOLINK;
+		else
+			type = AARCH64_INSN_BRANCH_LINK;
+
+		target = (unsigned long)altinsnptr + get_branch_offset(insn);
+		insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr,
+						   target, type);
+	}
+
+	return insn;
+}
+
 static int __apply_alternatives(void *alt_region)
 {
 	struct alt_instr *alt;
@@ -40,16 +83,24 @@ static int __apply_alternatives(void *alt_region)
 	u8 *origptr, *replptr;
 
 	for (alt = region->begin; alt < region->end; alt++) {
+		u32 insn;
+		int i;
+
 		if (!cpus_have_cap(alt->cpufeature))
 			continue;
 
-		BUG_ON(alt->alt_len > alt->orig_len);
+		BUG_ON(alt->alt_len != alt->orig_len);
 
 		pr_info_once("patching kernel code\n");
 
 		origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
 		replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
-		memcpy(origptr, replptr, alt->alt_len);
+
+		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
+			insn = get_alt_insn(origptr + i, replptr + i);
+			*(u32 *)(origptr + i) = insn;
+		}
+
 		flush_icache_range((uintptr_t)origptr,
 				   (uintptr_t)(origptr + alt->alt_len));
 	}
-- 
2.1.4


* [PATCH 3/5] arm64: Extract feature parsing code from cpu_errata.c
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
  2015-03-19 13:59 ` [PATCH 1/5] arm64: insn: Add aarch64_insn_decode_immediate Marc Zyngier
  2015-03-19 13:59 ` [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction Marc Zyngier
@ 2015-03-19 13:59 ` Marc Zyngier
  2015-03-19 13:59 ` [PATCH 4/5] arm64: alternative: Introduce feature for GICv3 CPU interface Marc Zyngier
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

As we detect more architectural features at runtime, it makes
sense to reuse the existing framework whilst avoiding calling
a feature an erratum...

This patch extracts the core capability parsing, moves it into
a new file (cpufeature.c), and lets the CPU errata detection code
use it.
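
The shared loop boils down to the shape below, modelled here in user
space (simplified types, a stand-in counter instead of the capability
bitmap, and a made-up demo entry; the real check_cpu_capabilities in the
patch differs in logging and state):

```c
#include <stdbool.h>
#include <stdio.h>

struct capability {
	const char *desc;
	bool (*matches)(const struct capability *);
};

static int ncaps_set;

/* Walk a desc-terminated table, run each entry's matches() callback,
 * and record anything that fires: the same shape now shared by the
 * errata and feature tables. */
static void check_capabilities(const struct capability *caps,
			       const char *info)
{
	int i;

	for (i = 0; caps[i].desc; i++) {
		if (!caps[i].matches(&caps[i]))
			continue;

		printf("%s %s\n", info, caps[i].desc);
		ncaps_set++;
	}
}

/* Hypothetical detection callback, for illustration only. */
static bool has_feature_x(const struct capability *cap)
{
	(void)cap;
	return true;
}

static const struct capability demo_caps[] = {
	{ "made-up feature X", has_feature_x },
	{ NULL, NULL },
};
```

Errata and features then become two tables fed to the same walker, with
only the log prefix differing.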

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 15 ++++++++++++
 arch/arm64/kernel/Makefile          |  2 +-
 arch/arm64/kernel/cpu_errata.c      | 36 ++++------------------------
 arch/arm64/kernel/cpufeature.c      | 47 +++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c         |  1 +
 5 files changed, 68 insertions(+), 33 deletions(-)
 create mode 100644 arch/arm64/kernel/cpufeature.c

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b6c16d5..6ae35d1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -28,6 +28,18 @@
 
 #ifndef __ASSEMBLY__
 
+struct arm64_cpu_capabilities {
+	const char *desc;
+	u16 capability;
+	bool (*matches)(const struct arm64_cpu_capabilities *);
+	union {
+		struct {	/* To be used for erratum handling only */
+			u32 midr_model;
+			u32 midr_range_min, midr_range_max;
+		};
+	};
+};
+
 extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
 
 static inline bool cpu_have_feature(unsigned int num)
@@ -51,7 +63,10 @@ static inline void cpus_set_cap(unsigned int num)
 		__set_bit(num, cpu_hwcaps);
 }
 
+void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+			    const char *info);
 void check_local_cpu_errata(void);
+void check_local_cpu_features(void);
 bool cpu_supports_mixed_endian_el0(void);
 bool system_supports_mixed_endian_el0(void);
 
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index d5e7074..b12e15b 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -17,7 +17,7 @@ arm64-obj-y		:= debug-monitors.o entry.o irq.o fpsimd.o		\
 			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
 			   hyp-stub.o psci.o psci-call.o cpu_ops.o insn.o	\
 			   return_address.o cpuinfo.o cpu_errata.o		\
-			   alternative.o cacheinfo.o
+			   cpufeature.o alternative.o cacheinfo.o
 
 arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
 					   sys_compat.o entry32.o		\
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index fa62637..a66f4fa 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -16,8 +16,6 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
-#define pr_fmt(fmt) "alternatives: " fmt
-
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cputype.h>
@@ -26,27 +24,11 @@
 #define MIDR_CORTEX_A53 MIDR_CPU_PART(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_PART(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 
-/*
- * Add a struct or another datatype to the union below if you need
- * different means to detect an affected CPU.
- */
-struct arm64_cpu_capabilities {
-	const char *desc;
-	u16 capability;
-	bool (*is_affected)(struct arm64_cpu_capabilities *);
-	union {
-		struct {
-			u32 midr_model;
-			u32 midr_range_min, midr_range_max;
-		};
-	};
-};
-
 #define CPU_MODEL_MASK (MIDR_IMPLEMENTOR_MASK | MIDR_PARTNUM_MASK | \
 			MIDR_ARCHITECTURE_MASK)
 
 static bool __maybe_unused
-is_affected_midr_range(struct arm64_cpu_capabilities *entry)
+is_affected_midr_range(const struct arm64_cpu_capabilities *entry)
 {
 	u32 midr = read_cpuid_id();
 
@@ -59,12 +41,12 @@ is_affected_midr_range(struct arm64_cpu_capabilities *entry)
 }
 
 #define MIDR_RANGE(model, min, max) \
-	.is_affected = is_affected_midr_range, \
+	.matches = is_affected_midr_range, \
 	.midr_model = model, \
 	.midr_range_min = min, \
 	.midr_range_max = max
 
-struct arm64_cpu_capabilities arm64_errata[] = {
+const struct arm64_cpu_capabilities arm64_errata[] = {
 #if	defined(CONFIG_ARM64_ERRATUM_826319) || \
 	defined(CONFIG_ARM64_ERRATUM_827319) || \
 	defined(CONFIG_ARM64_ERRATUM_824069)
@@ -97,15 +79,5 @@ struct arm64_cpu_capabilities arm64_errata[] = {
 
 void check_local_cpu_errata(void)
 {
-	struct arm64_cpu_capabilities *cpus = arm64_errata;
-	int i;
-
-	for (i = 0; cpus[i].desc; i++) {
-		if (!cpus[i].is_affected(&cpus[i]))
-			continue;
-
-		if (!cpus_have_cap(cpus[i].capability))
-			pr_info("enabling workaround for %s\n", cpus[i].desc);
-		cpus_set_cap(cpus[i].capability);
-	}
+	check_cpu_capabilities(arm64_errata, "enabling workaround for");
 }
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
new file mode 100644
index 0000000..3d9967e
--- /dev/null
+++ b/arch/arm64/kernel/cpufeature.c
@@ -0,0 +1,47 @@
+/*
+ * Contains CPU feature definitions
+ *
+ * Copyright (C) 2015 ARM Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define pr_fmt(fmt) "alternatives: " fmt
+
+#include <linux/types.h>
+#include <asm/cpu.h>
+#include <asm/cpufeature.h>
+
+static const struct arm64_cpu_capabilities arm64_features[] = {
+	{},
+};
+
+void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+			    const char *info)
+{
+	int i;
+
+	for (i = 0; caps[i].desc; i++) {
+		if (!caps[i].matches(&caps[i]))
+			continue;
+
+		if (!cpus_have_cap(caps[i].capability))
+			pr_info("%s %s\n", info, caps[i].desc);
+		cpus_set_cap(caps[i].capability);
+	}
+}
+
+void check_local_cpu_features(void)
+{
+	check_cpu_capabilities(arm64_features, "detected feature");
+}
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 9298556..75d5a86 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -236,6 +236,7 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	cpuinfo_detect_icache_policy(info);
 
 	check_local_cpu_errata();
+	check_local_cpu_features();
 	update_cpu_features(info);
 }
 
-- 
2.1.4


* [PATCH 4/5] arm64: alternative: Introduce feature for GICv3 CPU interface
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
                   ` (2 preceding siblings ...)
  2015-03-19 13:59 ` [PATCH 3/5] arm64: Extract feature parsing code from cpu_errata.c Marc Zyngier
@ 2015-03-19 13:59 ` Marc Zyngier
  2015-03-19 13:59 ` [PATCH 5/5] arm64: KVM: Switch vgic save/restore to alternative_insn Marc Zyngier
  2015-03-26 22:04 ` [PATCH 0/5] arm64: Patching branches for fun and profit Will Deacon
  5 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

Add a new item to the feature set (ARM64_HAS_SYSREG_GIC_CPUIF)
to indicate that we have a system register GIC CPU interface.

This will help KVM switch to alternative instruction patching.
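
The detection itself is a mask/value test on ID_AA64PFR0_EL1, whose GIC
field sits in bits [27:24] with the value 1 meaning a system register
CPU interface is present. A user-space sketch of that check (the
register value is passed in by the caller here rather than read with
mrs):

```c
#include <stdint.h>

/* Mask/value check mirroring the new capability entry: GIC field of
 * ID_AA64PFR0_EL1 in bits [27:24], expected value 1. */
static int has_sysreg_gic_cpuif(uint64_t id_aa64pfr0)
{
	const uint64_t mask = 0xfull << 24;
	const uint64_t val = 1ull << 24;

	return (id_aa64pfr0 & mask) == val;
}
```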

Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  8 +++++++-
 arch/arm64/kernel/cpufeature.c      | 16 ++++++++++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 6ae35d1..d9e57b5 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -23,8 +23,9 @@
 
 #define ARM64_WORKAROUND_CLEAN_CACHE		0
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
+#define ARM64_HAS_SYSREG_GIC_CPUIF		2
 
-#define ARM64_NCAPS				2
+#define ARM64_NCAPS				3
 
 #ifndef __ASSEMBLY__
 
@@ -37,6 +38,11 @@ struct arm64_cpu_capabilities {
 			u32 midr_model;
 			u32 midr_range_min, midr_range_max;
 		};
+
+		struct {	/* Feature register checking */
+			u64 register_mask;
+			u64 register_value;
+		};
 	};
 };
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3d9967e..b0bea2b3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -22,7 +22,23 @@
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
 
+static bool
+has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64pfr0_el1);
+	return (val & entry->register_mask) == entry->register_value;
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
+	{
+		.desc = "system register GIC CPU interface",
+		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
+		.matches = has_id_aa64pfr0_feature,
+		.register_mask = (0xf << 24),
+		.register_value = (1 << 24),
+	},
 	{},
 };
 
-- 
2.1.4


* [PATCH 5/5] arm64: KVM: Switch vgic save/restore to alternative_insn
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
                   ` (3 preceding siblings ...)
  2015-03-19 13:59 ` [PATCH 4/5] arm64: alternative: Introduce feature for GICv3 CPU interface Marc Zyngier
@ 2015-03-19 13:59 ` Marc Zyngier
  2015-03-26 22:04 ` [PATCH 0/5] arm64: Patching branches for fun and profit Will Deacon
  5 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-19 13:59 UTC (permalink / raw)
  To: linux-arm-kernel

So far, we have configured the world-switch by having a small array
of pointers to the save and restore functions, depending on the
GIC used on the platform.

Loading these values each time is a bit silly (they never change),
and it makes sense to rely on the instruction patching instead.

This leads to a nice cleanup of the code.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/include/asm/kvm_host.h   |  5 -----
 arch/arm64/include/asm/kvm_host.h | 23 -----------------------
 arch/arm64/kernel/asm-offsets.c   |  1 -
 arch/arm64/kvm/hyp.S              | 18 ++++--------------
 virt/kvm/arm/vgic.c               |  3 ---
 5 files changed, 4 insertions(+), 46 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 41008cd..2fcad3b 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -225,11 +225,6 @@ static inline int kvm_arch_dev_ioctl_check_extension(long ext)
 	return 0;
 }
 
-static inline void vgic_arch_setup(const struct vgic_params *vgic)
-{
-	BUG_ON(vgic->type != VGIC_V2);
-}
-
 int kvm_perf_init(void);
 int kvm_perf_teardown(void);
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8ac3c70..b0aa3a4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -228,29 +228,6 @@ struct vgic_sr_vectors {
 	void	*restore_vgic;
 };
 
-static inline void vgic_arch_setup(const struct vgic_params *vgic)
-{
-	extern struct vgic_sr_vectors __vgic_sr_vectors;
-
-	switch(vgic->type)
-	{
-	case VGIC_V2:
-		__vgic_sr_vectors.save_vgic	= __save_vgic_v2_state;
-		__vgic_sr_vectors.restore_vgic	= __restore_vgic_v2_state;
-		break;
-
-#ifdef CONFIG_ARM_GIC_V3
-	case VGIC_V3:
-		__vgic_sr_vectors.save_vgic	= __save_vgic_v3_state;
-		__vgic_sr_vectors.restore_vgic	= __restore_vgic_v3_state;
-		break;
-#endif
-
-	default:
-		BUG();
-	}
-}
-
 static inline void kvm_arch_hardware_disable(void) {}
 static inline void kvm_arch_hardware_unsetup(void) {}
 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 14dd3d1..cf9d97b 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -128,7 +128,6 @@ int main(void)
   DEFINE(VCPU_VGIC_CPU,		offsetof(struct kvm_vcpu, arch.vgic_cpu));
   DEFINE(VGIC_SAVE_FN,		offsetof(struct vgic_sr_vectors, save_vgic));
   DEFINE(VGIC_RESTORE_FN,	offsetof(struct vgic_sr_vectors, restore_vgic));
-  DEFINE(VGIC_SR_VECTOR_SZ,	sizeof(struct vgic_sr_vectors));
   DEFINE(VGIC_V2_CPU_HCR,	offsetof(struct vgic_cpu, vgic_v2.vgic_hcr));
   DEFINE(VGIC_V2_CPU_VMCR,	offsetof(struct vgic_cpu, vgic_v2.vgic_vmcr));
   DEFINE(VGIC_V2_CPU_MISR,	offsetof(struct vgic_cpu, vgic_v2.vgic_misr));
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 5befd01..0d55a53 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -17,8 +17,10 @@
 
 #include <linux/linkage.h>
 
+#include <asm/alternative-asm.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
 #include <asm/fpsimdmacros.h>
@@ -808,10 +810,7 @@
  * Call into the vgic backend for state saving
  */
 .macro save_vgic_state
-	adr	x24, __vgic_sr_vectors
-	ldr	x24, [x24, VGIC_SAVE_FN]
-	kern_hyp_va	x24
-	blr	x24
+	alternative_insn "bl __save_vgic_v2_state", "bl __save_vgic_v3_state", ARM64_HAS_SYSREG_GIC_CPUIF
 	mrs	x24, hcr_el2
 	mov	x25, #HCR_INT_OVERRIDE
 	neg	x25, x25
@@ -828,10 +827,7 @@
 	orr	x24, x24, #HCR_INT_OVERRIDE
 	orr	x24, x24, x25
 	msr	hcr_el2, x24
-	adr	x24, __vgic_sr_vectors
-	ldr	x24, [x24, #VGIC_RESTORE_FN]
-	kern_hyp_va	x24
-	blr	x24
+	alternative_insn "bl __restore_vgic_v2_state", "bl __restore_vgic_v3_state", ARM64_HAS_SYSREG_GIC_CPUIF
 .endm
 
 .macro save_timer_state
@@ -1062,12 +1058,6 @@ ENTRY(__kvm_flush_vm_context)
 	ret
 ENDPROC(__kvm_flush_vm_context)
 
-	// struct vgic_sr_vectors __vgi_sr_vectors;
-	.align 3
-ENTRY(__vgic_sr_vectors)
-	.skip	VGIC_SR_VECTOR_SZ
-ENDPROC(__vgic_sr_vectors)
-
 __kvm_hyp_panic:
 	// Guess the context by looking at VTTBR:
 	// If zero, then we're already a host.
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index 0cc6ab6..23d3c81 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -1903,9 +1903,6 @@ int kvm_vgic_hyp_init(void)
 		goto out_free_irq;
 	}
 
-	/* Callback into for arch code for setup */
-	vgic_arch_setup(vgic);
-
 	on_each_cpu(vgic_init_maintenance_interrupt, NULL, 1);
 
 	return 0;
-- 
2.1.4


* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-19 13:59 ` [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction Marc Zyngier
@ 2015-03-26 22:03   ` Will Deacon
  2015-03-26 22:19     ` Marc Zyngier
  0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2015-03-26 22:03 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 19, 2015 at 01:59:33PM +0000, Marc Zyngier wrote:
> Since all immediate branches are PC-relative on AArch64, these
> instructions cannot be used as an alternative with the simplistic
> approach we currently have (the immediate has been computed from
> the .altinstr_replacement section, and ends up being completely off
> if we insert it directly).
> 
> This patch handles the b and bl instructions in a different way,
> using the insn framework to recompute the immediate, and generate
> the right displacement.
> 
> Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/kernel/alternative.c | 55 +++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 53 insertions(+), 2 deletions(-)

[...]

>  static int __apply_alternatives(void *alt_region)
>  {
>  	struct alt_instr *alt;
> @@ -40,16 +83,24 @@ static int __apply_alternatives(void *alt_region)
>  	u8 *origptr, *replptr;
>  
>  	for (alt = region->begin; alt < region->end; alt++) {
> +		u32 insn;
> +		int i;
> +
>  		if (!cpus_have_cap(alt->cpufeature))
>  			continue;
>  
> -		BUG_ON(alt->alt_len > alt->orig_len);
> +		BUG_ON(alt->alt_len != alt->orig_len);
>  
>  		pr_info_once("patching kernel code\n");
>  
>  		origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
>  		replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
> -		memcpy(origptr, replptr, alt->alt_len);
> +
> +		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
> +			insn = get_alt_insn(origptr + i, replptr + i);
> +			*(u32 *)(origptr + i) = insn;

My brain's not firing on all cylinders right now, but do you need a
cpu_to_le32 here?

Will


* [PATCH 0/5] arm64: Patching branches for fun and profit
  2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
                   ` (4 preceding siblings ...)
  2015-03-19 13:59 ` [PATCH 5/5] arm64: KVM: Switch vgic save/restore to alternative_insn Marc Zyngier
@ 2015-03-26 22:04 ` Will Deacon
  5 siblings, 0 replies; 12+ messages in thread
From: Will Deacon @ 2015-03-26 22:04 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 19, 2015 at 01:59:31PM +0000, Marc Zyngier wrote:
> The current alternative instruction framework is not kind to branches,
> potentially leading to all kinds of hacks in the code that uses
> alternatives. This series expands it to deal with immediate branches
> (for a start), and applies it to the VGIC world switch.
> 
> Patch #1 adds the required infrastructure to extract the immediate
> from an instruction.
> 
> Patch #2 allows the use of an immediate b or bl instruction as an
> alternative, computing the branch target as the instruction is being
> patched in.
> 
> Patch #3 defines a feature framework that works exactly like the CPU
> errata infrastructure (and shares a lot with it).
> 
> Patch #4 adds detection of the system register GICv3 CPU interface.
> 
> Patch #5 enables dynamic patching of the KVM code.
> 
> This has been tested with GICv3 on a FastModel.

Modulo my one minor comment:

  Acked-by: Will Deacon <will.deacon@arm.com>

for the series.

Will


* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-26 22:03   ` Will Deacon
@ 2015-03-26 22:19     ` Marc Zyngier
  2015-03-26 22:31       ` Will Deacon
  2015-03-27 10:42       ` Jon Medhurst (Tixy)
  0 siblings, 2 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-26 22:19 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 26 Mar 2015 22:03:23 +0000
Will Deacon <will.deacon@arm.com> wrote:

> On Thu, Mar 19, 2015 at 01:59:33PM +0000, Marc Zyngier wrote:
> > Since all immediate branches are PC-relative on AArch64, these
> > instructions cannot be used as an alternative with the simplistic
> > approach we currently have (the immediate has been computed from
> > the .altinstr_replacement section, and ends up being completely off
> > if we insert it directly).
> > 
> > This patch handles the b and bl instructions in a different way,
> > using the insn framework to recompute the immediate, and generate
> > the right displacement.
> > 
> > Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > ---
> >  arch/arm64/kernel/alternative.c | 55 +++++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 53 insertions(+), 2 deletions(-)
> 
> [...]
> 
> >  static int __apply_alternatives(void *alt_region)
> >  {
> >  	struct alt_instr *alt;
> > @@ -40,16 +83,24 @@ static int __apply_alternatives(void *alt_region)
> >  	u8 *origptr, *replptr;
> >  
> >  	for (alt = region->begin; alt < region->end; alt++) {
> > +		u32 insn;
> > +		int i;
> > +
> >  		if (!cpus_have_cap(alt->cpufeature))
> >  			continue;
> >  
> > -		BUG_ON(alt->alt_len > alt->orig_len);
> > +		BUG_ON(alt->alt_len != alt->orig_len);
> >  
> >  		pr_info_once("patching kernel code\n");
> >  
> >  		origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
> >  		replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
> > -		memcpy(origptr, replptr, alt->alt_len);
> > +
> > +		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
> > +			insn = get_alt_insn(origptr + i, replptr + i);
> > +			*(u32 *)(origptr + i) = insn;
> 
> My brain's not firing on all cylinders right now, but do you need a
> cpu_to_le32 here?

I'm not 100% awake myself (probably some acute form of firmwaritis),
but I suspect you're quite right (get_alt_insn calls aarch64_insn_read,
which does a le32_to_cpu). Obviously, we need to revert the conversion
when writing the instruction back.
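
For reference, the asymmetry under discussion can be modelled in user
space (a plain byte swap standing in for the big-endian-CPU case; this
is only an illustration of the round-trip, not the eventual fix):

```c
#include <stdint.h>

/* Instructions live in memory little-endian regardless of CPU
 * endianness, so on a big-endian CPU le32_to_cpu() byte-swaps on read
 * and cpu_to_le32() must swap again on write.  swap32() models both. */
static uint32_t swap32(uint32_t x)
{
	return ((x & 0x000000ffu) << 24) |
	       ((x & 0x0000ff00u) <<  8) |
	       ((x & 0x00ff0000u) >>  8) |
	       ((x & 0xff000000u) >> 24);
}
```

Writing the CPU-order value back raw would leave the instruction stream
byte-swapped on big-endian; applying both conversions restores the
original encoding.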

Do you want a fixup on top of this, or would you prefer me to respin
the series?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-26 22:19     ` Marc Zyngier
@ 2015-03-26 22:31       ` Will Deacon
  2015-03-27 10:42       ` Jon Medhurst (Tixy)
  1 sibling, 0 replies; 12+ messages in thread
From: Will Deacon @ 2015-03-26 22:31 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Mar 26, 2015 at 10:19:55PM +0000, Marc Zyngier wrote:
> On Thu, 26 Mar 2015 22:03:23 +0000
> Will Deacon <will.deacon@arm.com> wrote:
> > On Thu, Mar 19, 2015 at 01:59:33PM +0000, Marc Zyngier wrote:
> > > +		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
> > > +			insn = get_alt_insn(origptr + i, replptr + i);
> > > +			*(u32 *)(origptr + i) = insn;
> > 
> > My brain's not firing on all cylinders right now, but do you need a
> > cpu_to_le32 here?
> 
> I'm not 100% awake myself (probably some acute form of firmwaritis),
> but I suspect you're quite right (get_alt_insn calls aarch64_insn_read,
> which does a le32_to_cpu). Obviously, we need to revert the conversion
> when writing the instruction back.
> 
> Do you want a fixup on top of this, or would you prefer me to respin
> the series?

Please respin and add my acks.

Will


* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-26 22:19     ` Marc Zyngier
  2015-03-26 22:31       ` Will Deacon
@ 2015-03-27 10:42       ` Jon Medhurst (Tixy)
  2015-03-27 11:08         ` Marc Zyngier
  1 sibling, 1 reply; 12+ messages in thread
From: Jon Medhurst (Tixy) @ 2015-03-27 10:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, 2015-03-26 at 22:19 +0000, Marc Zyngier wrote:
> On Thu, 26 Mar 2015 22:03:23 +0000
> Will Deacon <will.deacon@arm.com> wrote:
> 
> > On Thu, Mar 19, 2015 at 01:59:33PM +0000, Marc Zyngier wrote:
> > > Since all immediate branches are PC-relative on AArch64, these
> > > instructions cannot be used as an alternative with the simplistic
> > > approach we currently have (the immediate has been computed from
> > > the .altinstr_replacement section, and ends up being completely off
> > > if we insert it directly).
> > > 
> > > This patch handles the b and bl instructions in a different way,
> > > using the insn framework to recompute the immediate, and generate
> > > the right displacement.
> > > 
> > > Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> > > Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> > > ---
> > >  arch/arm64/kernel/alternative.c | 55 +++++++++++++++++++++++++++++++++++++++--
> > >  1 file changed, 53 insertions(+), 2 deletions(-)
> > 
> > [...]
> > 
> > >  static int __apply_alternatives(void *alt_region)
> > >  {
> > >  	struct alt_instr *alt;
> > > @@ -40,16 +83,24 @@ static int __apply_alternatives(void *alt_region)
> > >  	u8 *origptr, *replptr;
> > >  
> > >  	for (alt = region->begin; alt < region->end; alt++) {
> > > +		u32 insn;
> > > +		int i;
> > > +
> > >  		if (!cpus_have_cap(alt->cpufeature))
> > >  			continue;
> > >  
> > > -		BUG_ON(alt->alt_len > alt->orig_len);
> > > +		BUG_ON(alt->alt_len != alt->orig_len);
> > >  
> > >  		pr_info_once("patching kernel code\n");
> > >  
> > >  		origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
> > >  		replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
> > > -		memcpy(origptr, replptr, alt->alt_len);
> > > +
> > > +		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
> > > +			insn = get_alt_insn(origptr + i, replptr + i);
> > > +			*(u32 *)(origptr + i) = insn;
> > 
> > My brain's not firing on all cylinders right now, but do you need a
> > cpu_to_le32 here?
> 
> I'm not 100% awake myself (probably some acute form of firmwaritis),
> but I suspect you're quite right (get_alt_insn calls aarch64_insn_read,
> which does a le32_to_cpu). Obviously, we need to revert the conversion
> when writing the instruction back.

Isn't aarch64_insn_write the inverse of aarch64_insn_read and more
correct than using cpu_to_le32?

-- 
Tixy


* [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction
  2015-03-27 10:42       ` Jon Medhurst (Tixy)
@ 2015-03-27 11:08         ` Marc Zyngier
  0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2015-03-27 11:08 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Tixy,

On 27/03/15 10:42, Jon Medhurst (Tixy) wrote:
> On Thu, 2015-03-26 at 22:19 +0000, Marc Zyngier wrote:
>> On Thu, 26 Mar 2015 22:03:23 +0000
>> Will Deacon <will.deacon@arm.com> wrote:
>>
>>> On Thu, Mar 19, 2015 at 01:59:33PM +0000, Marc Zyngier wrote:
>>>> Since all immediate branches are PC-relative on AArch64, these
>>>> instructions cannot be used as an alternative with the simplistic
>>>> approach we currently have (the immediate has been computed from
>>>> the .altinstr_replacement section, and ends up being completely off
>>>> if we insert it directly).
>>>>
>>>> This patch handles the b and bl instructions in a different way,
>>>> using the insn framework to recompute the immediate, and generate
>>>> the right displacement.
>>>>
>>>> Reviewed-by: Andre Przywara <andre.przywara@arm.com>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>>  arch/arm64/kernel/alternative.c | 55 +++++++++++++++++++++++++++++++++++++++--
>>>>  1 file changed, 53 insertions(+), 2 deletions(-)
>>>
>>> [...]
>>>
>>>>  static int __apply_alternatives(void *alt_region)
>>>>  {
>>>>  	struct alt_instr *alt;
>>>> @@ -40,16 +83,24 @@ static int __apply_alternatives(void *alt_region)
>>>>  	u8 *origptr, *replptr;
>>>>  
>>>>  	for (alt = region->begin; alt < region->end; alt++) {
>>>> +		u32 insn;
>>>> +		int i;
>>>> +
>>>>  		if (!cpus_have_cap(alt->cpufeature))
>>>>  			continue;
>>>>  
>>>> -		BUG_ON(alt->alt_len > alt->orig_len);
>>>> +		BUG_ON(alt->alt_len != alt->orig_len);
>>>>  
>>>>  		pr_info_once("patching kernel code\n");
>>>>  
>>>>  		origptr = (u8 *)&alt->orig_offset + alt->orig_offset;
>>>>  		replptr = (u8 *)&alt->alt_offset + alt->alt_offset;
>>>> -		memcpy(origptr, replptr, alt->alt_len);
>>>> +
>>>> +		for (i = 0; i < alt->alt_len; i += sizeof(insn)) {
>>>> +			insn = get_alt_insn(origptr + i, replptr + i);
>>>> +			*(u32 *)(origptr + i) = insn;
>>>
>>> My brain's not firing on all cylinders right now, but do you need a
>>> cpu_to_le32 here?
>>
>> I'm not 100% awake myself (probably some acute form of firmwaritis),
>> but I suspect you're quite right (get_alt_insn calls aarch64_insn_read,
>> which does a le32_to_cpu). Obviously, we need to revert the conversion
>> when writing the instruction back.
> 
> Isn't aarch64_insn_write the inverse of aarch64_insn_read and more
> correct than using cpu_to_le32?

You're perfectly right, and you've actually uncovered a bug: if we have
CONFIG_DEBUG_SET_MODULE_RONX or CONFIG_DEBUG_RODATA, we'd end up trying
to patch the read-only mapping, with disastrous results.
aarch64_insn_write does the right thing, so we might as well use that.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


end of thread, other threads:[~2015-03-27 11:08 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-03-19 13:59 [PATCH 0/5] arm64: Patching branches for fun and profit Marc Zyngier
2015-03-19 13:59 ` [PATCH 1/5] arm64: insn: Add aarch64_insn_decode_immediate Marc Zyngier
2015-03-19 13:59 ` [PATCH 2/5] arm64: alternative: Allow immediate branch as alternative instruction Marc Zyngier
2015-03-26 22:03   ` Will Deacon
2015-03-26 22:19     ` Marc Zyngier
2015-03-26 22:31       ` Will Deacon
2015-03-27 10:42       ` Jon Medhurst (Tixy)
2015-03-27 11:08         ` Marc Zyngier
2015-03-19 13:59 ` [PATCH 3/5] arm64: Extract feature parsing code from cpu_errata.c Marc Zyngier
2015-03-19 13:59 ` [PATCH 4/5] arm64: alternative: Introduce feature for GICv3 CPU interface Marc Zyngier
2015-03-19 13:59 ` [PATCH 5/5] arm64: KVM: Switch vgic save/restore to alternative_insn Marc Zyngier
2015-03-26 22:04 ` [PATCH 0/5] arm64: Patching branches for fun and profit Will Deacon
