* [PATCH v1 0/7] Optimize jump label implementation on ARM64
@ 2013-09-25 10:44 Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers Jiang Liu
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon
  Cc: Jiang Liu, linux-arm-kernel, linux-kernel

From: Jiang Liu <jiang.liu@huawei.com>

This patchset tries to optimize the arch-specific jump label
implementation for ARM64 by dynamically patching kernel text.

To enable this feature, your toolchain must support the "asm goto"
extension and the "%c" constraint extension. Current GCC for AARCH64
doesn't support "%c", so you need a GCC patch similar to this:
http://gcc.gnu.org/viewcvs/gcc/trunk/gcc/config/arm/arm.c?view=patch&r1=175293&r2=175565&pathrev=175565
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48637

It has been tested on the ARM Fast Model, but not on real hardware yet.

Any comments are welcome!
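
[Editorial note: for readers new to jump labels, a minimal sketch of
the interface this patchset accelerates. static_key_false() compiles
down to a single NOP or branch that the arch code patches at run
time; do_slow_tracing() is a made-up callee.]

	#include <linux/jump_label.h>

	static struct static_key tracing_key = STATIC_KEY_INIT_FALSE;

	void hot_path(void)
	{
		/* one NOP while the key is false; patched to a
		 * branch once the key is enabled */
		if (static_key_false(&tracing_key))
			do_slow_tracing();
	}

	void slow_path_toggle(bool on)
	{
		if (on)
			static_key_slow_inc(&tracing_key); /* NOP -> B */
		else
			static_key_slow_dec(&tracing_key); /* B -> NOP */
	}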

Jiang Liu (7):
  arm64: introduce basic aarch64 instruction decoding helpers
  arm64: introduce interfaces to hotpatch kernel and module code
  arm64: move encode_insn_immediate() from module.c to insn.c
  arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions
  arm64, jump label: detect %c support for ARM64
  arm64, jump label: optimize jump label implementation
  jump_label: use defined macros instead of hard-coding for better
    readability

 arch/arm64/Kconfig                  |   1 +
 arch/arm64/include/asm/insn.h       |  75 ++++++++++++
 arch/arm64/include/asm/jump_label.h |  52 ++++++++
 arch/arm64/kernel/Makefile          |   3 +-
 arch/arm64/kernel/insn.c            | 238 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kernel/jump_label.c      |  60 +++++++++
 arch/arm64/kernel/module.c          | 151 +++++------------------
 include/linux/jump_label.h          |   3 +-
 scripts/gcc-goto.sh                 |   2 +-
 9 files changed, 463 insertions(+), 122 deletions(-)
 create mode 100644 arch/arm64/include/asm/insn.h
 create mode 100644 arch/arm64/include/asm/jump_label.h
 create mode 100644 arch/arm64/kernel/insn.c
 create mode 100644 arch/arm64/kernel/jump_label.c

-- 
1.8.1.2



* [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 12:58   ` Christopher Covington
  2013-09-25 10:44 ` [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code Jiang Liu
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	Marc Zyngier, Arnd Bergmann, linux-arm-kernel, linux-kernel
  Cc: Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Introduce the basic aarch64 instruction decoding helpers
aarch64_get_insn_class() and aarch64_insn_hotpatch_safe().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 arch/arm64/include/asm/insn.h | 53 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/Makefile    |  2 +-
 arch/arm64/kernel/insn.c      | 71 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 125 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/insn.h
 create mode 100644 arch/arm64/kernel/insn.c

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
new file mode 100644
index 0000000..e7d1bc8
--- /dev/null
+++ b/arch/arm64/include/asm/insn.h
@@ -0,0 +1,53 @@
+/*
+ * Copyright (C) 2013 Huawei Ltd.
+ * Author: Jiang Liu <jiang.liu@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef	_ASM_ARM64_INSN_H
+#define	_ASM_ARM64_INSN_H
+#include <linux/types.h>
+
+enum aarch64_insn_class {
+	AARCH64_INSN_CLS_UNKNOWN,	/* UNALLOCATED */
+	AARCH64_INSN_CLS_DP_IMM,	/* Data processing - immediate */
+	AARCH64_INSN_CLS_DP_REG,	/* Data processing - register */
+	AARCH64_INSN_CLS_DP_FPSIMD,	/* Data processing - SIMD and FP */
+	AARCH64_INSN_CLS_LDST,		/* Loads and stores */
+	AARCH64_INSN_CLS_BR_SYS,	/* Branch, exception generation and
+					 * system instructions */
+};
+
+#define	__AARCH64_INSN_FUNCS(abbr, mask, val)	\
+static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
+{ return (code & (mask)) == (val); }	\
+static __always_inline u32 aarch64_insn_get_##abbr##_mask(void) \
+{ return (mask); } \
+static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \
+{ return (val); }
+
+__AARCH64_INSN_FUNCS(b,		0xFC000000, 0x14000000)
+__AARCH64_INSN_FUNCS(bl,	0xFC000000, 0x94000000)
+__AARCH64_INSN_FUNCS(svc,	0xFFE0001F, 0xD4000001)
+__AARCH64_INSN_FUNCS(hvc,	0xFFE0001F, 0xD4000002)
+__AARCH64_INSN_FUNCS(smc,	0xFFE0001F, 0xD4000003)
+__AARCH64_INSN_FUNCS(brk,	0xFFE0001F, 0xD4200000)
+__AARCH64_INSN_FUNCS(nop,	0xFFFFFFFF, 0xD503201F)
+
+#undef	__AARCH64_INSN_FUNCS
+
+enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
+
+bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
+
+#endif	/* _ASM_ARM64_INSN_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 7b4b564..9af6cb3 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -9,7 +9,7 @@ AFLAGS_head.o		:= -DTEXT_OFFSET=$(TEXT_OFFSET)
 arm64-obj-y		:= cputable.o debug-monitors.o entry.o irq.o fpsimd.o	\
 			   entry-fpsimd.o process.o ptrace.o setup.o signal.o	\
 			   sys.o stacktrace.o time.o traps.o io.o vdso.o	\
-			   hyp-stub.o psci.o
+			   hyp-stub.o psci.o insn.o
 
 arm64-obj-$(CONFIG_COMPAT)		+= sys32.o kuser32.o signal32.o 	\
 					   sys_compat.o
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
new file mode 100644
index 0000000..8541c3a
--- /dev/null
+++ b/arch/arm64/kernel/insn.c
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2013 Huawei Ltd.
+ * Author: Jiang Liu <jiang.liu@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kernel.h>
+#include <asm/insn.h>
+
+static int aarch64_insn_cls[] = {
+	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
+	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
+	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
+	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
+	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
+	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
+	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
+	AARCH64_INSN_CLS_DP_FPSIMD,	/* 0 1 1 1 */
+	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
+	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
+	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
+	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
+	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
+	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
+	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
+	AARCH64_INSN_CLS_DP_FPSIMD,	/* 1 1 1 1 */
+};
+
+enum aarch64_insn_class __kprobes aarch64_get_insn_class(u32 insn)
+{
+	return aarch64_insn_cls[(insn >> 25) & 0xf];
+}
+
+static bool __kprobes __aarch64_insn_hotpatch_safe(u32 insn)
+{
+	if (aarch64_get_insn_class(insn) != AARCH64_INSN_CLS_BR_SYS)
+		return false;
+
+	return	aarch64_insn_is_b(insn) ||
+		aarch64_insn_is_bl(insn) ||
+		aarch64_insn_is_svc(insn) ||
+		aarch64_insn_is_hvc(insn) ||
+		aarch64_insn_is_smc(insn) ||
+		aarch64_insn_is_brk(insn) ||
+		aarch64_insn_is_nop(insn);
+}
+
+/*
+ * ARMv8-A Section B2.6.5:
+ * Concurrent modification and execution of instructions can lead to the
+ * resulting instruction performing any behavior that can be achieved by
+ * executing any sequence of instructions that can be executed from the
+ * same Exception level, except where the instruction before modification
+ * and the instruction after modification is a B, BL, NOP, BKPT, SVC, HVC,
+ * or SMC instruction.
+ */
+bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
+{
+	return __aarch64_insn_hotpatch_safe(old_insn) &&
+	       __aarch64_insn_hotpatch_safe(new_insn);
+}
-- 
1.8.1.2
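
[Editorial note, not part of the patch: a worked example of the
decoding table above. Bits [28:25] of an A64 instruction select a
row; for NOP (0xd503201f) they are 0b1010, i.e. row 10, the
branch/exception/system class:]

	static void insn_class_example(void)
	{
		u32 insn = 0xd503201f;		/* NOP */

		/* (insn >> 25) & 0xf == 0xa -> row 10 of the table */
		BUG_ON(aarch64_get_insn_class(insn) != AARCH64_INSN_CLS_BR_SYS);
		BUG_ON(!aarch64_insn_is_nop(insn));
		/* NOP -> NOP is trivially hotpatch-safe */
		BUG_ON(!aarch64_insn_hotpatch_safe(insn, insn));
	}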



* [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 14:35   ` Steven Rostedt
  2013-09-25 14:35   ` Sandeepa Prabhu
  2013-09-25 10:44 ` [PATCH v1 3/7] arm64: move encode_insn_immediate() from module.c to insn.c Jiang Liu
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel
  Cc: Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
to patch kernel and module code.

Function aarch64_insn_patch_text() is a heavyweight version which may
use stop_machine() to serialize all online CPUs, and function
__aarch64_insn_patch_text() is a lightweight version without explicit
serialization.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 arch/arm64/include/asm/insn.h |  2 ++
 arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 66 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e7d1bc8..0ea7193 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,	0xFFFFFFFF, 0xD503201F)
 enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
 
 bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
+int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
+int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
 
 #endif	/* _ASM_ARM64_INSN_H */
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 8541c3a..50facfc 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -15,6 +15,8 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/kernel.h>
+#include <linux/stop_machine.h>
+#include <asm/cacheflush.h>
 #include <asm/insn.h>
 
 static int aarch64_insn_cls[] = {
@@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
 	return __aarch64_insn_hotpatch_safe(old_insn) &&
 	       __aarch64_insn_hotpatch_safe(new_insn);
 }
+
+struct aarch64_insn_patch {
+	void	*text_addr;
+	u32	*new_insns;
+	int	insn_cnt;
+};
+
+int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
+{
+	int i;
+	u32 *tp = addr;
+
+	/* instructions must be word aligned */
+	if (cnt <= 0 || ((uintptr_t)addr & 0x3))
+		return -EINVAL;
+
+	for (i = 0; i < cnt; i++)
+		tp[i] = insns[i];
+
+	flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
+
+	return 0;
+}
+
+static int __kprobes aarch64_insn_patch_text_cb(void *arg)
+{
+	struct aarch64_insn_patch *pp = arg;
+
+	return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
+					 pp->insn_cnt);
+}
+
+int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
+{
+	int ret;
+	bool safe = false;
+
+	/* instructions must be word aligned */
+	if (cnt <= 0 || ((uintptr_t)addr & 0x3))
+		return -EINVAL;
+
+	if (cnt == 1)
+		safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
+
+	if (safe) {
+		ret = __aarch64_insn_patch_text(addr, insns, cnt);
+	} else {
+		struct aarch64_insn_patch patch = {
+			.text_addr = addr,
+			.new_insns = insns,
+			.insn_cnt = cnt,
+		};
+
+		/*
+		 * Execute __aarch64_insn_patch_text() on every online CPU,
+		 * which ensures serialization among all online CPUs.
+		 */
+		ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
+	}
+
+	return ret;
+}
-- 
1.8.1.2
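
[Editorial note, a hedged usage sketch rather than code from the
patch; `addr' stands for some word-aligned kernel text address:]

	static int patch_one_nop(void *addr)
	{
		u32 nop = 0xd503201f;	/* NOP; patch 4 adds a helper */

		/*
		 * A single hotpatch-safe instruction takes the direct
		 * path; anything else (or cnt > 1) is applied under
		 * stop_machine() to serialize all online CPUs.
		 */
		return aarch64_insn_patch_text(addr, &nop, 1);
	}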



* [PATCH v1 3/7] arm64: move encode_insn_immediate() from module.c to insn.c
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 4/7] arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions Jiang Liu
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel
  Cc: Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Function encode_insn_immediate() will be used by other
instruction-manipulation functions, so move it into insn.c and rename
it to aarch64_insn_encode_immediate().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 arch/arm64/include/asm/insn.h |  14 ++++
 arch/arm64/kernel/insn.c      |  77 +++++++++++++++++++++
 arch/arm64/kernel/module.c    | 151 +++++++++---------------------------------
 3 files changed, 123 insertions(+), 119 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 0ea7193..0c715c1 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -28,6 +28,18 @@ enum aarch64_insn_class {
 					 * system instructions */
 };
 
+enum aarch64_insn_imm_type {
+	AARCH64_INSN_IMM_MOVNZ,
+	AARCH64_INSN_IMM_MOVK,
+	AARCH64_INSN_IMM_ADR,
+	AARCH64_INSN_IMM_26,
+	AARCH64_INSN_IMM_19,
+	AARCH64_INSN_IMM_16,
+	AARCH64_INSN_IMM_14,
+	AARCH64_INSN_IMM_12,
+	AARCH64_INSN_IMM_9,
+};
+
 #define	__AARCH64_INSN_FUNCS(abbr, mask, val)	\
 static __always_inline bool aarch64_insn_is_##abbr(u32 code) \
 { return (code & (mask)) == (val); }	\
@@ -47,6 +59,8 @@ __AARCH64_INSN_FUNCS(nop,	0xFFFFFFFF, 0xD503201F)
 #undef	__AARCH64_INSN_FUNCS
 
 enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
+u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
+				  u32 insn, u64 imm);
 
 bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
 int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 50facfc..0bae0d9 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -133,3 +133,80 @@ int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
 
 	return ret;
 }
+
+u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
+				  u32 insn, u64 imm)
+{
+	u32 immlo, immhi, lomask, himask, mask;
+	int shift;
+
+	switch (type) {
+	case AARCH64_INSN_IMM_MOVNZ:
+		/*
+		 * For signed MOVW relocations, we have to manipulate the
+		 * instruction encoding depending on whether or not the
+		 * immediate is less than zero.
+		 */
+		insn &= ~(3 << 29);
+		if ((s64)imm >= 0) {
+			/* >=0: Set the instruction to MOVZ (opcode 10b). */
+			insn |= 2 << 29;
+		} else {
+			/*
+			 * <0: Set the instruction to MOVN (opcode 00b).
+			 *     Since we've masked the opcode already, we
+			 *     don't need to do anything other than
+			 *     inverting the new immediate field.
+			 */
+			imm = ~imm;
+		}
+	case AARCH64_INSN_IMM_MOVK:
+		mask = BIT(16) - 1;
+		shift = 5;
+		break;
+	case AARCH64_INSN_IMM_ADR:
+		lomask = 0x3;
+		himask = 0x7ffff;
+		immlo = imm & lomask;
+		imm >>= 2;
+		immhi = imm & himask;
+		imm = (immlo << 24) | (immhi);
+		mask = (lomask << 24) | (himask);
+		shift = 5;
+		break;
+	case AARCH64_INSN_IMM_26:
+		mask = BIT(26) - 1;
+		shift = 0;
+		break;
+	case AARCH64_INSN_IMM_19:
+		mask = BIT(19) - 1;
+		shift = 5;
+		break;
+	case AARCH64_INSN_IMM_16:
+		mask = BIT(16) - 1;
+		shift = 5;
+		break;
+	case AARCH64_INSN_IMM_14:
+		mask = BIT(14) - 1;
+		shift = 5;
+		break;
+	case AARCH64_INSN_IMM_12:
+		mask = BIT(12) - 1;
+		shift = 10;
+		break;
+	case AARCH64_INSN_IMM_9:
+		mask = BIT(9) - 1;
+		shift = 12;
+		break;
+	default:
+		pr_err("aarch64_insn_encode_immediate: unknown immediate encoding %d\n",
+			type);
+		return 0;
+	}
+
+	/* Update the immediate field. */
+	insn &= ~(mask << shift);
+	insn |= (imm & mask) << shift;
+
+	return insn;
+}
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index ca0e3d5..69e3c31 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -25,6 +25,7 @@
 #include <linux/mm.h>
 #include <linux/moduleloader.h>
 #include <linux/vmalloc.h>
+#include <asm/insn.h>
 
 void *module_alloc(unsigned long size)
 {
@@ -94,96 +95,8 @@ static int reloc_data(enum aarch64_reloc_op op, void *place, u64 val, int len)
 	return 0;
 }
 
-enum aarch64_imm_type {
-	INSN_IMM_MOVNZ,
-	INSN_IMM_MOVK,
-	INSN_IMM_ADR,
-	INSN_IMM_26,
-	INSN_IMM_19,
-	INSN_IMM_16,
-	INSN_IMM_14,
-	INSN_IMM_12,
-	INSN_IMM_9,
-};
-
-static u32 encode_insn_immediate(enum aarch64_imm_type type, u32 insn, u64 imm)
-{
-	u32 immlo, immhi, lomask, himask, mask;
-	int shift;
-
-	switch (type) {
-	case INSN_IMM_MOVNZ:
-		/*
-		 * For signed MOVW relocations, we have to manipulate the
-		 * instruction encoding depending on whether or not the
-		 * immediate is less than zero.
-		 */
-		insn &= ~(3 << 29);
-		if ((s64)imm >= 0) {
-			/* >=0: Set the instruction to MOVZ (opcode 10b). */
-			insn |= 2 << 29;
-		} else {
-			/*
-			 * <0: Set the instruction to MOVN (opcode 00b).
-			 *     Since we've masked the opcode already, we
-			 *     don't need to do anything other than
-			 *     inverting the new immediate field.
-			 */
-			imm = ~imm;
-		}
-	case INSN_IMM_MOVK:
-		mask = BIT(16) - 1;
-		shift = 5;
-		break;
-	case INSN_IMM_ADR:
-		lomask = 0x3;
-		himask = 0x7ffff;
-		immlo = imm & lomask;
-		imm >>= 2;
-		immhi = imm & himask;
-		imm = (immlo << 24) | (immhi);
-		mask = (lomask << 24) | (himask);
-		shift = 5;
-		break;
-	case INSN_IMM_26:
-		mask = BIT(26) - 1;
-		shift = 0;
-		break;
-	case INSN_IMM_19:
-		mask = BIT(19) - 1;
-		shift = 5;
-		break;
-	case INSN_IMM_16:
-		mask = BIT(16) - 1;
-		shift = 5;
-		break;
-	case INSN_IMM_14:
-		mask = BIT(14) - 1;
-		shift = 5;
-		break;
-	case INSN_IMM_12:
-		mask = BIT(12) - 1;
-		shift = 10;
-		break;
-	case INSN_IMM_9:
-		mask = BIT(9) - 1;
-		shift = 12;
-		break;
-	default:
-		pr_err("encode_insn_immediate: unknown immediate encoding %d\n",
-			type);
-		return 0;
-	}
-
-	/* Update the immediate field. */
-	insn &= ~(mask << shift);
-	insn |= (imm & mask) << shift;
-
-	return insn;
-}
-
 static int reloc_insn_movw(enum aarch64_reloc_op op, void *place, u64 val,
-			   int lsb, enum aarch64_imm_type imm_type)
+			   int lsb, enum aarch64_insn_imm_type imm_type)
 {
 	u64 imm, limit = 0;
 	s64 sval;
@@ -194,7 +107,7 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, void *place, u64 val,
 	imm = sval & 0xffff;
 
 	/* Update the instruction with the new encoding. */
-	*(u32 *)place = encode_insn_immediate(imm_type, insn, imm);
+	*(u32 *)place = aarch64_insn_encode_immediate(imm_type, insn, imm);
 
 	/* Shift out the immediate field. */
 	sval >>= 16;
@@ -203,9 +116,9 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, void *place, u64 val,
 	 * For unsigned immediates, the overflow check is straightforward.
 	 * For signed immediates, the sign bit is actually the bit past the
 	 * most significant bit of the field.
-	 * The INSN_IMM_16 immediate type is unsigned.
+	 * The AARCH64_INSN_IMM_16 immediate type is unsigned.
 	 */
-	if (imm_type != INSN_IMM_16) {
+	if (imm_type != AARCH64_INSN_IMM_16) {
 		sval++;
 		limit++;
 	}
@@ -218,7 +131,7 @@ static int reloc_insn_movw(enum aarch64_reloc_op op, void *place, u64 val,
 }
 
 static int reloc_insn_imm(enum aarch64_reloc_op op, void *place, u64 val,
-			  int lsb, int len, enum aarch64_imm_type imm_type)
+			  int lsb, int len, enum aarch64_insn_imm_type imm_type)
 {
 	u64 imm, imm_mask;
 	s64 sval;
@@ -233,7 +146,7 @@ static int reloc_insn_imm(enum aarch64_reloc_op op, void *place, u64 val,
 	imm = sval & imm_mask;
 
 	/* Update the instruction's immediate field. */
-	*(u32 *)place = encode_insn_immediate(imm_type, insn, imm);
+	*(u32 *)place = aarch64_insn_encode_immediate(imm_type, insn, imm);
 
 	/*
 	 * Extract the upper value bits (including the sign bit) and
@@ -315,125 +228,125 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
 			overflow_check = false;
 		case R_AARCH64_MOVW_UABS_G0:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
-					      INSN_IMM_16);
+					      AARCH64_INSN_IMM_16);
 			break;
 		case R_AARCH64_MOVW_UABS_G1_NC:
 			overflow_check = false;
 		case R_AARCH64_MOVW_UABS_G1:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
-					      INSN_IMM_16);
+					      AARCH64_INSN_IMM_16);
 			break;
 		case R_AARCH64_MOVW_UABS_G2_NC:
 			overflow_check = false;
 		case R_AARCH64_MOVW_UABS_G2:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
-					      INSN_IMM_16);
+					      AARCH64_INSN_IMM_16);
 			break;
 		case R_AARCH64_MOVW_UABS_G3:
 			/* We're using the top bits so we can't overflow. */
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 48,
-					      INSN_IMM_16);
+					      AARCH64_INSN_IMM_16);
 			break;
 		case R_AARCH64_MOVW_SABS_G0:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 0,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_SABS_G1:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 16,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_SABS_G2:
 			ovf = reloc_insn_movw(RELOC_OP_ABS, loc, val, 32,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_PREL_G0_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
-					      INSN_IMM_MOVK);
+					      AARCH64_INSN_IMM_MOVK);
 			break;
 		case R_AARCH64_MOVW_PREL_G0:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 0,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_PREL_G1_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
-					      INSN_IMM_MOVK);
+					      AARCH64_INSN_IMM_MOVK);
 			break;
 		case R_AARCH64_MOVW_PREL_G1:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 16,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_PREL_G2_NC:
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
-					      INSN_IMM_MOVK);
+					      AARCH64_INSN_IMM_MOVK);
 			break;
 		case R_AARCH64_MOVW_PREL_G2:
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 32,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 		case R_AARCH64_MOVW_PREL_G3:
 			/* We're using the top bits so we can't overflow. */
 			overflow_check = false;
 			ovf = reloc_insn_movw(RELOC_OP_PREL, loc, val, 48,
-					      INSN_IMM_MOVNZ);
+					      AARCH64_INSN_IMM_MOVNZ);
 			break;
 
 		/* Immediate instruction relocations. */
 		case R_AARCH64_LD_PREL_LO19:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
-					     INSN_IMM_19);
+					     AARCH64_INSN_IMM_19);
 			break;
 		case R_AARCH64_ADR_PREL_LO21:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 0, 21,
-					     INSN_IMM_ADR);
+					     AARCH64_INSN_IMM_ADR);
 			break;
 		case R_AARCH64_ADR_PREL_PG_HI21_NC:
 			overflow_check = false;
 		case R_AARCH64_ADR_PREL_PG_HI21:
 			ovf = reloc_insn_imm(RELOC_OP_PAGE, loc, val, 12, 21,
-					     INSN_IMM_ADR);
+					     AARCH64_INSN_IMM_ADR);
 			break;
 		case R_AARCH64_ADD_ABS_LO12_NC:
 		case R_AARCH64_LDST8_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 0, 12,
-					     INSN_IMM_12);
+					     AARCH64_INSN_IMM_12);
 			break;
 		case R_AARCH64_LDST16_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 1, 11,
-					     INSN_IMM_12);
+					     AARCH64_INSN_IMM_12);
 			break;
 		case R_AARCH64_LDST32_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 2, 10,
-					     INSN_IMM_12);
+					     AARCH64_INSN_IMM_12);
 			break;
 		case R_AARCH64_LDST64_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 3, 9,
-					     INSN_IMM_12);
+					     AARCH64_INSN_IMM_12);
 			break;
 		case R_AARCH64_LDST128_ABS_LO12_NC:
 			overflow_check = false;
 			ovf = reloc_insn_imm(RELOC_OP_ABS, loc, val, 4, 8,
-					     INSN_IMM_12);
+					     AARCH64_INSN_IMM_12);
 			break;
 		case R_AARCH64_TSTBR14:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 14,
-					     INSN_IMM_14);
+					     AARCH64_INSN_IMM_14);
 			break;
 		case R_AARCH64_CONDBR19:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 19,
-					     INSN_IMM_19);
+					     AARCH64_INSN_IMM_19);
 			break;
 		case R_AARCH64_JUMP26:
 		case R_AARCH64_CALL26:
 			ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 2, 26,
-					     INSN_IMM_26);
+					     AARCH64_INSN_IMM_26);
 			break;
 
 		default:
-- 
1.8.1.2
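
[Editorial note, a worked example of the relocated helper, not part
of the patch. B carries a signed 26-bit word offset in bits [25:0],
so encoding offset 2 into the bare B opcode yields "b . + 8":]

	static u32 encode_example(void)
	{
		/* 0x14000000 | 2 == 0x14000002, i.e. "b . + 8" */
		return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_26,
						     0x14000000, 2);
	}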



* [PATCH v1 4/7] arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
                   ` (2 preceding siblings ...)
  2013-09-25 10:44 ` [PATCH v1 3/7] arm64: move encode_insn_immediate() from module.c to insn.c Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 5/7] arm64, jump label: detect %c support for ARM64 Jiang Liu
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel
  Cc: Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Introduce aarch64_insn_gen_{nop|branch_imm}() helper functions, which
will be used to implement jump label on ARM64.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 arch/arm64/include/asm/insn.h |  6 ++++++
 arch/arm64/kernel/insn.c      | 26 ++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 0c715c1..28470bc 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -61,6 +61,12 @@ __AARCH64_INSN_FUNCS(nop,	0xFFFFFFFF, 0xD503201F)
 enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
 u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 				  u32 insn, u64 imm);
+u32 aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr,
+				bool link);
+static __always_inline u32 aarch64_insn_gen_nop(void)
+{
+	return aarch64_insn_get_nop_value();
+}
 
 bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
 int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 0bae0d9..a96e3a1 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -14,6 +14,7 @@
  * You should have received a copy of the GNU General Public License
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
+#include <linux/bitops.h>
 #include <linux/kernel.h>
 #include <linux/stop_machine.h>
 #include <asm/cacheflush.h>
@@ -210,3 +211,28 @@ u32 aarch64_insn_encode_immediate(enum aarch64_insn_imm_type type,
 
 	return insn;
 }
+
+u32 aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr, bool link)
+{
+	u32 insn;
+	long offset;
+
+	/*
+	 * PC: A 64-bit Program Counter holding the address of the current
+	 * instruction.
+	 */
+	BUG_ON((pc & 0x3) || (addr & 0x3));
+	offset = ((long)addr - (long)pc) >> 2;
+	/* B/BL support a [-128M, 128M) offset */
+	if (offset >= BIT(25) || abs(offset) > BIT(25)) {
+		WARN_ON_ONCE(1);
+		return 0;
+	}
+
+	if (link)
+		insn = aarch64_insn_get_bl_value();
+	else
+		insn = aarch64_insn_get_b_value();
+
+	return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_26, insn, offset);
+}
-- 
1.8.1.2
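
[Editorial note, a hedged worked example; the pc value is made up but
word-aligned:]

	static void branch_gen_example(void)
	{
		unsigned long pc = 0xffffffc000080000UL; /* hypothetical */
		u32 b, bl;

		/* offset = 0x1000 >> 2 = 0x400 words */
		b  = aarch64_insn_gen_branch_imm(pc, pc + 0x1000, false);
		bl = aarch64_insn_gen_branch_imm(pc, pc + 0x1000, true);

		BUG_ON(b  != 0x14000400);	/* b  . + 0x1000 */
		BUG_ON(bl != 0x94000400);	/* bl . + 0x1000 */
	}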



* [PATCH v1 5/7] arm64, jump label: detect %c support for ARM64
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
                   ` (3 preceding siblings ...)
  2013-09-25 10:44 ` [PATCH v1 4/7] arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 6/7] arm64, jump label: optimize jump label implementation Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability Jiang Liu
  6 siblings, 0 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu, linux-kernel
  Cc: linux-arm-kernel, Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

As with commit a9468f30b5eac6 ("ARM: 7333/2: jump label: detect %c
support for ARM"), this patch detects the same thing for ARM64,
because some ARM64 GCC versions have the same issue.

Some versions of ARM64 GCC which do support asm goto do not support
the %c specifier. Since we need %c to support jump labels on ARM64,
detect that too in the asm goto detection script to avoid build
errors with these versions.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 scripts/gcc-goto.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/gcc-goto.sh b/scripts/gcc-goto.sh
index a2af2e8..c9469d3 100644
--- a/scripts/gcc-goto.sh
+++ b/scripts/gcc-goto.sh
@@ -5,7 +5,7 @@
 cat << "END" | $@ -x c - -c -o /dev/null >/dev/null 2>&1 && echo "y"
 int main(void)
 {
-#ifdef __arm__
+#if defined(__arm__) || defined(__aarch64__)
 	/*
 	 * Not related to asm goto, but used by jump label
 	 * and broken on some ARM GCC versions (see GCC Bug 48637).
-- 
1.8.1.2
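
[Editorial note: a minimal sketch of why %c matters for jump labels,
illustrative rather than the script's actual test program. With an
"i" operand, plain %0 is printed with immediate punctuation on some
targets (e.g. "#key"), which breaks inside data directives; %c0
prints the bare constant. The section name below is made up:]

	static int key;

	void example(void)
	{
		asm("nop\n\t"
		    ".pushsection .discard.example, \"aw\"\n\t"
		    ".quad %c0\n\t"		/* bare address of key */
		    ".popsection"
		    : : "i"(&key));
	}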



* [PATCH v1 6/7] arm64, jump label: optimize jump label implementation
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
                   ` (4 preceding siblings ...)
  2013-09-25 10:44 ` [PATCH v1 5/7] arm64, jump label: detect %c support for ARM64 Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 10:44 ` [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability Jiang Liu
  6 siblings, 0 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	Marc Zyngier, Arnd Bergmann, linux-arm-kernel, linux-kernel
  Cc: Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Optimize jump label implementation for ARM64 by dynamically patching
kernel text.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/jump_label.h | 52 ++++++++++++++++++++++++++++++++
 arch/arm64/kernel/Makefile          |  1 +
 arch/arm64/kernel/jump_label.c      | 60 +++++++++++++++++++++++++++++++++++++
 4 files changed, 114 insertions(+)
 create mode 100644 arch/arm64/include/asm/jump_label.h
 create mode 100644 arch/arm64/kernel/jump_label.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index c044548..da388e4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -17,6 +17,7 @@ config ARM64
 	select GENERIC_SMP_IDLE_THREAD
 	select GENERIC_TIME_VSYSCALL
 	select HARDIRQS_SW_RESEND
+	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DEBUG_BUGVERBOSE
 	select HAVE_DEBUG_KMEMLEAK
diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h
new file mode 100644
index 0000000..d268fab
--- /dev/null
+++ b/arch/arm64/include/asm/jump_label.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright (C) 2013 Huawei Ltd.
+ * Author: Jiang Liu <jiang.liu@huawei.com>
+ *
+ * Based on arch/arm/include/asm/jump_label.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef _ASM_ARM64_JUMP_LABEL_H
+#define _ASM_ARM64_JUMP_LABEL_H
+#include <linux/types.h>
+
+#ifdef __KERNEL__
+
+#define JUMP_LABEL_NOP_SIZE 4
+
+static __always_inline bool arch_static_branch(struct static_key *key)
+{
+	asm goto("1:\n\t"
+		 "nop\n\t"
+		 ".pushsection __jump_table,  \"aw\"\n\t"
+		 ".align 3\n\t"
+		 ".quad 1b, %l[l_yes], %c0\n\t"
+		 ".popsection\n\t"
+		 :  :  "i"(key) :  : l_yes);
+
+	return false;
+l_yes:
+	return true;
+}
+
+#endif /* __KERNEL__ */
+
+typedef u64 jump_label_t;
+
+struct jump_entry {
+	jump_label_t code;
+	jump_label_t target;
+	jump_label_t key;
+};
+
+#endif	/* _ASM_ARM64_JUMP_LABEL_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 9af6cb3..b7db65e 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o smp_psci.o
 arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
 arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
+arm64-obj-$(CONFIG_JUMP_LABEL)		+= jump_label.o
 
 obj-y					+= $(arm64-obj-y) vdso/
 obj-m					+= $(arm64-obj-m)
diff --git a/arch/arm64/kernel/jump_label.c b/arch/arm64/kernel/jump_label.c
new file mode 100644
index 0000000..6cbe2ed
--- /dev/null
+++ b/arch/arm64/kernel/jump_label.c
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2013 Huawei Ltd.
+ * Author: Jiang Liu <jiang.liu@huawei.com>
+ *
+ * Based on arch/arm/kernel/jump_label.c
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/kernel.h>
+#include <linux/jump_label.h>
+#include <asm/jump_label.h>
+#include <asm/insn.h>
+
+#ifdef HAVE_JUMP_LABEL
+
+static void __arch_jump_label_transform(struct jump_entry *entry,
+					enum jump_label_type type,
+					bool is_static)
+{
+	void *addr = (void *)entry->code;
+	u32 insn;
+
+	if (type == JUMP_LABEL_ENABLE) {
+		/* no way out if instruction offset is out of range (+/-128M) */
+		insn = aarch64_insn_gen_branch_imm(entry->code,
+						   entry->target, 0);
+		BUG_ON(!insn);
+	} else {
+		insn = aarch64_insn_gen_nop();
+	}
+
+	if (is_static)
+		__aarch64_insn_patch_text(addr, &insn, 1);
+	else
+		aarch64_insn_patch_text(addr, &insn, 1);
+}
+
+void arch_jump_label_transform(struct jump_entry *entry,
+			       enum jump_label_type type)
+{
+	__arch_jump_label_transform(entry, type, false);
+}
+
+void arch_jump_label_transform_static(struct jump_entry *entry,
+				      enum jump_label_type type)
+{
+	__arch_jump_label_transform(entry, type, true);
+}
+
+#endif	/* HAVE_JUMP_LABEL */
-- 
1.8.1.2
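
[Editorial note: an illustration of the two states the transform
above toggles each arch_static_branch() site between:]

	// key disabled (JUMP_LABEL_DISABLE): fall through
	1:	nop			// 0xd503201f

	// key enabled (JUMP_LABEL_ENABLE): jump to the unlikely block
	1:	b	l_yes		// from aarch64_insn_gen_branch_imm()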



* [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability
  2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
                   ` (5 preceding siblings ...)
  2013-09-25 10:44 ` [PATCH v1 6/7] arm64, jump label: optimize jump label implementation Jiang Liu
@ 2013-09-25 10:44 ` Jiang Liu
  2013-09-25 14:53   ` Steven Rostedt
  6 siblings, 1 reply; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 10:44 UTC (permalink / raw)
  To: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	Andrew Jones, Raghavendra K T, Konrad Rzeszutek Wilk,
	H. Peter Anvin, linux-kernel
  Cc: linux-arm-kernel, Jiang Liu

From: Jiang Liu <jiang.liu@huawei.com>

Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better
readability.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Jiang Liu <liuj97@gmail.com>
---
 include/linux/jump_label.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index a507907..b697fbd 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -116,7 +116,8 @@ extern void static_key_slow_dec(struct static_key *key);
 extern void jump_label_apply_nops(struct module *mod);
 
 #define STATIC_KEY_INIT_TRUE ((struct static_key) \
-	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
+	{ .enabled = ATOMIC_INIT(1), \
+	  .entries = (void *)JUMP_LABEL_TRUE_BRANCH })
 #define STATIC_KEY_INIT_FALSE ((struct static_key) \
 	{ .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
 
-- 
1.8.1.2
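
[Editorial note, a hedged sketch of the encoding the macro name makes
explicit: the low bit of the .entries pointer records the initial
branch direction, which is why the initializer stored a bare 1.
static_key_initially_true() is a made-up helper:]

	static inline bool static_key_initially_true(struct static_key *key)
	{
		return (unsigned long)key->entries & JUMP_LABEL_TRUE_BRANCH;
	}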



* Re: [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers
  2013-09-25 10:44 ` [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers Jiang Liu
@ 2013-09-25 12:58   ` Christopher Covington
  2013-09-25 15:50     ` Jiang Liu
  0 siblings, 1 reply; 22+ messages in thread
From: Christopher Covington @ 2013-09-25 12:58 UTC (permalink / raw)
  To: Jiang Liu, Jiang Liu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	Marc Zyngier, Arnd Bergmann, linux-arm-kernel, linux-kernel

Hi Jiang,

On 09/25/2013 06:44 AM, Jiang Liu wrote:

[...]

> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c

[...]

> +static int aarch64_insn_cls[] = {
> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
> +	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
> +	AARCH64_INSN_CLS_DP_FPSIMD,	/* 0 1 1 1 */
> +	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
> +	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
> +	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
> +	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
> +	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
> +	AARCH64_INSN_CLS_DP_FPSIMD,	/* 1 1 1 1 */
> +};

As I read this, I was initially puzzled as to why there are duplicate entries
in the list. Since the data structure doesn't really support don't-cares,
perhaps it would be clearer to use 1's and 0's instead of X's.
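
[Editorial note: spelled out, the suggestion would turn the row
comments into the literal index bits [28:25], e.g. for rows 4-7:]

	AARCH64_INSN_CLS_LDST,		/* 0 1 0 0 */
	AARCH64_INSN_CLS_DP_REG,	/* 0 1 0 1 */
	AARCH64_INSN_CLS_LDST,		/* 0 1 1 0 */
	AARCH64_INSN_CLS_DP_FPSIMD,	/* 0 1 1 1 */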

[...]

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.


* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 10:44 ` [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code Jiang Liu
@ 2013-09-25 14:35   ` Steven Rostedt
  2013-09-25 14:42     ` Sandeepa Prabhu
  2013-09-25 14:35   ` Sandeepa Prabhu
  1 sibling, 1 reply; 22+ messages in thread
From: Steven Rostedt @ 2013-09-25 14:35 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Catalin Marinas, Will Deacon, Jiang Liu, linux-arm-kernel, linux-kernel

On Wed, 25 Sep 2013 18:44:22 +0800
Jiang Liu <liuj97@gmail.com> wrote:
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 8541c3a..50facfc 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -15,6 +15,8 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  #include <linux/kernel.h>
> +#include <linux/stop_machine.h>
> +#include <asm/cacheflush.h>
>  #include <asm/insn.h>
>  
>  static int aarch64_insn_cls[] = {
> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>  	return __aarch64_insn_hotpatch_safe(old_insn) &&
>  	       __aarch64_insn_hotpatch_safe(new_insn);
>  }
> +
> +struct aarch64_insn_patch {
> +	void	*text_addr;
> +	u32	*new_insns;
> +	int	insn_cnt;
> +};
> +
> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +	int i;
> +	u32 *tp = addr;
> +
> +	/* instructions must be word aligned */
> +	if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +		return -EINVAL;

On aarch64, are instructions always word aligned? If not, it should be
safe for stop machine to modify non word aligned instructions, but this
patch looks like it doesn't allow stop_machine() to do so.

-- Steve

> +
> +	for (i = 0; i < cnt; i++)
> +		tp[i] = insns[i];
> +
> +	flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
> +
> +	return 0;
> +}
> +
> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> +{
> +	struct aarch64_insn_patch *pp = arg;
> +
> +	return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
> +					 pp->insn_cnt);
> +}
> +
> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +	int ret;
> +	bool safe = false;
> +
> +	/* instructions must be word aligned */
> +	if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +		return -EINVAL;
> +
> +	if (cnt == 1)
> +		safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
> +
> +	if (safe) {
> +		ret = __aarch64_insn_patch_text(addr, insns, cnt);
> +	} else {
> +		struct aarch64_insn_patch patch = {
> +			.text_addr = addr,
> +			.new_insns = insns,
> +			.insn_cnt = cnt,
> +		};
> +
> +		/*
> +		 * Execute __aarch64_insn_patch_text() on every online CPU,
> +		 * which ensures serialization among all online CPUs.
> +		 */
> +		ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
> +	}
> +
> +	return ret;
> +}



* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 10:44 ` [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code Jiang Liu
  2013-09-25 14:35   ` Steven Rostedt
@ 2013-09-25 14:35   ` Sandeepa Prabhu
  2013-09-25 15:54     ` Jiang Liu
  2013-09-27 15:41     ` Jiang Liu
  1 sibling, 2 replies; 22+ messages in thread
From: Sandeepa Prabhu @ 2013-09-25 14:35 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 5234 bytes --]

On 25 September 2013 16:14, Jiang Liu <liuj97@gmail.com> wrote:
> From: Jiang Liu <jiang.liu@huawei.com>
>
> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
> to patch kernel and module code.
>
> Function aarch64_insn_patch_text() is a heavyweight version which may
> use stop_machine() to serialize all online CPUs, and function
> __aarch64_insn_patch_text() is a lightweight version without explicit
> serialization.
Hi Jiang,

I have written kprobes support for aarch64 and need both pieces of
functionality (the lightweight and stop_machine() versions).
I would like to rebase these APIs in kprobes; however, slight changes
would be required for the stop_machine() version, as explained
below.
[Though kprobes cannot share the instruction encoding support of jump
labels, as decoding & simulation are quite different for
kprobes/uprobes and based around single-stepping.]

>
> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
> Cc: Jiang Liu <liuj97@gmail.com>
> ---
>  arch/arm64/include/asm/insn.h |  2 ++
>  arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 66 insertions(+)
>
> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
> index e7d1bc8..0ea7193 100644
> --- a/arch/arm64/include/asm/insn.h
> +++ b/arch/arm64/include/asm/insn.h
> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,     0xFFFFFFFF, 0xD503201F)
>  enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>
>  bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>
>  #endif /* _ASM_ARM64_INSN_H */
> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> index 8541c3a..50facfc 100644
> --- a/arch/arm64/kernel/insn.c
> +++ b/arch/arm64/kernel/insn.c
> @@ -15,6 +15,8 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  #include <linux/kernel.h>
> +#include <linux/stop_machine.h>
> +#include <asm/cacheflush.h>
>  #include <asm/insn.h>
>
>  static int aarch64_insn_cls[] = {
> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>         return __aarch64_insn_hotpatch_safe(old_insn) &&
>                __aarch64_insn_hotpatch_safe(new_insn);
>  }
> +
> +struct aarch64_insn_patch {
> +       void    *text_addr;
> +       u32     *new_insns;
> +       int     insn_cnt;
> +};
> +
> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +       int i;
> +       u32 *tp = addr;
> +
> +       /* instructions must be word aligned */
> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +               return -EINVAL;
> +
> +       for (i = 0; i < cnt; i++)
> +               tp[i] = insns[i];
> +
> +       flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
> +
> +       return 0;
> +}
Looks fine, but do you need to check for CPU big-endian mode here? (I
think swab32() is needed if EL1 is in big-endian mode.)
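
[Editorial note: a minimal sketch of the suggested check. A64
instructions are always little-endian in memory regardless of the
data endianness, so a big-endian kernel would store them swapped:]

	for (i = 0; i < cnt; i++)
		tp[i] = cpu_to_le32(insns[i]);	/* no-op on LE kernels */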

> +
> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
> +{
> +       struct aarch64_insn_patch *pp = arg;
> +
> +       return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
> +                                        pp->insn_cnt);
> +}
> +
> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
> +{
> +       int ret;
> +       bool safe = false;
> +
> +       /* instructions must be word aligned */
> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
> +               return -EINVAL;
> +
> +       if (cnt == 1)
> +               safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
> +
> +       if (safe) {
> +               ret = __aarch64_insn_patch_text(addr, insns, cnt);
> +       } else {

Can you move the code below this into a separate API that just
applies the patch with stop_machine()? And then add a wrapper function
for the jump-label-specific handling that checks
aarch64_insn_hotpatch_safe()?
Also, it would be good to move the patching code out of insn.c into
patch.c (refer to arch/arm/kernel/patch.c).

Please refer to the attached file (my current implementation) to get
a sense of exactly what kprobes would need (ignore the big-endian part
for now). I think splitting the code should be straightforward, and we
can avoid two different implementations. Please let me know if this
can be done; I will rebase my patches on top of your next version.

Thanks,
Sandeepa
> +               struct aarch64_insn_patch patch = {
> +                       .text_addr = addr,
> +                       .new_insns = insns,
> +                       .insn_cnt = cnt,
> +               };
> +
> +               /*
> +                * Execute __aarch64_insn_patch_text() on every online CPU,
> +                * which ensures serialization among all online CPUs.
> +                */
> +               ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
> +       }
> +
> +       return ret;
> +}
> --
> 1.8.1.2
>

[-- Attachment #2: patch_text.c --]
[-- Type: text/x-csrc, Size: 1969 bytes --]

/*
 * arch/arm64/kernel/patch_text.c
 *
 * Copyright (C) 2013 Linaro Limited.
 * Based on arch/arm/kernel/patch.c
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 */
#include <linux/kernel.h>
#include <linux/kprobes.h>
#include <linux/stop_machine.h>
#include <linux/swab.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>

#include "patch_text.h"

#define opcode_to_mem_le(x) (x)
#define opcode_to_mem_be(x) swab32(x)

/*
 * get cpu endian mode from SCTLR_EL1.EE
 * 0x1=Big Endian, 0x0=Little Endian.
 */
static inline unsigned int is_el1_big_endian(void)
{
	u32 sctlr;

	asm volatile ("mrs %0, sctlr_el1" : "=r" (sctlr));
	return (sctlr >> 25) & 0x1;
}

struct patch {
	void *addr;
	unsigned int insn;
};

/* Patch text - Supported for kernel in AArch64 mode only */
void __kprobes __patch_text(void *addr, unsigned int insn)
{
	int size = sizeof(u32);

	/* address 32-bit alignment check */
	if ((unsigned long)addr % size) {
		pr_warn("text patching failed @unaligned addr %p\n", addr);
		return;
	}

	if (is_el1_big_endian())
		*(u32 *) addr = opcode_to_mem_be(insn);
	else
		*(u32 *) addr = opcode_to_mem_le(insn);

	/* sync Icache, ISB is issued as part of this */
	flush_icache_range((uintptr_t) (addr), (uintptr_t) (addr) + size);
}

static int __kprobes patch_text_stop_machine(void *data)
{
	struct patch *patch = data;

	__patch_text(patch->addr, patch->insn);
	return 0;
}

void __kprobes patch_text(void *addr, unsigned int insn)
{
	struct patch patch = {
		.addr = addr,
		.insn = insn,
	};
	stop_machine(patch_text_stop_machine, &patch, cpu_online_mask);
}
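
[Editorial note: a hedged usage sketch of the attached API;
kprobe_addr is hypothetical:]

	static void __kprobes plant_breakpoint(void *kprobe_addr)
	{
		patch_text(kprobe_addr, 0xd4200000);	/* BRK #0 */
	}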


* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 14:35   ` Steven Rostedt
@ 2013-09-25 14:42     ` Sandeepa Prabhu
  2013-09-25 15:16       ` Steven Rostedt
  0 siblings, 1 reply; 22+ messages in thread
From: Sandeepa Prabhu @ 2013-09-25 14:42 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiang Liu, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On 25 September 2013 20:05, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 25 Sep 2013 18:44:22 +0800
> Jiang Liu <liuj97@gmail.com> wrote:
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 8541c3a..50facfc 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -15,6 +15,8 @@
>>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>   */
>>  #include <linux/kernel.h>
>> +#include <linux/stop_machine.h>
>> +#include <asm/cacheflush.h>
>>  #include <asm/insn.h>
>>
>>  static int aarch64_insn_cls[] = {
>> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>>       return __aarch64_insn_hotpatch_safe(old_insn) &&
>>              __aarch64_insn_hotpatch_safe(new_insn);
>>  }
>> +
>> +struct aarch64_insn_patch {
>> +     void    *text_addr;
>> +     u32     *new_insns;
>> +     int     insn_cnt;
>> +};
>> +
>> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +     int i;
>> +     u32 *tp = addr;
>> +
>> +     /* instructions must be word aligned */
>> +     if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +             return -EINVAL;
>
> On aarch64, are instructions always word aligned? If not, it should be
> safe for stop machine to modify non word aligned instructions, but this
> patch looks like it doesn't allow stop_machine() to do so.
Steve,

Yes, aarch64 instructions must be word-aligned, else the instruction
fetch would generate a misaligned PC fault.

Thanks,
Sandeepa
>
> -- Steve
>
>> +
>> +     for (i = 0; i < cnt; i++)
>> +             tp[i] = insns[i];
>> +
>> +     flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
>> +
>> +     return 0;
>> +}
>> +
>> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>> +{
>> +     struct aarch64_insn_patch *pp = arg;
>> +
>> +     return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
>> +                                      pp->insn_cnt);
>> +}
>> +
>> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +     int ret;
>> +     bool safe = false;
>> +
>> +     /* instructions must be word aligned */
>> +     if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +             return -EINVAL;
>> +
>> +     if (cnt == 1)
>> +             safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
>> +
>> +     if (safe) {
>> +             ret = __aarch64_insn_patch_text(addr, insns, cnt);
>> +     } else {
>> +             struct aarch64_insn_patch patch = {
>> +                     .text_addr = addr,
>> +                     .new_insns = insns,
>> +                     .insn_cnt = cnt,
>> +             };
>> +
>> +             /*
>> +              * Execute __aarch64_insn_patch_text() on every online CPU,
>> +              * which ensures serialization among all online CPUs.
>> +              */
>> +             ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
>> +     }
>> +
>> +     return ret;
>> +}
>


* Re: [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability
  2013-09-25 10:44 ` [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability Jiang Liu
@ 2013-09-25 14:53   ` Steven Rostedt
  2013-09-25 15:47     ` Jiang Liu
  0 siblings, 1 reply; 22+ messages in thread
From: Steven Rostedt @ 2013-09-25 14:53 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Catalin Marinas, Will Deacon, Jiang Liu, Andrew Jones,
	Raghavendra K T, Konrad Rzeszutek Wilk, H. Peter Anvin,
	linux-kernel, linux-arm-kernel

On Wed, 25 Sep 2013 18:44:27 +0800
Jiang Liu <liuj97@gmail.com> wrote:

> From: Jiang Liu <jiang.liu@huawei.com>
> 
> Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better
> readability.
> 
> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
> Cc: Jiang Liu <liuj97@gmail.com>
> ---
>  include/linux/jump_label.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
> index a507907..b697fbd 100644
> --- a/include/linux/jump_label.h
> +++ b/include/linux/jump_label.h
> @@ -116,7 +116,8 @@ extern void static_key_slow_dec(struct static_key *key);
>  extern void jump_label_apply_nops(struct module *mod);
>  
>  #define STATIC_KEY_INIT_TRUE ((struct static_key) \
> -	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
> +	{ .enabled = ATOMIC_INIT(1), \
> +	  .entries = (void *)JUMP_LABEL_TRUE_BRANCH })
>  #define STATIC_KEY_INIT_FALSE ((struct static_key) \
>  	{ .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
>  

Probably should add a comment that there is no
"JUMP_LABEL_FALSE_BRANCH" or maybe we should add one?

-- Steve



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 14:42     ` Sandeepa Prabhu
@ 2013-09-25 15:16       ` Steven Rostedt
  2013-09-26  2:47         ` Sandeepa Prabhu
  0 siblings, 1 reply; 22+ messages in thread
From: Steven Rostedt @ 2013-09-25 15:16 UTC (permalink / raw)
  To: Sandeepa Prabhu
  Cc: Jiang Liu, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On Wed, 25 Sep 2013 20:12:17 +0530
Sandeepa Prabhu <sandeepa.prabhu@linaro.org> wrote:


> > On aarch64, are instructions always word aligned? If not, it should be
> > safe for stop_machine() to modify non-word-aligned instructions, but this
> > patch looks like it doesn't allow stop_machine() to do so.
> Steve,
> 
>> Yes, aarch64 instructions must be word-aligned; otherwise instruction
>> fetch would generate a Misaligned PC fault.
> 

Thanks for clarifying; IIUC, there are ARM architectures that allow
for 2- and 4-byte instructions.

-- Steve

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability
  2013-09-25 14:53   ` Steven Rostedt
@ 2013-09-25 15:47     ` Jiang Liu
  2013-09-25 16:30       ` Steven Rostedt
  0 siblings, 1 reply; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 15:47 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Catalin Marinas, Will Deacon, Jiang Liu, Andrew Jones,
	Raghavendra K T, Konrad Rzeszutek Wilk, H. Peter Anvin,
	linux-kernel, linux-arm-kernel

On 09/25/2013 10:53 PM, Steven Rostedt wrote:
> On Wed, 25 Sep 2013 18:44:27 +0800
> Jiang Liu <liuj97@gmail.com> wrote:
> 
>> From: Jiang Liu <jiang.liu@huawei.com>
>>
>> Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better
>> readability.
>>
>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>> Cc: Jiang Liu <liuj97@gmail.com>
>> ---
>>  include/linux/jump_label.h | 3 ++-
>>  1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
>> index a507907..b697fbd 100644
>> --- a/include/linux/jump_label.h
>> +++ b/include/linux/jump_label.h
>> @@ -116,7 +116,8 @@ extern void static_key_slow_dec(struct static_key *key);
>>  extern void jump_label_apply_nops(struct module *mod);
>>  
>>  #define STATIC_KEY_INIT_TRUE ((struct static_key) \
>> -	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
>> +	{ .enabled = ATOMIC_INIT(1), \
>> +	  .entries = (void *)JUMP_LABEL_TRUE_BRANCH })
>>  #define STATIC_KEY_INIT_FALSE ((struct static_key) \
>>  	{ .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
>>  
> 
> Probably should add a comment that there is no
> "JUMP_LABEL_FALSE_BRANCH" or maybe we should add one?
Hi Steven,
	How about a
JUMP_LABEL_TYPE_MASK/JUMP_LABEL_TRUE_BRANCH/JUMP_LABEL_FALSE_BRANCH
tuple? That should give the best readability.
Thanks!

> 
> -- Steve
> 
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers
  2013-09-25 12:58   ` Christopher Covington
@ 2013-09-25 15:50     ` Jiang Liu
  0 siblings, 0 replies; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 15:50 UTC (permalink / raw)
  To: Christopher Covington
  Cc: Jiang Liu, Steven Rostedt, Catalin Marinas, Will Deacon,
	Marc Zyngier, Arnd Bergmann, linux-arm-kernel, linux-kernel

On 09/25/2013 08:58 PM, Christopher Covington wrote:
> Hi Jiang,
> 
> On 09/25/2013 06:44 AM, Jiang Liu wrote:
> 
> [...]
> 
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
> 
> [...]
> 
>> +static int aarch64_insn_cls[] = {
>> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
>> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
>> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
>> +	AARCH64_INSN_CLS_UNKNOWN,	/* 0 0 X X */
>> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
>> +	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
>> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
>> +	AARCH64_INSN_CLS_DP_FPSIMD,	/* 0 1 1 1 */
>> +	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
>> +	AARCH64_INSN_CLS_DP_IMM,	/* 1 0 0 X */
>> +	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
>> +	AARCH64_INSN_CLS_BR_SYS,	/* 1 0 1 X */
>> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
>> +	AARCH64_INSN_CLS_DP_REG,	/* X 1 0 1 */
>> +	AARCH64_INSN_CLS_LDST,		/* X 1 X 0 */
>> +	AARCH64_INSN_CLS_DP_FPSIMD,	/* 1 1 1 1 */
>> +};
> 
> As I read this, I was initially puzzled as to why there are duplicate entries
> in the list. Since the data structure doesn't really support don't-cares,
> perhaps it would be clearer to use 1's and 0's instead of X's.
Thanks for the review, Chris!
I will try to improve the comments in the next version.
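
For reference, a minimal sketch of how such a class table is typically
consumed; indexing by instruction bits [28:25] is an assumption about
this decoder, not something the quoted hunk confirms:

        enum aarch64_insn_class aarch64_get_insn_class(u32 insn)
        {
                /* The op0 field, bits [28:25], selects one of the 16
                 * table entries above. */
                return aarch64_insn_cls[(insn >> 25) & 0xf];
        }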

> 
> [...]
> 
> Thanks,
> Christopher
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 14:35   ` Sandeepa Prabhu
@ 2013-09-25 15:54     ` Jiang Liu
  2013-09-26  2:51       ` Sandeepa Prabhu
  2013-09-27 15:41     ` Jiang Liu
  1 sibling, 1 reply; 22+ messages in thread
From: Jiang Liu @ 2013-09-25 15:54 UTC (permalink / raw)
  To: Sandeepa Prabhu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

Hi Sandeepa,
	Great to know that you are working on kprobes for ARM64.
I did some investigation, found it was a huge amount of work for me,
and gave up :(
	I will try to refine the implementation based on your requirements.
	Thanks!

On 09/25/2013 10:35 PM, Sandeepa Prabhu wrote:
> On 25 September 2013 16:14, Jiang Liu <liuj97@gmail.com> wrote:
>> From: Jiang Liu <jiang.liu@huawei.com>
>>
>> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
>> to patch kernel and module code.
>>
>> Function aarch64_insn_patch_text() is a heavy version which may use
>> stop_machine() to serialize all online CPUs, and function
>> __aarch64_insn_patch_text() is a light version without explicit
>> serialization.
> Hi Jiang,
> 
> I have written kprobes support for aarch64, and need both pieces of
> functionality (the lightweight and the stop_machine() versions).
> I would like to rebase kprobes on these APIs; however, slight changes
> would be required in the stop_machine() version, which I explain
> below.
> [Though kprobes cannot share the instruction encoding support of jump
> labels, as decoding & simulation are quite different for
> kprobes/uprobes and based around single stepping.]
> 
>>
>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>> Cc: Jiang Liu <liuj97@gmail.com>
>> ---
>>  arch/arm64/include/asm/insn.h |  2 ++
>>  arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 66 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>> index e7d1bc8..0ea7193 100644
>> --- a/arch/arm64/include/asm/insn.h
>> +++ b/arch/arm64/include/asm/insn.h
>> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,     0xFFFFFFFF, 0xD503201F)
>>  enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>>
>>  bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
>> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>
>>  #endif /* _ASM_ARM64_INSN_H */
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 8541c3a..50facfc 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -15,6 +15,8 @@
>>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>   */
>>  #include <linux/kernel.h>
>> +#include <linux/stop_machine.h>
>> +#include <asm/cacheflush.h>
>>  #include <asm/insn.h>
>>
>>  static int aarch64_insn_cls[] = {
>> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>>         return __aarch64_insn_hotpatch_safe(old_insn) &&
>>                __aarch64_insn_hotpatch_safe(new_insn);
>>  }
>> +
>> +struct aarch64_insn_patch {
>> +       void    *text_addr;
>> +       u32     *new_insns;
>> +       int     insn_cnt;
>> +};
>> +
>> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +       int i;
>> +       u32 *tp = addr;
>> +
>> +       /* instructions must be word aligned */
>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +               return -EINVAL;
>> +
>> +       for (i = 0; i < cnt; i++)
>> +               tp[i] = insns[i];
>> +
>> +       flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
>> +
>> +       return 0;
>> +}
> Looks fine, but do you need to check for CPU big-endian mode here? (I
> think swab32() is needed if EL1 is in big-endian mode.)
> 
>> +
>> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>> +{
>> +       struct aarch64_insn_patch *pp = arg;
>> +
>> +       return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
>> +                                        pp->insn_cnt);
>> +}
>> +
>> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +       int ret;
>> +       bool safe = false;
>> +
>> +       /* instructions must be word aligned */
>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +               return -EINVAL;
>> +
>> +       if (cnt == 1)
>> +               safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
>> +
>> +       if (safe) {
>> +               ret = __aarch64_insn_patch_text(addr, insns, cnt);
>> +       } else {
> 
> Can you move the code below this into a separate API that just
> applies the patch with stop_machine()? And then add a wrapper
> function for jump-label-specific handling that checks
> aarch64_insn_hotpatch_safe()?
> Also, it would be good to move the patching code out of insn.c into
> patch.c (refer to arch/arm/kernel/patch.c).
> 
> Please refer to the attached file (my current implementation) to make
> sense of exactly what kprobes would need (ignore the big-endian part
> for now). I think splitting the code should be straightforward, and
> we can avoid two different implementations. Please let me know if
> this can be done; I will rebase my patches on top of your next
> version.
> 
> Thanks,
> Sandeepa
>> +               struct aarch64_insn_patch patch = {
>> +                       .text_addr = addr,
>> +                       .new_insns = insns,
>> +                       .insn_cnt = cnt,
>> +               };
>> +
>> +               /*
>> +                * Execute __aarch64_insn_patch_text() on every online CPU,
>> +                * which ensures serialization among all online CPUs.
>> +                */
>> +               ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
>> +       }
>> +
>> +       return ret;
>> +}
>> --
>> 1.8.1.2
>>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability
  2013-09-25 15:47     ` Jiang Liu
@ 2013-09-25 16:30       ` Steven Rostedt
  0 siblings, 0 replies; 22+ messages in thread
From: Steven Rostedt @ 2013-09-25 16:30 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Catalin Marinas, Will Deacon, Jiang Liu, Andrew Jones,
	Raghavendra K T, Konrad Rzeszutek Wilk, H. Peter Anvin,
	linux-kernel, linux-arm-kernel, Jason Baron


[ Added Jason to Cc. ]

On Wed, 25 Sep 2013 23:47:19 +0800
Jiang Liu <liuj97@gmail.com> wrote:

> On 09/25/2013 10:53 PM, Steven Rostedt wrote:
> > On Wed, 25 Sep 2013 18:44:27 +0800
> > Jiang Liu <liuj97@gmail.com> wrote:
> > 
> >> From: Jiang Liu <jiang.liu@huawei.com>
> >>
> >> Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better
> >> readability.
> >>
> >> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
> >> Cc: Jiang Liu <liuj97@gmail.com>
> >> ---
> >>  include/linux/jump_label.h | 3 ++-
> >>  1 file changed, 2 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
> >> index a507907..b697fbd 100644
> >> --- a/include/linux/jump_label.h
> >> +++ b/include/linux/jump_label.h
> >> @@ -116,7 +116,8 @@ extern void static_key_slow_dec(struct static_key *key);
> >>  extern void jump_label_apply_nops(struct module *mod);
> >>  
> >>  #define STATIC_KEY_INIT_TRUE ((struct static_key) \
> >> -	{ .enabled = ATOMIC_INIT(1), .entries = (void *)1 })
> >> +	{ .enabled = ATOMIC_INIT(1), \
> >> +	  .entries = (void *)JUMP_LABEL_TRUE_BRANCH })
> >>  #define STATIC_KEY_INIT_FALSE ((struct static_key) \
> >>  	{ .enabled = ATOMIC_INIT(0), .entries = (void *)0 })
> >>  
> > 
> > Probably should add a comment that there is no
> > "JUMP_LABEL_FALSE_BRANCH" or maybe we should add one?
> Hi Steven,
> 	How about a
> JUMP_LABEL_TYPE_MASK/JUMP_LABEL_TRUE_BRANCH/JUMP_LABEL_FALSE_BRANCH
> tuple? That should give the best readability.
> Thanks!
> 

You mean a mask of ~1, with TRUE being 1 and FALSE being 0.

That may work.
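
A minimal sketch of that tuple, with hypothetical names and values
(TRUE as 1, FALSE as 0, mask of ~1, as described above); the final
form would be settled in a later revision:

        #define JUMP_LABEL_FALSE_BRANCH 0UL
        #define JUMP_LABEL_TRUE_BRANCH  1UL
        #define JUMP_LABEL_TYPE_MASK    1UL

        /* The low bit of .entries encodes the branch type; mask it
         * off to recover the real jump_entry pointer. */
        static inline struct jump_entry *jump_label_entries(struct static_key *key)
        {
                return (struct jump_entry *)((unsigned long)key->entries
                                             & ~JUMP_LABEL_TYPE_MASK);
        }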

-- Steve

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 15:16       ` Steven Rostedt
@ 2013-09-26  2:47         ` Sandeepa Prabhu
  0 siblings, 0 replies; 22+ messages in thread
From: Sandeepa Prabhu @ 2013-09-26  2:47 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Jiang Liu, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On 25 September 2013 20:46, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Wed, 25 Sep 2013 20:12:17 +0530
> Sandeepa Prabhu <sandeepa.prabhu@linaro.org> wrote:
>
>
>> > On aarch64, are instructions always word aligned? If not, it should be
>> > safe for stop_machine() to modify non-word-aligned instructions, but this
>> > patch looks like it doesn't allow stop_machine() to do so.
>> Steve,
>>
>> Yes, aarch64 instructions must be word-aligned; otherwise instruction
>> fetch would generate a Misaligned PC fault.
>>
>
> Thanks for clarifying; IIUC, there are ARM architectures that allow
> for 2- and 4-byte instructions.
Yes, 32-bit ARM supports both 32-bit and 16-bit alignment, depending
on whether it is running in ARM or Thumb mode, whereas AArch64 (in
arch/arm64/) always uses 32-bit instructions, and the PC needs to be
aligned to a 32-bit address.

Thanks,
Sandeepa
>
> -- Steve

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 15:54     ` Jiang Liu
@ 2013-09-26  2:51       ` Sandeepa Prabhu
  0 siblings, 0 replies; 22+ messages in thread
From: Sandeepa Prabhu @ 2013-09-26  2:51 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On 25 September 2013 21:24, Jiang Liu <liuj97@gmail.com> wrote:
> Hi Sandeepa,
>         Great to know that you are working on kprobes for ARM64.
> I did some investigation, found it was a huge amount of work for me,
> and gave up :(
>         I will try to refine the implementation based on your requirements.
>         Thanks!
Hi Jiang,
Thanks. Please CC me when you post the next version of this patch;
then I can rebase my code and verify it.

Thanks,
Sandeepa

>
> On 09/25/2013 10:35 PM, Sandeepa Prabhu wrote:
>> On 25 September 2013 16:14, Jiang Liu <liuj97@gmail.com> wrote:
>>> From: Jiang Liu <jiang.liu@huawei.com>
>>>
>>> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
>>> to patch kernel and module code.
>>>
>>> Function aarch64_insn_patch_text() is a heavy version which may use
>>> stop_machine() to serialize all online CPUs, and function
>>> __aarch64_insn_patch_text() is a light version without explicit
>>> serialization.
>> Hi Jiang,
>>
>> I have written kprobes support for aarch64, and need both pieces of
>> functionality (the lightweight and the stop_machine() versions).
>> I would like to rebase kprobes on these APIs; however, slight changes
>> would be required in the stop_machine() version, which I explain
>> below.
>> [Though kprobes cannot share the instruction encoding support of jump
>> labels, as decoding & simulation are quite different for
>> kprobes/uprobes and based around single stepping.]
>>
>>>
>>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>>> Cc: Jiang Liu <liuj97@gmail.com>
>>> ---
>>>  arch/arm64/include/asm/insn.h |  2 ++
>>>  arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 66 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>>> index e7d1bc8..0ea7193 100644
>>> --- a/arch/arm64/include/asm/insn.h
>>> +++ b/arch/arm64/include/asm/insn.h
>>> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,     0xFFFFFFFF, 0xD503201F)
>>>  enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>>>
>>>  bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
>>> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>>
>>>  #endif /* _ASM_ARM64_INSN_H */
>>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>>> index 8541c3a..50facfc 100644
>>> --- a/arch/arm64/kernel/insn.c
>>> +++ b/arch/arm64/kernel/insn.c
>>> @@ -15,6 +15,8 @@
>>>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>>   */
>>>  #include <linux/kernel.h>
>>> +#include <linux/stop_machine.h>
>>> +#include <asm/cacheflush.h>
>>>  #include <asm/insn.h>
>>>
>>>  static int aarch64_insn_cls[] = {
>>> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>>>         return __aarch64_insn_hotpatch_safe(old_insn) &&
>>>                __aarch64_insn_hotpatch_safe(new_insn);
>>>  }
>>> +
>>> +struct aarch64_insn_patch {
>>> +       void    *text_addr;
>>> +       u32     *new_insns;
>>> +       int     insn_cnt;
>>> +};
>>> +
>>> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>>> +{
>>> +       int i;
>>> +       u32 *tp = addr;
>>> +
>>> +       /* instructions must be word aligned */
>>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>>> +               return -EINVAL;
>>> +
>>> +       for (i = 0; i < cnt; i++)
>>> +               tp[i] = insns[i];
>>> +
>>> +       flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
>>> +
>>> +       return 0;
>>> +}
>> Looks fine, but do you need to check for CPU big-endian mode here? (I
>> think swab32() is needed if EL1 is in big-endian mode.)
>>
>>> +
>>> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>>> +{
>>> +       struct aarch64_insn_patch *pp = arg;
>>> +
>>> +       return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
>>> +                                        pp->insn_cnt);
>>> +}
>>> +
>>> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>>> +{
>>> +       int ret;
>>> +       bool safe = false;
>>> +
>>> +       /* instructions must be word aligned */
>>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>>> +               return -EINVAL;
>>> +
>>> +       if (cnt == 1)
>>> +               safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
>>> +
>>> +       if (safe) {
>>> +               ret = __aarch64_insn_patch_text(addr, insns, cnt);
>>> +       } else {
>>
>> Can you move the code below this into a separate API that just
>> applies the patch with stop_machine()? And then add a wrapper
>> function for jump-label-specific handling that checks
>> aarch64_insn_hotpatch_safe()?
>> Also, it would be good to move the patching code out of insn.c into
>> patch.c (refer to arch/arm/kernel/patch.c).
>>
>> Please refer to the attached file (my current implementation) to make
>> sense of exactly what kprobes would need (ignore the big-endian part
>> for now). I think splitting the code should be straightforward, and
>> we can avoid two different implementations. Please let me know if
>> this can be done; I will rebase my patches on top of your next
>> version.
>>
>> Thanks,
>> Sandeepa
>>> +               struct aarch64_insn_patch patch = {
>>> +                       .text_addr = addr,
>>> +                       .new_insns = insns,
>>> +                       .insn_cnt = cnt,
>>> +               };
>>> +
>>> +               /*
>>> +                * Execute __aarch64_insn_patch_text() on every online CPU,
>>> +                * which ensures serialization among all online CPUs.
>>> +                */
>>> +               ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
>>> +       }
>>> +
>>> +       return ret;
>>> +}
>>> --
>>> 1.8.1.2
>>>
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-25 14:35   ` Sandeepa Prabhu
  2013-09-25 15:54     ` Jiang Liu
@ 2013-09-27 15:41     ` Jiang Liu
  2013-09-28  1:22       ` Sandeepa Prabhu
  1 sibling, 1 reply; 22+ messages in thread
From: Jiang Liu @ 2013-09-27 15:41 UTC (permalink / raw)
  To: Sandeepa Prabhu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On 09/25/2013 10:35 PM, Sandeepa Prabhu wrote:
> On 25 September 2013 16:14, Jiang Liu <liuj97@gmail.com> wrote:
>> From: Jiang Liu <jiang.liu@huawei.com>
>>
>> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
>> to patch kernel and module code.
>>
>> Function aarch64_insn_patch_text() is a heavy version which may use
>> stop_machine() to serialize all online CPUs, and function
>> __aarch64_insn_patch_text() is a light version without explicit
>> serialization.
> Hi Jiang,
> 
> I have written kprobes support for aarch64, and need both pieces of
> functionality (the lightweight and the stop_machine() versions).
> I would like to rebase kprobes on these APIs; however, slight changes
> would be required in the stop_machine() version, which I explain
> below.
> [Though kprobes cannot share the instruction encoding support of jump
> labels, as decoding & simulation are quite different for
> kprobes/uprobes and based around single stepping.]
> 
>>
>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>> Cc: Jiang Liu <liuj97@gmail.com>
>> ---
>>  arch/arm64/include/asm/insn.h |  2 ++
>>  arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 66 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>> index e7d1bc8..0ea7193 100644
>> --- a/arch/arm64/include/asm/insn.h
>> +++ b/arch/arm64/include/asm/insn.h
>> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,     0xFFFFFFFF, 0xD503201F)
>>  enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>>
>>  bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
>> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>
>>  #endif /* _ASM_ARM64_INSN_H */
>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>> index 8541c3a..50facfc 100644
>> --- a/arch/arm64/kernel/insn.c
>> +++ b/arch/arm64/kernel/insn.c
>> @@ -15,6 +15,8 @@
>>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>   */
>>  #include <linux/kernel.h>
>> +#include <linux/stop_machine.h>
>> +#include <asm/cacheflush.h>
>>  #include <asm/insn.h>
>>
>>  static int aarch64_insn_cls[] = {
>> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>>         return __aarch64_insn_hotpatch_safe(old_insn) &&
>>                __aarch64_insn_hotpatch_safe(new_insn);
>>  }
>> +
>> +struct aarch64_insn_patch {
>> +       void    *text_addr;
>> +       u32     *new_insns;
>> +       int     insn_cnt;
>> +};
>> +
>> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +       int i;
>> +       u32 *tp = addr;
>> +
>> +       /* instructions must be word aligned */
>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +               return -EINVAL;
>> +
>> +       for (i = 0; i < cnt; i++)
>> +               tp[i] = insns[i];
>> +
>> +       flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
>> +
>> +       return 0;
>> +}
> Looks fine, but do you need to check for CPU big-endian mode here? (I
> think swab32() is needed if EL1 is in big-endian mode.)
Hi Sandeepa,
	Thanks for the reminder; we do need to take care of the data access
endianness issue here. I will fix it in the next version.
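
A sketch of the fix being discussed, under the assumption that A64
instructions are always stored little-endian in memory, so a
big-endian kernel must byte-swap on both read and write:

        #include <asm/byteorder.h>

        static u32 insn_read(void *addr)
        {
                return le32_to_cpu(*(__le32 *)addr);    /* no-op on little-endian */
        }

        static void insn_write(void *addr, u32 insn)
        {
                *(__le32 *)addr = cpu_to_le32(insn);    /* swab32() on big-endian */
        }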

> 
>> +
>> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>> +{
>> +       struct aarch64_insn_patch *pp = arg;
>> +
>> +       return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
>> +                                        pp->insn_cnt);
>> +}
>> +
>> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>> +{
>> +       int ret;
>> +       bool safe = false;
>> +
>> +       /* instructions must be word aligned */
>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>> +               return -EINVAL;
>> +
>> +       if (cnt == 1)
>> +               safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
>> +
>> +       if (safe) {
>> +               ret = __aarch64_insn_patch_text(addr, insns, cnt);
>> +       } else {
> 
> Can you move the code below this into a separate API that just
> applies the patch with stop_machine()? And then add a wrapper
> function for jump-label-specific handling that checks
> aarch64_insn_hotpatch_safe()?
> Also, it would be good to move the patching code out of insn.c into
> patch.c (refer to arch/arm/kernel/patch.c).
> 
> Please refer to the attached file (my current implementation) to make
> sense of exactly what kprobes would need (ignore the big-endian part
> for now). I think splitting the code should be straightforward, and
> we can avoid two different implementations. Please let me know if
> this can be done; I will rebase my patches on top of your next
> version.

After reading the attached file, I feel the current implementation of
aarch64_insn_patch_text() should satisfy kprobes' requirements too.

The extra optimization of aarch64_insn_hotpatch_safe() should work for
kprobes too. For example, if we install a kprobe on a function call
instruction (BL), we need to replace "BL offset1" with "BRK" or "BL
offset2". In such cases, we could patch the code directly without
stop_machine(). Is there any special requirement that kprobes must use
stop_machine() here?
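
A hedged sketch of the hotpatch-safe idea referenced above: per the
ARMv8 concurrent-modification rules, only a small set of instructions
(B, BL, BRK, NOP, ...) may be rewritten while other CPUs may be
executing them. The aarch64_insn_is_*() helpers are assumed to come
from the __AARCH64_INSN_FUNCS() entries in the quoted insn.h:

        static bool insn_is_concurrent_mod_safe(u32 insn)
        {
                return aarch64_insn_is_b(insn)   ||
                       aarch64_insn_is_bl(insn)  ||
                       aarch64_insn_is_brk(insn) ||
                       aarch64_insn_is_nop(insn);
        }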

Thanks!
Gerry

> 
> Thanks,
> Sandeepa
>> +               struct aarch64_insn_patch patch = {
>> +                       .text_addr = addr,
>> +                       .new_insns = insns,
>> +                       .insn_cnt = cnt,
>> +               };
>> +
>> +               /*
>> +                * which ensures serialization among all online CPUs.
>> +                * which ensure serialization among all online CPUs.
>> +                */
>> +               ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
>> +       }
>> +
>> +       return ret;
>> +}
>> --
>> 1.8.1.2
>>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code
  2013-09-27 15:41     ` Jiang Liu
@ 2013-09-28  1:22       ` Sandeepa Prabhu
  0 siblings, 0 replies; 22+ messages in thread
From: Sandeepa Prabhu @ 2013-09-28  1:22 UTC (permalink / raw)
  To: Jiang Liu
  Cc: Steven Rostedt, Catalin Marinas, Will Deacon, Jiang Liu,
	linux-arm-kernel, linux-kernel

On 27 September 2013 21:11, Jiang Liu <liuj97@gmail.com> wrote:
> On 09/25/2013 10:35 PM, Sandeepa Prabhu wrote:
>> On 25 September 2013 16:14, Jiang Liu <liuj97@gmail.com> wrote:
>>> From: Jiang Liu <jiang.liu@huawei.com>
>>>
>>> Introduce aarch64_insn_patch_text() and __aarch64_insn_patch_text()
>>> to patch kernel and module code.
>>>
>>> Function aarch64_insn_patch_text() is a heavy version which may use
>>> stop_machine() to serialize all online CPUs, and function
>>> __aarch64_insn_patch_text() is a light version without explicit
>>> serialization.
>> Hi Jiang,
>>
>> I have written kprobes support for aarch64, and need both pieces of
>> functionality (the lightweight and the stop_machine() versions).
>> I would like to rebase kprobes on these APIs; however, slight changes
>> would be required in the stop_machine() version, which I explain
>> below.
>> [Though kprobes cannot share the instruction encoding support of jump
>> labels, as decoding & simulation are quite different for
>> kprobes/uprobes and based around single stepping.]
>>
>>>
>>> Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
>>> Cc: Jiang Liu <liuj97@gmail.com>
>>> ---
>>>  arch/arm64/include/asm/insn.h |  2 ++
>>>  arch/arm64/kernel/insn.c      | 64 +++++++++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 66 insertions(+)
>>>
>>> diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
>>> index e7d1bc8..0ea7193 100644
>>> --- a/arch/arm64/include/asm/insn.h
>>> +++ b/arch/arm64/include/asm/insn.h
>>> @@ -49,5 +49,7 @@ __AARCH64_INSN_FUNCS(nop,     0xFFFFFFFF, 0xD503201F)
>>>  enum aarch64_insn_class aarch64_get_insn_class(u32 insn);
>>>
>>>  bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn);
>>> +int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>> +int __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt);
>>>
>>>  #endif /* _ASM_ARM64_INSN_H */
>>> diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
>>> index 8541c3a..50facfc 100644
>>> --- a/arch/arm64/kernel/insn.c
>>> +++ b/arch/arm64/kernel/insn.c
>>> @@ -15,6 +15,8 @@
>>>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>>   */
>>>  #include <linux/kernel.h>
>>> +#include <linux/stop_machine.h>
>>> +#include <asm/cacheflush.h>
>>>  #include <asm/insn.h>
>>>
>>>  static int aarch64_insn_cls[] = {
>>> @@ -69,3 +71,65 @@ bool __kprobes aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn)
>>>         return __aarch64_insn_hotpatch_safe(old_insn) &&
>>>                __aarch64_insn_hotpatch_safe(new_insn);
>>>  }
>>> +
>>> +struct aarch64_insn_patch {
>>> +       void    *text_addr;
>>> +       u32     *new_insns;
>>> +       int     insn_cnt;
>>> +};
>>> +
>>> +int __kprobes __aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>>> +{
>>> +       int i;
>>> +       u32 *tp = addr;
>>> +
>>> +       /* instructions must be word aligned */
>>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>>> +               return -EINVAL;
>>> +
>>> +       for (i = 0; i < cnt; i++)
>>> +               tp[i] = insns[i];
>>> +
>>> +       flush_icache_range((uintptr_t)tp, (uintptr_t)tp + cnt * sizeof(u32));
>>> +
>>> +       return 0;
>>> +}
>> Looks fine, but do you need to check for CPU big-endian mode here? (I
>> think swab32() is needed if EL1 is in big-endian mode.)
> Hi Sandeepa,
>         Thanks for the reminder; we do need to take care of the data access
> endianness issue here. I will fix it in the next version.
>
>>
>>> +
>>> +static int __kprobes aarch64_insn_patch_text_cb(void *arg)
>>> +{
>>> +       struct aarch64_insn_patch *pp = arg;
>>> +
>>> +       return __aarch64_insn_patch_text(pp->text_addr, pp->new_insns,
>>> +                                        pp->insn_cnt);
>>> +}
>>> +
>>> +int __kprobes aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
>>> +{
>>> +       int ret;
>>> +       bool safe = false;
>>> +
>>> +       /* instructions must be word aligned */
>>> +       if (cnt <= 0 || ((uintptr_t)addr & 0x3))
>>> +               return -EINVAL;
>>> +
>>> +       if (cnt == 1)
>>> +               safe = aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]);
>>> +
>>> +       if (safe) {
>>> +               ret = __aarch64_insn_patch_text(addr, insns, cnt);
>>> +       } else {
>>
>> Can you move the code below this into a separate API that just
>> applies the patch with stop_machine()? And then add a wrapper
>> function for jump-label-specific handling that checks
>> aarch64_insn_hotpatch_safe()?
>> Also, it would be good to move the patching code out of insn.c into
>> patch.c (refer to arch/arm/kernel/patch.c).
>>
>> Please refer to the attached file (my current implementation) to make
>> sense of exactly what kprobes would need (ignore the big-endian part
>> for now). I think splitting the code should be straightforward, and
>> we can avoid two different implementations. Please let me know if
>> this can be done; I will rebase my patches on top of your next
>> version.
>
> After reading the attached file, I feel the current implementation of
> aarch64_insn_patch_text() should satisfy kprobes' requirements too.
>
> The extra optimization of aarch64_insn_hotpatch_safe() should work for
> kprobes too. For example, if we install a kprobe on a function call
> instruction (BL), we need to replace "BL offset1" with "BRK" or "BL
> offset2". In such cases, we could patch the code directly without
> stop_machine(). Is there any special requirement that kprobes must use
> stop_machine() here?
Kprobes has its own checks before placing a breakpoint, so an
additional check is not required. Though it might work, that's not the
point; the extra check is unnecessary because the kprobes logic will
already have validated the instruction before calling the patching
API.

And stop_machine() is needed to apply code patches in sync with all
CPUs; kprobes on most other platforms uses stop_machine() to update
code sections.
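
Given that, a minimal sketch of the split requested earlier in the
thread (function names are illustrative): an unconditional
stop_machine() variant for kprobes, plus a wrapper that keeps the
hotpatch-safe fast path for jump labels:

        int aarch64_insn_patch_text_sync(void *addr, u32 *insns, int cnt)
        {
                struct aarch64_insn_patch patch = {
                        .text_addr = addr,
                        .new_insns = insns,
                        .insn_cnt = cnt,
                };

                if (cnt <= 0 || ((uintptr_t)addr & 0x3))
                        return -EINVAL;

                /* Serialize against every online CPU. */
                return stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
        }

        int aarch64_insn_patch_text(void *addr, u32 *insns, int cnt)
        {
                /* Fast path: a single hotpatch-safe replacement can be
                 * written directly without stopping the machine. */
                if (cnt == 1 && !((uintptr_t)addr & 0x3) &&
                    aarch64_insn_hotpatch_safe(*(u32 *)addr, insns[0]))
                        return __aarch64_insn_patch_text(addr, insns, cnt);

                return aarch64_insn_patch_text_sync(addr, insns, cnt);
        }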

Thanks,
Sandeepa
>
> Thanks!
> Gerry
>
>>
>> Thanks,
>> Sandeepa
>>> +               struct aarch64_insn_patch patch = {
>>> +                       .text_addr = addr,
>>> +                       .new_insns = insns,
>>> +                       .insn_cnt = cnt,
>>> +               };
>>> +
>>> +               /*
>>> +                * which ensures serialization among all online CPUs.
>>> +                * which ensure serialization among all online CPUs.
>>> +                */
>>> +               ret = stop_machine(aarch64_insn_patch_text_cb, &patch, NULL);
>>> +       }
>>> +
>>> +       return ret;
>>> +}
>>> --
>>> 1.8.1.2
>>>
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads: [~2013-09-28  1:22 UTC | newest]

Thread overview: 22+ messages
2013-09-25 10:44 [PATCH v1 0/7] Optimize jump label implementation on ARM64 Jiang Liu
2013-09-25 10:44 ` [PATCH v1 1/7] arm64: introduce basic aarch64 instruction decoding helpers Jiang Liu
2013-09-25 12:58   ` Christopher Covington
2013-09-25 15:50     ` Jiang Liu
2013-09-25 10:44 ` [PATCH v1 2/7] arm64: introduce interfaces to hotpatch kernel and module code Jiang Liu
2013-09-25 14:35   ` Steven Rostedt
2013-09-25 14:42     ` Sandeepa Prabhu
2013-09-25 15:16       ` Steven Rostedt
2013-09-26  2:47         ` Sandeepa Prabhu
2013-09-25 14:35   ` Sandeepa Prabhu
2013-09-25 15:54     ` Jiang Liu
2013-09-26  2:51       ` Sandeepa Prabhu
2013-09-27 15:41     ` Jiang Liu
2013-09-28  1:22       ` Sandeepa Prabhu
2013-09-25 10:44 ` [PATCH v1 3/7] arm64: move encode_insn_immediate() from module.c to insn.c Jiang Liu
2013-09-25 10:44 ` [PATCH v1 4/7] arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions Jiang Liu
2013-09-25 10:44 ` [PATCH v1 5/7] arm64, jump label: detect %c support for ARM64 Jiang Liu
2013-09-25 10:44 ` [PATCH v1 6/7] arm64, jump label: optimize jump label implementation Jiang Liu
2013-09-25 10:44 ` [PATCH v1 7/7] jump_label: use defined macros instead of hard-coding for better readability Jiang Liu
2013-09-25 14:53   ` Steven Rostedt
2013-09-25 15:47     ` Jiang Liu
2013-09-25 16:30       ` Steven Rostedt
