* [RFC 0/3] x86: Patchable constants
@ 2018-02-07 14:59 ` Kirill A. Shutemov
  0 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2018-02-07 14:59 UTC (permalink / raw)
  To: Linus Torvalds, x86, Tom Lendacky, Peter Zijlstra
  Cc: Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	linux-kernel, Kirill A. Shutemov

This patchset introduces the concept of patchable constants: constant
values that can be adjusted at boot time in response to system
configuration or user input (kernel command line).

Patchable constants can replace variables that never change at runtime
(only at boot time) but are used in very hot paths.

Patchable constants are implemented by replacing a constant with a call
to an inline function that returns the constant value via inline
assembly. The inline assembly also records, in a separate section, the
location of the instruction that loads the constant. This way we can
find the location later and adjust the value.
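
As an illustration, a user of this interface would look roughly like
this (MY_MASK and new_mask are made-up placeholders, not names from the
patchset; the patches below do the same thing for the real
__PHYSICAL_MASK):

	/* Compile-time default; it becomes the movabsq immediate */
	#define MY_MASK_DEFAULT	((1ULL << 52) - 1)

	/* Emits MY_MASK_READ() and declares MY_MASK_SET() */
	DECLARE_PATCHABLE_CONST_U64(MY_MASK);
	#define MY_MASK	MY_MASK_READ()

	/* patchable_const.c additionally needs: PATCHABLE_CONST_U64(MY_MASK); */

	/* Early at boot: patch every load site of MY_MASK to new_mask */
	if (MY_MASK_SET(new_mask))
		pr_err("MY_MASK not patched; it still reads the default\n");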

My idea was to make __PHYSICAL_MASK a patchable constant in the hope of
offsetting the overhead of making it dynamic. We need it to be dynamic
for the memory encryption implementations (both AMD SME and Intel MKTME).

But this didn't pay off. :(

This conversion makes GCC generate worse code. Converting
__PHYSICAL_MASK to a patchable constant adds about 5k of .text on
defconfig and makes the kernel slightly slower at runtime (~0.2% on my
box).

That's not the result I hoped for.

I still wanted to share it in case anybody finds a better use for it
or a way to improve it.

Other candidates may be PAGE_OFFSET/VMALLOC_START/VMEMMAP_START.

Kudos to PeterZ for hints on how it can be implemented.

Any feedback?

Kirill A. Shutemov (3):
  x86: Introduce patchable constants
  x86/mm/encrypt: Convert __PHYSICAL_MASK to patchable constant
  x86/mm/encrypt: Convert sme_me_mask to patchable constant

 arch/x86/Kconfig                       |   5 ++
 arch/x86/include/asm/mem_encrypt.h     |   5 +-
 arch/x86/include/asm/page_types.h      |  11 ++-
 arch/x86/include/asm/patchable_const.h |  28 ++++++++
 arch/x86/kernel/Makefile               |   3 +
 arch/x86/kernel/module.c               |  14 ++++
 arch/x86/kernel/patchable_const.c      | 119 +++++++++++++++++++++++++++++++++
 arch/x86/mm/mem_encrypt.c              |  20 +++---
 8 files changed, 192 insertions(+), 13 deletions(-)
 create mode 100644 arch/x86/include/asm/patchable_const.h
 create mode 100644 arch/x86/kernel/patchable_const.c

-- 
2.15.1

* [RFC 1/3] x86: Introduce patchable constants
  2018-02-07 14:59 ` Kirill A. Shutemov
@ 2018-02-07 14:59   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2018-02-07 14:59 UTC (permalink / raw)
  To: Linus Torvalds, x86, Tom Lendacky, Peter Zijlstra
  Cc: Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	linux-kernel, Kirill A. Shutemov

This patch introduces the concept of patchable constants: constant
values that can be adjusted at boot time in response to system
configuration or user input (kernel command line).

Patchable constants can replace variables that never change at runtime
(only at boot time) but are used in very hot paths.

Patchable constants are implemented by replacing a constant with a call
to an inline function that returns the constant value via inline
assembly. The inline assembly also records, in a separate section, the
location of the instruction that loads the constant. This way we can
find the location later and adjust the value.

The implementation only supports unsigned 64-bit values on 64-bit
systems. We can add support for other data types later.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig                       |   4 ++
 arch/x86/include/asm/patchable_const.h |  28 ++++++++
 arch/x86/kernel/Makefile               |   3 +
 arch/x86/kernel/module.c               |  14 ++++
 arch/x86/kernel/patchable_const.c      | 114 +++++++++++++++++++++++++++++++++
 5 files changed, 163 insertions(+)
 create mode 100644 arch/x86/include/asm/patchable_const.h
 create mode 100644 arch/x86/kernel/patchable_const.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b0771ceabb4b..78fc28e4f643 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -337,6 +337,10 @@ config PGTABLE_LEVELS
 	default 3 if X86_PAE
 	default 2
 
+config PATCHABLE_CONST
+	bool
+	depends on X86_64
+
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"
 
diff --git a/arch/x86/include/asm/patchable_const.h b/arch/x86/include/asm/patchable_const.h
new file mode 100644
index 000000000000..a432da46a46e
--- /dev/null
+++ b/arch/x86/include/asm/patchable_const.h
@@ -0,0 +1,28 @@
+#ifndef __X86_PATCHABLE_CONST
+#define __X86_PATCHABLE_CONST
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+#include <linux/stringify.h>
+#include <asm/asm.h>
+
+void module_patch_const_u64(const char *name,
+	       unsigned long **start, unsigned long **stop);
+
+#define DECLARE_PATCHABLE_CONST_U64(id_str)					\
+extern int id_str ## _SET(u64 value);						\
+static __always_inline __attribute_const__ u64 id_str ## _READ(void)		\
+{										\
+       u64 ret;									\
+       asm (									\
+	       "1: movabsq $(" __stringify(id_str ## _DEFAULT) "), %0\n"	\
+	       ".pushsection \"const_u64_" __stringify(id_str) "\",\"a\"\n"	\
+	       _ASM_PTR "1b\n"							\
+	       ".popsection\n" : "=r" (ret));					\
+       return ret;								\
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 29786c87e864..e6a2e400f236 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -20,6 +20,7 @@ CFLAGS_REMOVE_kvmclock.o = -pg
 CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 CFLAGS_REMOVE_head64.o = -pg
+CFLAGS_REMOVE_patchable_const.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o				:= n
@@ -27,6 +28,7 @@ KASAN_SANITIZE_dumpstack.o				:= n
 KASAN_SANITIZE_dumpstack_$(BITS).o			:= n
 KASAN_SANITIZE_stacktrace.o				:= n
 KASAN_SANITIZE_paravirt.o				:= n
+KASAN_SANITIZE_patchable_const.o			:= n
 
 OBJECT_FILES_NON_STANDARD_relocate_kernel_$(BITS).o	:= y
 OBJECT_FILES_NON_STANDARD_test_nx.o			:= y
@@ -110,6 +112,7 @@ obj-$(CONFIG_AMD_NB)		+= amd_nb.o
 obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
+obj-$(CONFIG_PATCHABLE_CONST)	+= patchable_const.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index da0c160e5589..eeb80b39fa89 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -36,6 +36,7 @@
 #include <asm/pgtable.h>
 #include <asm/setup.h>
 #include <asm/unwind.h>
+#include <asm/patchable_const.h>
 
 #if 0
 #define DEBUGP(fmt, ...)				\
@@ -243,6 +244,19 @@ int module_finalize(const Elf_Ehdr *hdr,
 			orc = s;
 		if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name))
 			orc_ip = s;
+
+		if (IS_ENABLED(CONFIG_PATCHABLE_CONST) &&
+				!strncmp(secstrings + s->sh_name,
+					"const_u64_", strlen("const_u64_"))) {
+			const char *name;
+			unsigned long **start, **stop;
+
+			name = secstrings + s->sh_name + strlen("const_u64_");
+			start = (void *)s->sh_addr;
+			stop = (void *)s->sh_addr + s->sh_size;
+
+			module_patch_const_u64(name, start, stop);
+		}
 	}
 
 	if (alt) {
diff --git a/arch/x86/kernel/patchable_const.c b/arch/x86/kernel/patchable_const.c
new file mode 100644
index 000000000000..d44d91cafee2
--- /dev/null
+++ b/arch/x86/kernel/patchable_const.c
@@ -0,0 +1,114 @@
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/memory.h>
+#include <linux/module.h>
+#include <asm/insn.h>
+#include <asm/text-patching.h>
+
+struct const_u64_table {
+	const char *name;
+	u64 orig;
+	u64 *new;
+};
+
+#define PATCHABLE_CONST_U64(id_str)						\
+extern unsigned long *__start_const_u64_ ## id_str[];				\
+extern unsigned long *__stop_const_u64_ ## id_str[];				\
+static __init_or_module u64 id_str ## _CURRENT = id_str ## _DEFAULT;		\
+__init __nostackprotector int id_str ## _SET(u64 new)				\
+{										\
+	int ret;								\
+	ret = patch_const_u64(__start_const_u64_ ## id_str,			\
+			__stop_const_u64_ ## id_str, id_str ## _CURRENT, new);	\
+	if (!ret)								\
+		id_str ## _CURRENT = new;					\
+	return ret;								\
+}
+
+static __init_or_module __nostackprotector
+int patch_const_u64(unsigned long **start, unsigned long **stop,
+		u64 orig, u64 new)
+{
+	char buf[MAX_INSN_SIZE];
+	struct insn insn;
+	unsigned long **iter;
+
+	pr_debug("Patch const: %#llx -> %#llx\n", orig, new);
+
+	mutex_lock(&text_mutex);
+	for (iter = start; iter < stop; iter++) {
+		memcpy(buf, *iter, MAX_INSN_SIZE);
+
+		kernel_insn_init(&insn, buf, MAX_INSN_SIZE);
+		insn_get_length(&insn);
+
+		/*
+		 * We expect to see a 10-byte MOV instruction here:
+		 *  - 1-byte REX prefix;
+		 *  - 1-byte opcode;
+		 *  - 8-byte immediate value.
+		 *
+		 * Back off if something else is found.
+		 */
+		if (insn.length != 10)
+			break;
+
+		insn_get_opcode(&insn);
+
+		/* MOV r64, imm64: REX.W + B8 + rd io */
+		if (!X86_REX_W(insn.rex_prefix.bytes[0]))
+			break;
+		if ((insn.opcode.bytes[0] & ~7) != 0xb8)
+			break;
+
+		/* Check that the original value is correct */
+		if (memcmp(buf + 2, &orig, sizeof(orig)))
+			break;
+
+		memcpy(buf + 2, &new, 8);
+		text_poke(*iter, buf, 10);
+	}
+
+	if (iter == stop) {
+		/* Everything is fine: DONE */
+		mutex_unlock(&text_mutex);
+		return 0;
+	}
+
+	/* Something went wrong. */
+	pr_err("Unexpected instruction found at %px: %10ph\n", *iter, buf);
+
+	/* Undo: revert everything patched before the failing entry */
+	while (iter-- > start) {
+		memcpy(buf, *iter, MAX_INSN_SIZE);
+		memcpy(buf + 2, &orig, 8);
+		text_poke(*iter, buf, 10);
+	}
+
+	mutex_unlock(&text_mutex);
+	return -EFAULT;
+}
+
+#ifdef CONFIG_MODULES
+/* Add an entry for a constant here if it is expected to be seen in modules */
+static const struct const_u64_table const_u64_table[] = {
+};
+
+__init_or_module __nostackprotector
+void module_patch_const_u64(const char *name,
+		unsigned long **start, unsigned long **stop)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(const_u64_table); i++) {
+		if (strcmp(name, const_u64_table[i].name))
+			continue;
+
+		patch_const_u64(start, stop, const_u64_table[i].orig,
+				*const_u64_table[i].new);
+		return;
+	}
+
+	pr_err("Unknown patchable constant: '%s'\n", name);
+}
+#endif
-- 
2.15.1

* [RFC 2/3] x86/mm/encrypt: Convert __PHYSICAL_MASK to patchable constant
  2018-02-07 14:59 ` Kirill A. Shutemov
@ 2018-02-07 14:59   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2018-02-07 14:59 UTC (permalink / raw)
  To: Linus Torvalds, x86, Tom Lendacky, Peter Zijlstra
  Cc: Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	linux-kernel, Kirill A. Shutemov

AMD SME claims one bit of the physical memory address to indicate that
the page is encrypted. This bit has to be masked out of __PHYSICAL_MASK.

As an alternative, we can replace __PHYSICAL_MASK with a patchable
constant and adjust it directly at boot time.
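
As a worked example (the C-bit position here is only an assumption for
illustration; the real bit position is enumerated by CPUID leaf
0x8000001F):

	u64 def   = (1ULL << 52) - 1;	/* __PHYSICAL_MASK_SHIFT == 52: 0x000fffffffffffff */
	u64 c_bit = 1ULL << 47;		/* assumed SME C-bit:           0x0000800000000000 */
	u64 mask  = def & ~c_bit;	/* patched in at boot:          0x000f7fffffffffff */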

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/page_types.h | 11 ++++++++++-
 arch/x86/kernel/patchable_const.c |  3 +++
 arch/x86/mm/mem_encrypt.c         |  5 +++++
 4 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 78fc28e4f643..2f791aaac1a8 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1471,6 +1471,7 @@ config ARCH_HAS_MEM_ENCRYPT
 config AMD_MEM_ENCRYPT
 	bool "AMD Secure Memory Encryption (SME) support"
 	depends on X86_64 && CPU_SUP_AMD
+	select PATCHABLE_CONST
 	---help---
 	  Say yes to enable support for the encryption of system memory.
 	  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 1e53560a84bb..8ff82468c9af 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -5,6 +5,7 @@
 #include <linux/const.h>
 #include <linux/types.h>
 #include <linux/mem_encrypt.h>
+#include <asm/patchable_const.h>
 
 /* PAGE_SHIFT determines the page size */
 #define PAGE_SHIFT		12
@@ -17,7 +18,8 @@
 #define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
 #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
 
-#define __PHYSICAL_MASK		((phys_addr_t)(__sme_clr((1ULL << __PHYSICAL_MASK_SHIFT) - 1)))
+#define __PHYSICAL_MASK_DEFAULT	((_AC(1, ULL) << __PHYSICAL_MASK_SHIFT) - 1)
+
 #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 /* Cast *PAGE_MASK to a signed type so that it is sign-extended if
@@ -55,6 +57,13 @@
 
 #ifndef __ASSEMBLY__
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+DECLARE_PATCHABLE_CONST_U64(__PHYSICAL_MASK);
+#define __PHYSICAL_MASK		__PHYSICAL_MASK_READ()
+#else
+#define __PHYSICAL_MASK		((phys_addr_t)__PHYSICAL_MASK_DEFAULT)
+#endif
+
 extern int devmem_is_allowed(unsigned long pagenr);
 
 extern unsigned long max_low_pfn_mapped;
diff --git a/arch/x86/kernel/patchable_const.c b/arch/x86/kernel/patchable_const.c
index d44d91cafee2..8d48c4c101ca 100644
--- a/arch/x86/kernel/patchable_const.c
+++ b/arch/x86/kernel/patchable_const.c
@@ -89,9 +89,12 @@ int patch_const_u64(unsigned long **start, unsigned long **stop,
 	return -EFAULT;
 }
 
+PATCHABLE_CONST_U64(__PHYSICAL_MASK);
+
 #ifdef CONFIG_MODULES
 /* Add an entry for a constant here if it is expected to be seen in modules */
 static const struct const_u64_table const_u64_table[] = {
+	{"__PHYSICAL_MASK", __PHYSICAL_MASK_DEFAULT, &__PHYSICAL_MASK_CURRENT},
 };
 
 __init_or_module __nostackprotector
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 1a53071e2e17..5135b59ce6a5 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -1033,4 +1033,9 @@ void __init __nostackprotector sme_enable(struct boot_params *bp)
 		sme_me_mask = 0;
 	else
 		sme_me_mask = active_by_default ? me_mask : 0;
+
+	if (__PHYSICAL_MASK_SET(__PHYSICAL_MASK & ~sme_me_mask)) {
+		/* Can we handle it? */
+		BUG();
+	}
 }
-- 
2.15.1

* [RFC 3/3] x86/mm/encrypt: Convert sme_me_mask to patchable constant
  2018-02-07 14:59 ` Kirill A. Shutemov
@ 2018-02-07 14:59   ` Kirill A. Shutemov
  -1 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2018-02-07 14:59 UTC (permalink / raw)
  To: Linus Torvalds, x86, Tom Lendacky, Peter Zijlstra
  Cc: Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	linux-kernel, Kirill A. Shutemov

We only change sme_me_mask very early in boot. It may be a candidate for
conversion to a patchable constant.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/mem_encrypt.h |  5 ++++-
 arch/x86/kernel/patchable_const.c  |  2 ++
 arch/x86/mm/mem_encrypt.c          | 15 ++++-----------
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 22c5f3e6f820..4131ddf262f3 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -18,10 +18,13 @@
 #include <linux/init.h>
 
 #include <asm/bootparam.h>
+#include <asm/patchable_const.h>
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern u64 sme_me_mask;
+#define sme_me_mask_DEFAULT 0
+DECLARE_PATCHABLE_CONST_U64(sme_me_mask);
+#define sme_me_mask sme_me_mask_READ()
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
diff --git a/arch/x86/kernel/patchable_const.c b/arch/x86/kernel/patchable_const.c
index 8d48c4c101ca..1bf2980d91b4 100644
--- a/arch/x86/kernel/patchable_const.c
+++ b/arch/x86/kernel/patchable_const.c
@@ -90,11 +90,13 @@ int patch_const_u64(unsigned long **start, unsigned long **stop,
 }
 
 PATCHABLE_CONST_U64(__PHYSICAL_MASK);
+PATCHABLE_CONST_U64(sme_me_mask);
 
 #ifdef CONFIG_MODULES
 /* Add an entry for a constant here if it is expected to be seen in modules */
 static const struct const_u64_table const_u64_table[] = {
 	{"__PHYSICAL_MASK", __PHYSICAL_MASK_DEFAULT, &__PHYSICAL_MASK_CURRENT},
+	{"sme_me_mask", sme_me_mask_DEFAULT, &sme_me_mask_CURRENT},
 };
 
 __init_or_module __nostackprotector
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 5135b59ce6a5..c93b5c5eeccf 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -36,13 +36,6 @@ static char sme_cmdline_arg[] __initdata = "mem_encrypt";
 static char sme_cmdline_on[]  __initdata = "on";
 static char sme_cmdline_off[] __initdata = "off";
 
-/*
- * Since SME related variables are set early in the boot process they must
- * reside in the .data section so as not to be zeroed out when the .bss
- * section is later cleared.
- */
-u64 sme_me_mask __section(.data) = 0;
-EXPORT_SYMBOL(sme_me_mask);
 DEFINE_STATIC_KEY_FALSE(sev_enable_key);
 EXPORT_SYMBOL_GPL(sev_enable_key);
 
@@ -997,7 +990,7 @@ void __init __nostackprotector sme_enable(struct boot_params *bp)
 			return;
 
 		/* SEV state cannot be controlled by a command line option */
-		sme_me_mask = me_mask;
+		sme_me_mask_SET(me_mask);
 		sev_enabled = true;
 		return;
 	}
@@ -1028,11 +1021,11 @@ void __init __nostackprotector sme_enable(struct boot_params *bp)
 	cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer));
 
 	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
-		sme_me_mask = me_mask;
+		sme_me_mask_SET(me_mask);
 	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
-		sme_me_mask = 0;
+		sme_me_mask_SET(0);
 	else
-		sme_me_mask = active_by_default ? me_mask : 0;
+		sme_me_mask_SET(active_by_default ? me_mask : 0);
 
 	if (__PHYSICAL_MASK_SET(__PHYSICAL_MASK & ~sme_me_mask)) {
 		/* Can we handle it? */
-- 
2.15.1

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 14:59 ` Kirill A. Shutemov
@ 2018-02-07 16:25   ` Peter Zijlstra
  -1 siblings, 0 replies; 20+ messages in thread
From: Peter Zijlstra @ 2018-02-07 16:25 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Linus Torvalds, x86, Tom Lendacky, Dave Hansen, Andy Lutomirski,
	Borislav Petkov, linux-mm, linux-kernel

On Wed, Feb 07, 2018 at 05:59:10PM +0300, Kirill A. Shutemov wrote:
> This conversion makes GCC generate worse code. Converting
> __PHYSICAL_MASK to a patchable constant adds about 5k of .text on
> defconfig and makes the kernel slightly slower at runtime (~0.2% on my
> box).

Do you have explicit examples of the worse code? That might give a clue
about how to improve things.

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 14:59 ` Kirill A. Shutemov
@ 2018-02-07 17:01   ` Linus Torvalds
  -1 siblings, 0 replies; 20+ messages in thread
From: Linus Torvalds @ 2018-02-07 17:01 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: the arch/x86 maintainers, Tom Lendacky, Peter Zijlstra,
	Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	Linux Kernel Mailing List, Peter Anvin

On Wed, Feb 7, 2018 at 6:59 AM, Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
> This patchset introduces the concept of patchable constants: constant
> values that can be adjusted at boot time in response to system
> configuration or user input (kernel command line).
>
> Patchable constants can replace variables that never change at runtime
> (only at boot time) but are used in very hot paths.

So I actually wanted something very close to this, but I think your
approach is much too simplistic.

You force all constants into a register, which means that the
resulting code is always going to be very far from optimal.

You also force a big "movabsq" instruction, which really is huge, and
almost never needed. Together with the "use a register", it just makes
for big code.

What I wanted was something that can take things like a shift by a
variable that is set once, and turn it into a shift by a boot-time
constant. Which means that you literally end up patching the 8-bit
immediate in the shift instruction itself.

In particular, I was looking at the dcache hashing code, and (to quote
an old email of mine), what I wanted was to simplify the run-time
constant part of this:

│ mov $0x20,%ecx
│ sub 0xaf8bd5(%rip),%ecx # ffffffff81d34600 <d_hash_shift>
│ mov 0x8(%rsi),%r9
│ add %r14d,%eax
│ imul $0x9e370001,%eax,%eax
│ shr %cl,%eax

and it was the expression "32-d_hash_shift" that is really a constant,
and that sequence of

│ mov $0x20,%ecx
│ sub 0xaf8bd5(%rip),%ecx # ffffffff81d34600 <d_hash_shift>
│ shr %cl,%eax

should be just a single

│ shr $CONSTANT,%eax

at runtime.

Look - much smaller code, and register %rcx isn't used at all. And no
D$ miss on loading that constant (that is a constant depending on
boot-time setup only).
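
For reference, the C source behind this is roughly (a from-memory
sketch of fs/dcache.c as it was then, not code from the patchset under
discussion):

	static unsigned int d_hash_shift __read_mostly;
	static struct hlist_bl_head *dentry_hashtable __read_mostly;

	static inline struct hlist_bl_head *d_hash(unsigned int hash)
	{
		/* "32 - d_hash_shift" never changes after boot */
		return dentry_hashtable + (hash >> (32 - d_hash_shift));
	}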

It's rather more complex, but it actually gives a much bigger win. The
code itself will be much better, and smaller.

The *infrastructure* for the code gets pretty hairy, though.

The good news is that the patch already existed to at least _some_
degree. Peter Anvin did it about 18 months ago.

It was not really pursued all the way because it *is* a lot of extra
complexity, and I think there was some other hold-up, but he did have
skeleton code for the actual replacement.

There was a thread on the x86 arch list with the subject line

    Disgusting pseudo-self-modifying code idea: "variable constants"

but I'm unable to actually find the patch. I know there was at least a
very early prototype.

Adding hpa to the cc in the hope that he has some prototype code still
laying around..

                Linus

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 16:25   ` Peter Zijlstra
@ 2018-02-07 17:12     ` Kirill A. Shutemov
  -1 siblings, 0 replies; 20+ messages in thread
From: Kirill A. Shutemov @ 2018-02-07 17:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kirill A. Shutemov, Linus Torvalds, x86, Tom Lendacky,
	Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	linux-kernel

On Wed, Feb 07, 2018 at 05:25:07PM +0100, Peter Zijlstra wrote:
> On Wed, Feb 07, 2018 at 05:59:10PM +0300, Kirill A. Shutemov wrote:
> > This conversion makes GCC generate worse code. Converting
> > __PHYSICAL_MASK to a patchable constant adds about 5k of .text on
> > defconfig and makes the kernel slightly slower at runtime (~0.2% on
> > my box).
> 
> Do you have explicit examples of the worse code? That might give a
> clue about how to improve things.

Clarification: the 5k increase was for defconfig, where I replaced the
pure constant define __PHYSICAL_MASK with a patchable constant.

With CONFIG_AMD_MEM_ENCRYPT=y the increase is smaller: ~1.8k.

The disassembly below is for CONFIG_AMD_MEM_ENCRYPT=y.

Before:

Dump of assembler code for function migration_entry_wait:
   0xffffffff8118ab40 <+0>:     mov    (%rsi),%rcx
   0xffffffff8118ab43 <+3>:     shr    $0x9,%rdx
   0xffffffff8118ab47 <+7>:     mov    0x10aecba(%rip),%r8        # 0xffffffff82239808 <sme_me_mask>
   0xffffffff8118ab4e <+14>:    mov    %rdx,%rax
   0xffffffff8118ab51 <+17>:    mov    0x10aec90(%rip),%r9        # 0xffffffff822397e8 <vmemmap_base>
   0xffffffff8118ab58 <+24>:    movabs $0x3fffffe00000,%rsi
   0xffffffff8118ab62 <+34>:    and    $0xff8,%eax
   0xffffffff8118ab67 <+39>:    test   $0x80,%cl
   0xffffffff8118ab6a <+42>:    not    %r8
   0xffffffff8118ab6d <+45>:    jne    0xffffffff8118ab79 <migration_entry_wait+57>
   0xffffffff8118ab6f <+47>:    movabs $0x3ffffffff000,%rsi
   0xffffffff8118ab79 <+57>:    and    %r8,%rsi
   0xffffffff8118ab7c <+60>:    add    0x10aec75(%rip),%rax        # 0xffffffff822397f8 <page_offset_base>
   0xffffffff8118ab83 <+67>:    and    %rsi,%rcx
   0xffffffff8118ab86 <+70>:    mov    %rcx,%rdx
   0xffffffff8118ab89 <+73>:    shr    $0x6,%rdx
   0xffffffff8118ab8d <+77>:    mov    %rax,%rsi
   0xffffffff8118ab90 <+80>:    lea    0x30(%r9,%rdx,1),%rdx
   0xffffffff8118ab95 <+85>:    add    %rcx,%rsi
   0xffffffff8118ab98 <+88>:    jmpq   0xffffffff8118aa20 <__migration_entry_wait>

After:

Dump of assembler code for function migration_entry_wait:
   0xffffffff8118b3e0 <+0>:     mov    (%rsi),%rsi
   0xffffffff8118b3e3 <+3>:     mov    %rdx,%rax
   0xffffffff8118b3e6 <+6>:     mov    0x10ae41b(%rip),%r8        # 0xffffffff82239808 <vmemmap_base>
   0xffffffff8118b3ed <+13>:    shr    $0x9,%rax
   0xffffffff8118b3f1 <+17>:    and    $0xff8,%eax
   0xffffffff8118b3f6 <+22>:    test   $0x80,%sil
   0xffffffff8118b3fa <+26>:    jne    0xffffffff8118b432 <migration_entry_wait+82>
   0xffffffff8118b3fc <+28>:    mov    %rsi,%rdx
   0xffffffff8118b3ff <+31>:    movabs $0x3fffffffffff,%rcx
   0xffffffff8118b409 <+41>:    and    %rcx,%rdx
   0xffffffff8118b40c <+44>:    and    $0xfffffffffffff000,%rcx
   0xffffffff8118b413 <+51>:    shr    $0xc,%rdx
   0xffffffff8118b417 <+55>:    shl    $0x6,%rdx
   0xffffffff8118b41b <+59>:    lea    0x30(%r8,%rdx,1),%rdx
   0xffffffff8118b420 <+64>:    add    0x10ae3f1(%rip),%rax        # 0xffffffff82239818 <page_offset_base>
   0xffffffff8118b427 <+71>:    and    %rcx,%rsi
   0xffffffff8118b42a <+74>:    add    %rax,%rsi
   0xffffffff8118b42d <+77>:    jmpq   0xffffffff8118b2c0 <__migration_entry_wait>
   0xffffffff8118b432 <+82>:    mov    %rsi,%rdx
   0xffffffff8118b435 <+85>:    and    $0xffffffffffe00000,%rdx
   0xffffffff8118b43c <+92>:    movabs $0x3fffffffffff,%rcx
   0xffffffff8118b446 <+102>:   and    %rcx,%rdx
   0xffffffff8118b449 <+105>:   and    $0xffffffffffe00000,%rcx
   0xffffffff8118b450 <+112>:   shr    $0x6,%rdx
   0xffffffff8118b454 <+116>:   lea    0x30(%r8,%rdx,1),%rdx
   0xffffffff8118b459 <+121>:   jmp    0xffffffff8118b420 <migration_entry_wait+64>

-- 
 Kirill A. Shutemov

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 17:01   ` Linus Torvalds
@ 2018-02-07 17:13     ` hpa
  -1 siblings, 0 replies; 20+ messages in thread
From: hpa @ 2018-02-07 17:13 UTC (permalink / raw)
  To: Linus Torvalds, Kirill A. Shutemov
  Cc: the arch/x86 maintainers, Tom Lendacky, Peter Zijlstra,
	Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	Linux Kernel Mailing List

On February 7, 2018 9:01:42 AM PST, Linus Torvalds <torvalds@linux-foundation.org> wrote:
>On Wed, Feb 7, 2018 at 6:59 AM, Kirill A. Shutemov
><kirill.shutemov@linux.intel.com> wrote:
>> This patchset introduces the concept of patchable constants: constant
>> values that can be adjusted at boot time in response to system
>> configuration or user input (kernel command line).
>>
>> Patchable constants can replace variables that never change at runtime
>> (only at boot time) but are used in very hot paths.
>
>So I actually wanted something very close to this, but I think your
>approach is much too simplistic.
>
>You force all constants into a register, which means that the
>resulting code is always going to be very far from optimal.
>
>You also force a big "movabsq" instruction, which really is huge, and
>almost never needed. Together with the "use a register", it just makes
>for big code.
>
>What I wanted was something that can take things like a shift by a
>variable that is set once, and turn it into a shift by a boot-time
>constant. Which means that you literally end up patching the 8-bit
>immediate in the shift instruction itself.
>
>In particular, I was looking at the dcache hashing code, and (to quote
>an old email of mine), what I wanted was to simplify the run-time
>constant part of this:
>
>│ mov $0x20,%ecx
>│ sub 0xaf8bd5(%rip),%ecx # ffffffff81d34600 <d_hash_shift>
>│ mov 0x8(%rsi),%r9
>│ add %r14d,%eax
>│ imul $0x9e370001,%eax,%eax
>│ shr %cl,%eax
>
>and it was the expression "32-d_hash_shift" that is really a constant,
>and that sequence of
>
>│ mov $0x20,%ecx
>│ sub 0xaf8bd5(%rip),%ecx # ffffffff81d34600 <d_hash_shift>
>│ shr %cl,%eax
>
>should be just a single
>
>│ shr $CONSTANT,%eax
>
>at runtime.
>
>Look - much smaller code, and register %rcx isn't used at all. And no
>D$ miss on loading that constant (that is a constant depending on
>boot-time setup only).
>
>It's rather more complex, but it actually gives a much bigger win. The
>code itself will be much better, and smaller.
>
>The *infrastructure* for the code gets pretty hairy, though.
>
>The good news is that the patch already existed to at least _some_
>degree. Peter Anvin did it about 18 months ago.
>
>It was not really pursued all the way because it *is* a lot of extra
>complexity, and I think there was some other hold-up, but he did have
>skeleton code for the actual replacement.
>
>There was a thread on the x86 arch list with the subject line
>
>    Disgusting pseudo-self-modifying code idea: "variable constants"
>
>but I'm unable to actually find the patch. I know there was at least a
>very early prototype.
>
>Adding hpa to the cc in the hope that he has some prototype code still
>laying around..
>
>                Linus

I am currently working on a much more comprehensive set of patches for exactly this. I am already much further ahead and support most operations.
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 17:01   ` Linus Torvalds
@ 2018-02-07 20:20     ` H. Peter Anvin
  -1 siblings, 0 replies; 20+ messages in thread
From: H. Peter Anvin @ 2018-02-07 20:20 UTC (permalink / raw)
  To: Linus Torvalds, Kirill A. Shutemov
  Cc: the arch/x86 maintainers, Tom Lendacky, Peter Zijlstra,
	Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	Linux Kernel Mailing List

On 02/07/18 09:01, Linus Torvalds wrote:
> 
> Look - much smaller code, and register %rcx isn't used at all. And no
> D$ miss on loading that constant (that is a constant depending on
> boot-time setup only).
> 
> It's rather more complex, but it actually gives a much bigger win. The
> code itself will be much better, and smaller.
> 
> The *infrastructure* for the code gets pretty hairy, though.
> 
> The good news is that the patch already existed to at least _some_
> degree. Peter Anvin did it about 18 months ago.
> 
> It was not really pursued all the way because it *is* a lot of extra
> complexity, and I think there was some other hold-up, but he did have
> skeleton code for the actual replacement.
> 
> There was a thread on the x86 arch list with the subject line
> 
>     Disgusting pseudo-self-modifying code idea: "variable constants"
> 
> but I'm unable to actually find the patch. I know there was at least a
> very early prototype.
> 
> Adding hpa to the cc in the hope that he has some prototype code still
> laying around..
> 

The patchset I have is about 85% complete.  It mostly needs cleanup,
testing, and breaking into reasonable chunks (it got put on the
backburner for somewhat obvious reasons, but I don't think it'll take
very long at all to productize it.)

The main reason I haven't submitted it yet is that I got a bit overly
ambitious and wanted to implement a whole bunch of more complex
subcases, such as 64-bit shifts on a 32-bit kernel.  The win in that
case is actually quite huge, but it requires data-dependent code
patching, not just immediate patching, which means augmenting the
alternatives framework.

	-hpa
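
To make the 64-bit-shift-on-32-bit case concrete, a short illustration
(not hpa's code): the instruction sequence depends on whether the
boot-time shift count is below 32, so the patcher has to choose between
two differently shaped sequences rather than rewrite a single immediate.

	/*
	 * Illustration only: a 64-bit logical right shift by a constant
	 * n (0 <= n < 64) on 32-bit x86 compiles to one of two shapes:
	 *   n >= 32:  mov %hi,%lo ; shr $(n-32),%lo ; xor %hi,%hi
	 *   n <  32:  shrd $n,%hi,%lo ; shr $n,%hi
	 * Patching an immediate in place cannot switch between them.
	 */
	static inline u64 shr64_by_boot_const(u64 x, unsigned int n)
	{
		if (n >= 32)
			return (u32)(x >> 32) >> (n - 32);
		return x >> n;
	}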

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC 0/3] x86: Patchable constants
  2018-02-07 20:20     ` H. Peter Anvin
@ 2018-02-07 20:43       ` H. Peter Anvin
  -1 siblings, 0 replies; 20+ messages in thread
From: H. Peter Anvin @ 2018-02-07 20:43 UTC (permalink / raw)
  To: Linus Torvalds, Kirill A. Shutemov
  Cc: the arch/x86 maintainers, Tom Lendacky, Peter Zijlstra,
	Dave Hansen, Andy Lutomirski, Borislav Petkov, linux-mm,
	Linux Kernel Mailing List

FYI: the interface I'm implementing looks like this:

	[output =] const_op(data, variable);

... where variable can be any variable with a static address, although
in any sane scenario it should be __ro_after_init.  During
initialization, it kicks out to an out-of-line (.alternatives_aux)
handler which accesses the variable in the normal fashion, but at
alternatives-patching time it inlines the relevant opcodes.

Some of the assembly infrastructure for this is indeed hairy, especially
when the out-of-line handler needs temp registers.

	-hpa
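
A sketch of what a call site using this kind of interface might look
like, modeled on the "[output =] const_op(data, variable)" form above;
const_shr() and fold_hash() are assumed names for this sketch, not part
of hpa's actual patches:

	static unsigned int d_hash_shift __ro_after_init;

	static inline unsigned int fold_hash(unsigned int hash)
	{
		/*
		 * Before alternatives patching: branch to the out-of-line
		 * handler, which loads d_hash_shift from memory.  After
		 * patching: a single "shr $imm,%reg" with the boot-time
		 * value inlined as the immediate.
		 */
		return const_shr(hash, d_hash_shift);
	}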

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2018-02-07 20:52 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-07 14:59 [RFC 0/3] x86: Patchable constants Kirill A. Shutemov
2018-02-07 14:59 ` [RFC 1/3] x86: Introduce patchable constants Kirill A. Shutemov
2018-02-07 14:59 ` [RFC 2/3] x86/mm/encrypt: Convert __PHYSICAL_MASK to patchable constant Kirill A. Shutemov
2018-02-07 14:59 ` [RFC 3/3] x86/mm/encrypt: Convert sme_me_mask " Kirill A. Shutemov
2018-02-07 16:25 ` [RFC 0/3] x86: Patchable constants Peter Zijlstra
2018-02-07 17:12   ` Kirill A. Shutemov
2018-02-07 17:01 ` Linus Torvalds
2018-02-07 17:13   ` hpa
2018-02-07 20:20   ` H. Peter Anvin
2018-02-07 20:43     ` H. Peter Anvin

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.