* [RFC PATCH 0/8] x86: Support Intel Key Locker
@ 2020-12-16 17:41 Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 1/8] x86/cpufeature: Enumerate Key Locker feature Chang S. Bae
                   ` (9 more replies)
  0 siblings, 10 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae

Key Locker [1][2] is a security feature available on recent Intel CPUs to
protect data encryption keys for the Advanced Encryption Standard (AES)
algorithm. The protection limits the amount of time an AES key is exposed
in memory by sealing the key into a handle and referencing the handle with
new AES instructions.
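
For illustration only (not part of the series), a minimal user-space sketch
of that seal-and-reference flow is shown below. It assumes a CPU with the
feature enabled and an assembler that already understands the Key Locker
mnemonics (the patches use macros and .byte sequences instead); the helper
names are made up here:

#include <stdint.h>

/* Wrap a raw 128-bit AES key into a 384-bit handle. The raw key may be
 * erased afterwards; only the handle is needed for later encryption. */
static void seal_aes_key(const uint8_t key[16], uint8_t handle[48])
{
	asm volatile ("movdqu (%1), %%xmm0\n\t"
		      "xorl %%eax, %%eax\n\t"		/* no handle restrictions */
		      "encodekey128 %%eax, %%eax\n\t"	/* handle -> xmm0..xmm2 */
		      "movdqu %%xmm0, (%0)\n\t"
		      "movdqu %%xmm1, 0x10(%0)\n\t"
		      "movdqu %%xmm2, 0x20(%0)"
		      : : "r" (handle), "r" (key)
		      : "eax", "xmm0", "xmm1", "xmm2",
			"xmm4", "xmm5", "xmm6", "memory");
}

/* Encrypt one 16-byte block in place, referencing the handle rather than
 * the raw AES key. */
static void encrypt_block(const uint8_t handle[48], uint8_t block[16])
{
	asm volatile ("movdqu (%1), %%xmm0\n\t"
		      "aesenc128kl (%0), %%xmm0\n\t"
		      "movdqu %%xmm0, (%1)"
		      : : "r" (handle), "r" (block)
		      : "xmm0", "cc", "memory");
}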

The new AES instruction set is a successor of Intel's AES-NI (AES New
Instructions). Users of crypto libraries may switch to the Key Locker
version. This series includes a new AES implementation for the Crypto API,
which was validated through the crypto unit tests. Performance in those
test cases was measured and found comparable to the AES-NI version.

Key Locker introduces a (CPU-)internal key to encode AES keys. The kernel
needs to load it and ensure it stays unchanged as long as the CPUs are
operational.

The series has three parts:
* PATCH1-6: Implement the internal key management
* PATCH7:   Add AES implementation in Crypto library
* PATCH8:   Provide the hardware randomization option for the internal key

This RFC series has been reviewed by Dan Williams, with an open question of
whether to use the hardware backup/restore mechanism, or to synchronously
reinitialize the internal key over suspend/resume to avoid the implications
of key-restore failures.

[1] Intel Architecture Instruction Set Extensions Programming Reference:
    https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-$
[2] Intel Key Locker Specification:
    https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-speci$

Chang S. Bae (8):
  x86/cpufeature: Enumerate Key Locker feature
  x86/cpu: Load Key Locker internal key at boot-time
  x86/msr-index: Add MSRs for Key Locker internal key
  x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep
    states
  x86/cpu: Add a config option and a chicken bit for Key Locker
  selftests/x86: Test Key Locker internal key maintenance
  crypto: x86/aes-kl - Support AES algorithm using Key Locker
    instructions
  x86/cpu: Support the hardware randomization option for Key Locker
    internal key

 .../admin-guide/kernel-parameters.txt         |   2 +
 arch/x86/Kconfig                              |  14 +
 arch/x86/crypto/Makefile                      |   3 +
 arch/x86/crypto/aeskl-intel_asm.S             | 881 ++++++++++++++++++
 arch/x86/crypto/aeskl-intel_glue.c            | 697 ++++++++++++++
 arch/x86/include/asm/cpufeatures.h            |   1 +
 arch/x86/include/asm/disabled-features.h      |   8 +-
 arch/x86/include/asm/inst.h                   | 201 ++++
 arch/x86/include/asm/keylocker.h              |  41 +
 arch/x86/include/asm/msr-index.h              |   6 +
 arch/x86/include/uapi/asm/processor-flags.h   |   2 +
 arch/x86/kernel/Makefile                      |   1 +
 arch/x86/kernel/cpu/common.c                  |  66 +-
 arch/x86/kernel/cpu/cpuid-deps.c              |   1 +
 arch/x86/kernel/keylocker.c                   | 147 +++
 arch/x86/kernel/smpboot.c                     |   2 +
 arch/x86/lib/x86-opcode-map.txt               |   2 +-
 arch/x86/power/cpu.c                          |  34 +
 crypto/Kconfig                                |  28 +
 drivers/char/random.c                         |   6 +
 include/linux/random.h                        |   2 +
 tools/arch/x86/lib/x86-opcode-map.txt         |   2 +-
 tools/testing/selftests/x86/Makefile          |   2 +-
 tools/testing/selftests/x86/keylocker.c       | 177 ++++
 24 files changed, 2321 insertions(+), 5 deletions(-)
 create mode 100644 arch/x86/crypto/aeskl-intel_asm.S
 create mode 100644 arch/x86/crypto/aeskl-intel_glue.c
 create mode 100644 arch/x86/include/asm/keylocker.h
 create mode 100644 arch/x86/kernel/keylocker.c
 create mode 100644 tools/testing/selftests/x86/keylocker.c

-- 
2.17.1



* [RFC PATCH 1/8] x86/cpufeature: Enumerate Key Locker feature
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 2/8] x86/cpu: Load Key Locker internal key at boot-time Chang S. Bae
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae

Intel's Key Locker is a new security feature providing a mechanism to
protect a data encryption key when processing the Advanced Encryption
Standard algorithm.

Add it to the kernel/user ABI by enumerating the hardware capability, e.g.
'keylocker' in /proc/cpuinfo.

Also, define the feature-specific CPUID leaf and bits for the feature
enablement.

Key Locker is on the disabled list, which is useful for compile-time
configuration later.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/include/asm/cpufeatures.h          |  1 +
 arch/x86/include/asm/disabled-features.h    |  8 +++++++-
 arch/x86/include/asm/keylocker.h            | 18 ++++++++++++++++++
 arch/x86/include/uapi/asm/processor-flags.h |  2 ++
 arch/x86/kernel/cpu/cpuid-deps.c            |  1 +
 5 files changed, 29 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/asm/keylocker.h

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dad350d42ecf..8f2f050023b7 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -352,6 +352,7 @@
 #define X86_FEATURE_AVX512_VPOPCNTDQ	(16*32+14) /* POPCNT for vectors of DW/QW */
 #define X86_FEATURE_LA57		(16*32+16) /* 5-level page tables */
 #define X86_FEATURE_RDPID		(16*32+22) /* RDPID instruction */
+#define X86_FEATURE_KEYLOCKER		(16*32+23) /* Key Locker */
 #define X86_FEATURE_CLDEMOTE		(16*32+25) /* CLDEMOTE instruction */
 #define X86_FEATURE_MOVDIRI		(16*32+27) /* MOVDIRI instruction */
 #define X86_FEATURE_MOVDIR64B		(16*32+28) /* MOVDIR64B instruction */
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 5861d34f9771..0ac9414da242 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -44,6 +44,12 @@
 # define DISABLE_OSPKE		(1<<(X86_FEATURE_OSPKE & 31))
 #endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
 
+#ifdef CONFIG_X86_KEYLOCKER
+# define DISABLE_KEYLOCKER	0
+#else
+# define DISABLE_KEYLOCKER	(1<<(X86_FEATURE_KEYLOCKER & 31))
+#endif /* CONFIG_X86_KEYLOCKER */
+
 #ifdef CONFIG_X86_5LEVEL
 # define DISABLE_LA57	0
 #else
@@ -82,7 +88,7 @@
 #define DISABLED_MASK14	0
 #define DISABLED_MASK15	0
 #define DISABLED_MASK16	(DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
-			 DISABLE_ENQCMD)
+			 DISABLE_ENQCMD|DISABLE_KEYLOCKER)
 #define DISABLED_MASK17	0
 #define DISABLED_MASK18	0
 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
new file mode 100644
index 000000000000..2fe13c21c63f
--- /dev/null
+++ b/arch/x86/include/asm/keylocker.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_KEYLOCKER_H
+#define _ASM_KEYLOCKER_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/bits.h>
+
+#define KEYLOCKER_CPUID                0x019
+#define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0)
+#define KEYLOCKER_CPUID_EBX_AESKLE     BIT(0)
+#define KEYLOCKER_CPUID_EBX_WIDE       BIT(2)
+#define KEYLOCKER_CPUID_EBX_BACKUP     BIT(4)
+#define KEYLOCKER_CPUID_ECX_RAND       BIT(1)
+
+#endif /*__ASSEMBLY__ */
+#endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index bcba3c643e63..b958a95a0908 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -124,6 +124,8 @@
 #define X86_CR4_PCIDE		_BITUL(X86_CR4_PCIDE_BIT)
 #define X86_CR4_OSXSAVE_BIT	18 /* enable xsave and xrestore */
 #define X86_CR4_OSXSAVE		_BITUL(X86_CR4_OSXSAVE_BIT)
+#define X86_CR4_KEYLOCKER_BIT	19 /* enable Key Locker */
+#define X86_CR4_KEYLOCKER	_BITUL(X86_CR4_KEYLOCKER_BIT)
 #define X86_CR4_SMEP_BIT	20 /* enable SMEP support */
 #define X86_CR4_SMEP		_BITUL(X86_CR4_SMEP_BIT)
 #define X86_CR4_SMAP_BIT	21 /* enable SMAP support */
diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
index d502241995a3..b8edcb91fe4f 100644
--- a/arch/x86/kernel/cpu/cpuid-deps.c
+++ b/arch/x86/kernel/cpu/cpuid-deps.c
@@ -71,6 +71,7 @@ static const struct cpuid_dep cpuid_deps[] = {
 	{ X86_FEATURE_AVX512_BF16,		X86_FEATURE_AVX512VL  },
 	{ X86_FEATURE_ENQCMD,			X86_FEATURE_XSAVES    },
 	{ X86_FEATURE_PER_THREAD_MBA,		X86_FEATURE_MBA       },
+	{ X86_FEATURE_KEYLOCKER,		X86_FEATURE_XMM2      },
 	{}
 };
 
-- 
2.17.1



* [RFC PATCH 2/8] x86/cpu: Load Key Locker internal key at boot-time
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 1/8] x86/cpufeature: Enumerate Key Locker feature Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 3/8] x86/msr-index: Add MSRs for Key Locker internal key Chang S. Bae
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae

The Internal (Wrapping) Key is a new entity of the Intel Key Locker
feature. This internal key is loaded into a software-inaccessible CPU state
and used to encode a data encryption key.

The kernel generates random data and loads it as the internal key on each
CPU. The data needs to be invalidated as soon as the load is done.

The BIOS may disable the feature. Check the dynamic CPUID bit
(KEYLOCKER_CPUID_EBX_AESKLE) first.

Add the byte code for LOADIWKEY -- the instruction that loads the internal
key -- to the 'x86-opcode-map.txt' file so that objtool does not
misinterpret it.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/include/asm/keylocker.h      | 11 +++++
 arch/x86/kernel/Makefile              |  1 +
 arch/x86/kernel/cpu/common.c          | 38 +++++++++++++-
 arch/x86/kernel/keylocker.c           | 71 +++++++++++++++++++++++++++
 arch/x86/kernel/smpboot.c             |  2 +
 arch/x86/lib/x86-opcode-map.txt       |  2 +-
 tools/arch/x86/lib/x86-opcode-map.txt |  2 +-
 7 files changed, 124 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/kernel/keylocker.c

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 2fe13c21c63f..daf0734a4095 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -14,5 +14,16 @@
 #define KEYLOCKER_CPUID_EBX_BACKUP     BIT(4)
 #define KEYLOCKER_CPUID_ECX_RAND       BIT(1)
 
+bool check_keylocker_readiness(void);
+
+bool load_keylocker(void);
+
+void make_keylocker_data(void);
+#ifdef CONFIG_X86_KEYLOCKER
+void invalidate_keylocker_data(void);
+#else
+#define invalidate_keylocker_data() do { } while (0)
+#endif
+
 #endif /*__ASSEMBLY__ */
 #endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 68608bd892c0..085dbf49b3b9 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -145,6 +145,7 @@ obj-$(CONFIG_PERF_EVENTS)		+= perf_regs.o
 obj-$(CONFIG_TRACING)			+= tracepoint.o
 obj-$(CONFIG_SCHED_MC_PRIO)		+= itmt.o
 obj-$(CONFIG_X86_UMIP)			+= umip.o
+obj-$(CONFIG_X86_KEYLOCKER)		+= keylocker.o
 
 obj-$(CONFIG_UNWINDER_ORC)		+= unwind_orc.o
 obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..d675075848bb 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -57,6 +57,8 @@
 #include <asm/microcode_intel.h>
 #include <asm/intel-family.h>
 #include <asm/cpu_device_id.h>
+#include <asm/keylocker.h>
+
 #include <asm/uv/uv.h>
 
 #include "cpu.h"
@@ -459,6 +461,39 @@ static __init int x86_nofsgsbase_setup(char *arg)
 }
 __setup("nofsgsbase", x86_nofsgsbase_setup);
 
+static __always_inline void setup_keylocker(struct cpuinfo_x86 *c)
+{
+	bool keyloaded;
+
+	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+		goto out;
+
+	cr4_set_bits(X86_CR4_KEYLOCKER);
+
+	if (c == &boot_cpu_data) {
+		if (!check_keylocker_readiness())
+			goto disable_keylocker;
+
+		make_keylocker_data();
+	}
+
+	keyloaded = load_keylocker();
+	if (!keyloaded) {
+		pr_err_once("x86/keylocker: Failed to load internal key\n");
+		goto disable_keylocker;
+	}
+
+	pr_info_once("x86/keylocker: Activated\n");
+	return;
+
+disable_keylocker:
+	clear_cpu_cap(c, X86_FEATURE_KEYLOCKER);
+	pr_info_once("x86/keylocker: Disabled\n");
+out:
+	/* Make sure the feature is disabled for kexec-reboot. */
+	cr4_clear_bits(X86_CR4_KEYLOCKER);
+}
+
 /*
  * Protection Keys are not available in 32-bit mode.
  */
@@ -1554,10 +1589,11 @@ static void identify_cpu(struct cpuinfo_x86 *c)
 	/* Disable the PN if appropriate */
 	squash_the_stupid_serial_number(c);
 
-	/* Set up SMEP/SMAP/UMIP */
+	/* Setup various Intel-specific CPU security features */
 	setup_smep(c);
 	setup_smap(c);
 	setup_umip(c);
+	setup_keylocker(c);
 
 	/* Enable FSGSBASE instructions if available. */
 	if (cpu_has(c, X86_FEATURE_FSGSBASE)) {
diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
new file mode 100644
index 000000000000..e455d806b80c
--- /dev/null
+++ b/arch/x86/kernel/keylocker.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+/*
+ * Key Locker feature check and internal key support
+ */
+
+#include <linux/random.h>
+
+#include <asm/keylocker.h>
+#include <asm/fpu/types.h>
+#include <asm/fpu/api.h>
+
+bool check_keylocker_readiness(void)
+{
+	u32 eax, ebx, ecx, edx;
+
+	cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
+	/* BIOS may not enable it on some systems. */
+	if (!(ebx & KEYLOCKER_CPUID_EBX_AESKLE)) {
+		pr_debug("x86/keylocker: not fully enabled\n");
+		return false;
+	}
+
+	return true;
+}
+
+/* Load Internal (Wrapping) Key */
+#define LOADIWKEY		".byte 0xf3,0x0f,0x38,0xdc,0xd1"
+#define LOADIWKEY_NUM_OPERANDS	3
+
+static struct key {
+	struct reg_128_bit value[LOADIWKEY_NUM_OPERANDS];
+} keydata;
+
+void make_keylocker_data(void)
+{
+	int i;
+
+	for (i = 0; i < LOADIWKEY_NUM_OPERANDS; i++)
+		get_random_bytes(&keydata.value[i], sizeof(struct reg_128_bit));
+}
+
+void invalidate_keylocker_data(void)
+{
+	memset(&keydata.value, 0, sizeof(struct reg_128_bit) * LOADIWKEY_NUM_OPERANDS);
+}
+
+#define USE_SWKEY	0
+
+bool load_keylocker(void)
+{
+	struct reg_128_bit zeros = { 0 };
+	u32 keysrc = USE_SWKEY;
+	bool err = true;
+
+	kernel_fpu_begin();
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;"
+		      :: "m"(keydata.value[0]),
+			 "m"(keydata.value[1]),
+			 "m"(keydata.value[2]));
+
+	asm volatile (LOADIWKEY CC_SET(z) : CC_OUT(z) (err) : "a"(keysrc));
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
+		      :: "m"(zeros));
+
+	kernel_fpu_end();
+
+	return err ? false : true;
+}
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index de776b2e6046..a01edf46d4c7 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -81,6 +81,7 @@
 #include <asm/spec-ctrl.h>
 #include <asm/hw_irq.h>
 #include <asm/stackprotector.h>
+#include <asm/keylocker.h>
 
 /* representing HT siblings of each logical CPU */
 DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
@@ -1423,6 +1424,7 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
 	nmi_selftest();
 	impress_friends();
 	mtrr_aps_init();
+	invalidate_keylocker_data();
 }
 
 static int __initdata setup_possible_cpus = -1;
diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
index ec31f5b60323..3e241cddfc86 100644
--- a/arch/x86/lib/x86-opcode-map.txt
+++ b/arch/x86/lib/x86-opcode-map.txt
@@ -795,7 +795,7 @@ cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev)
 cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev)
 cf: vgf2p8mulb Vx,Wx (66)
 db: VAESIMC Vdq,Wdq (66),(v1)
-dc: vaesenc Vx,Hx,Wx (66)
+dc: vaesenc Vx,Hx,Wx (66) | loadiwkey Vx,Hx (F3)
 dd: vaesenclast Vx,Hx,Wx (66)
 de: vaesdec Vx,Hx,Wx (66)
 df: vaesdeclast Vx,Hx,Wx (66)
diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
index ec31f5b60323..3e241cddfc86 100644
--- a/tools/arch/x86/lib/x86-opcode-map.txt
+++ b/tools/arch/x86/lib/x86-opcode-map.txt
@@ -795,7 +795,7 @@ cc: sha256msg1 Vdq,Wdq | vrsqrt28ps/d Vx,Wx (66),(ev)
 cd: sha256msg2 Vdq,Wdq | vrsqrt28ss/d Vx,Hx,Wx (66),(ev)
 cf: vgf2p8mulb Vx,Wx (66)
 db: VAESIMC Vdq,Wdq (66),(v1)
-dc: vaesenc Vx,Hx,Wx (66)
+dc: vaesenc Vx,Hx,Wx (66) | loadiwkey Vx,Hx (F3)
 dd: vaesenclast Vx,Hx,Wx (66)
 de: vaesdec Vx,Hx,Wx (66)
 df: vaesdeclast Vx,Hx,Wx (66)
-- 
2.17.1



* [RFC PATCH 3/8] x86/msr-index: Add MSRs for Key Locker internal key
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 1/8] x86/cpufeature: Enumerate Key Locker feature Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 2/8] x86/cpu: Load Key Locker internal key at boot-time Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states Chang S. Bae
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae

The Key Locker internal key held in the CPU state can be backed up in a
platform register. The backup can also be copied back into the CPU state.
This mechanism is useful to restore the key after system sleep.

Add MSRs for the internal key backup, copy, and status check.
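
As a rough sketch of how a later patch in this series uses these MSRs
(declarations shown, retries and error handling omitted):

	u64 status;
	bool copied;

	/* Request a backup of the loaded internal key into the platform scope. */
	wrmsrl(MSR_IA32_COPY_LOCAL_TO_PLATFORM, 1);

	/* On wake-up, check that the backup itself is still valid ... */
	rdmsrl(MSR_IA32_IWKEYBACKUP_STATUS, status);

	/* ... then copy it back into this CPU and confirm the copy succeeded. */
	wrmsrl(MSR_IA32_COPY_PLATFORM_TO_LOCAL, 1);
	rdmsrl(MSR_IA32_COPY_STATUS, status);
	copied = status & BIT(0);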

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/include/asm/msr-index.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 972a34d93505..c0b9157806f7 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -922,4 +922,10 @@
 #define MSR_VM_IGNNE                    0xc0010115
 #define MSR_VM_HSAVE_PA                 0xc0010117
 
+/* MSRs for Key Locker Internal (Wrapping) Key management */
+#define MSR_IA32_COPY_LOCAL_TO_PLATFORM	0x00000d91
+#define MSR_IA32_COPY_PLATFORM_TO_LOCAL	0x00000d92
+#define MSR_IA32_COPY_STATUS		0x00000990
+#define MSR_IA32_IWKEYBACKUP_STATUS	0x00000991
+
 #endif /* _ASM_X86_MSR_INDEX_H */
-- 
2.17.1



* [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (2 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 3/8] x86/msr-index: Add MSRs for Key Locker internal key Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-17 19:10   ` Eric Biggers
  2021-01-28 10:34   ` Rafael J. Wysocki
  2020-12-16 17:41 ` [RFC PATCH 5/8] x86/cpu: Add a config option and a chicken bit for Key Locker Chang S. Bae
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae,
	linux-pm

When the system enters these sleep states, the internal key gets reset.
Since this system transition is transparent to userspace, the internal key
needs to be restored properly.

Key Locker provides a mechanism to back up the internal key in non-volatile
memory. The kernel requests a backup right after the key is loaded at
boot-time and copies it back when the system wakes up.

The backup is not trusted across the S5 sleep state. It is overwritten by a
new key at the next boot.

On a system with the S3/4 states, enable the feature only when the backup
mechanism is supported.

Disable the feature when the copy fails (or the backup is corrupted).
Shutting the system down is considered too noisy. Loading a new key is an
option only when threads can be synchronously suspended.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 arch/x86/include/asm/keylocker.h | 12 ++++++++
 arch/x86/kernel/cpu/common.c     | 25 +++++++++++-----
 arch/x86/kernel/keylocker.c      | 51 ++++++++++++++++++++++++++++++++
 arch/x86/power/cpu.c             | 34 +++++++++++++++++++++
 4 files changed, 115 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index daf0734a4095..722574c305c2 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -6,6 +6,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/bits.h>
+#include <asm/msr.h>
 
 #define KEYLOCKER_CPUID                0x019
 #define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0)
@@ -25,5 +26,16 @@ void invalidate_keylocker_data(void);
 #define invalidate_keylocker_data() do { } while (0)
 #endif
 
+static inline u64 read_keylocker_backup_status(void)
+{
+	u64 status;
+
+	rdmsrl(MSR_IA32_IWKEYBACKUP_STATUS, status);
+	return status;
+}
+
+void backup_keylocker(void);
+bool copy_keylocker(void);
+
 #endif /*__ASSEMBLY__ */
 #endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index d675075848bb..a446d5aff08f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -463,24 +463,35 @@ __setup("nofsgsbase", x86_nofsgsbase_setup);
 
 static __always_inline void setup_keylocker(struct cpuinfo_x86 *c)
 {
-	bool keyloaded;
-
 	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
 		goto out;
 
 	cr4_set_bits(X86_CR4_KEYLOCKER);
 
 	if (c == &boot_cpu_data) {
+		bool keyloaded;
+
 		if (!check_keylocker_readiness())
 			goto disable_keylocker;
 
 		make_keylocker_data();
-	}
 
-	keyloaded = load_keylocker();
-	if (!keyloaded) {
-		pr_err_once("x86/keylocker: Failed to load internal key\n");
-		goto disable_keylocker;
+		keyloaded = load_keylocker();
+		if (!keyloaded) {
+			pr_err("x86/keylocker: Failed to load internal key\n");
+			goto disable_keylocker;
+		}
+
+		backup_keylocker();
+	} else {
+		bool keycopied;
+
+		/* NB: When system wakes up, this path recovers the internal key. */
+		keycopied = copy_keylocker();
+		if (!keycopied) {
+			pr_err_once("x86/keylocker: Failed to copy internal key\n");
+			goto disable_keylocker;
+		}
 	}
 
 	pr_info_once("x86/keylocker: Activated\n");
diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index e455d806b80c..229875ac80d5 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -5,11 +5,15 @@
  */
 
 #include <linux/random.h>
+#include <linux/acpi.h>
+#include <linux/delay.h>
 
 #include <asm/keylocker.h>
 #include <asm/fpu/types.h>
 #include <asm/fpu/api.h>
 
+static bool keybackup_available;
+
 bool check_keylocker_readiness(void)
 {
 	u32 eax, ebx, ecx, edx;
@@ -21,6 +25,14 @@ bool check_keylocker_readiness(void)
 		return false;
 	}
 
+	keybackup_available = (ebx & KEYLOCKER_CPUID_EBX_BACKUP);
+	/* Internal Key backup is essential with S3/4 states */
+	if (!keybackup_available &&
+	    (acpi_sleep_state_supported(ACPI_STATE_S3) ||
+	     acpi_sleep_state_supported(ACPI_STATE_S4))) {
+		pr_debug("x86/keylocker: no key backup support with possible S3/4\n");
+		return false;
+	}
 	return true;
 }
 
@@ -29,6 +41,7 @@ bool check_keylocker_readiness(void)
 #define LOADIWKEY_NUM_OPERANDS	3
 
 static struct key {
+	bool valid;
 	struct reg_128_bit value[LOADIWKEY_NUM_OPERANDS];
 } keydata;
 
@@ -38,11 +51,15 @@ void make_keylocker_data(void)
 
 	for (i = 0; i < LOADIWKEY_NUM_OPERANDS; i++)
 		get_random_bytes(&keydata.value[i], sizeof(struct reg_128_bit));
+
+	keydata.valid = true;
 }
 
 void invalidate_keylocker_data(void)
 {
 	memset(&keydata.value, 0, sizeof(struct reg_128_bit) * LOADIWKEY_NUM_OPERANDS);
+
+	keydata.valid = false;
 }
 
 #define USE_SWKEY	0
@@ -69,3 +86,37 @@ bool load_keylocker(void)
 
 	return err ? false : true;
 }
+
+void backup_keylocker(void)
+{
+	if (keybackup_available)
+		wrmsrl(MSR_IA32_COPY_LOCAL_TO_PLATFORM, 1);
+}
+
+#define KEYRESTORE_RETRY	1
+
+bool copy_keylocker(void)
+{
+	bool copied = false;
+	int i;
+
+	/* Use valid key data when available */
+	if (keydata.valid)
+		return load_keylocker();
+
+	if (!keybackup_available)
+		return copied;
+
+	wrmsrl(MSR_IA32_COPY_PLATFORM_TO_LOCAL, 1);
+
+	for (i = 0; (i <= KEYRESTORE_RETRY) && !copied; i++) {
+		u64 status;
+
+		if (i)
+			udelay(1);
+		rdmsrl(MSR_IA32_COPY_STATUS, status);
+		copied = status & BIT(0) ? true : false;
+	}
+
+	return copied;
+}
diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
index db1378c6ff26..5412440e7c5c 100644
--- a/arch/x86/power/cpu.c
+++ b/arch/x86/power/cpu.c
@@ -25,6 +25,7 @@
 #include <asm/cpu.h>
 #include <asm/mmu_context.h>
 #include <asm/cpu_device_id.h>
+#include <asm/keylocker.h>
 
 #ifdef CONFIG_X86_32
 __visible unsigned long saved_context_ebx;
@@ -57,6 +58,38 @@ static void msr_restore_context(struct saved_context *ctxt)
 	}
 }
 
+/*
+ * The boot CPU executes this function, while other CPUs restore the key
+ * through the setup path in setup_keylocker().
+ */
+static void restore_keylocker(void)
+{
+	u64 keybackup_status;
+	bool keycopied;
+
+	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+		return;
+
+	keybackup_status = read_keylocker_backup_status();
+	if (!(keybackup_status & BIT(0))) {
+		pr_err("x86/keylocker: internal key restoration failed with %s\n",
+		       (keybackup_status & BIT(2)) ? "read error" : "invalid status");
+		WARN_ON(1);
+		goto disable_keylocker;
+	}
+
+	keycopied = copy_keylocker();
+	if (keycopied)
+		return;
+
+	pr_err("x86/keylocker: internal key copy failure\n");
+
+disable_keylocker:
+	pr_info("x86/keylocker: Disabled with internal key restoration failure\n");
+	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
+	cr4_clear_bits(X86_CR4_KEYLOCKER);
+}
+
 /**
  *	__save_processor_state - save CPU registers before creating a
  *		hibernation image and before restoring the memory state from it
@@ -265,6 +298,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
 	mtrr_bp_restore();
 	perf_restore_debug_store();
 	msr_restore_context(ctxt);
+	restore_keylocker();
 
 	c = &cpu_data(smp_processor_id());
 	if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
-- 
2.17.1



* [RFC PATCH 5/8] x86/cpu: Add a config option and a chicken bit for Key Locker
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (3 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-16 17:41 ` [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance Chang S. Bae
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae,
	linux-doc

Add a kernel config option to enable the feature (disabled by default) at
compile-time.

Also, add a new command-line parameter, 'nokeylocker', to disable the
feature at boot-time.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: x86@kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 Documentation/admin-guide/kernel-parameters.txt |  2 ++
 arch/x86/Kconfig                                | 14 ++++++++++++++
 arch/x86/kernel/cpu/common.c                    | 16 ++++++++++++++++
 3 files changed, 32 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 44fde25bb221..c389ad8fb9de 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3220,6 +3220,8 @@
 
 	nohugeiomap	[KNL,X86,PPC,ARM64] Disable kernel huge I/O mappings.
 
+	nokeylocker	[X86] Disables Key Locker hardware feature.
+
 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fbf26e0f7a6a..7623af32f919 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1886,6 +1886,20 @@ config X86_INTEL_MEMORY_PROTECTION_KEYS
 
 	  If unsure, say y.
 
+config X86_KEYLOCKER
+	prompt "Key Locker"
+	def_bool n
+	depends on CPU_SUP_INTEL
+	help
+	  Key Locker is a new security feature to protect a data encryption
+	  key for the Advanced Encryption Standard (AES) algorithm.
+
+	  When enabled, every CPU has a unique internal key to wrap the AES
+	  key into an encoded format.  The internal key is not accessible
+	  to software once loaded.
+
+	  If unsure, say y.
+
 choice
 	prompt "TSX enable mode"
 	depends on CPU_SUP_INTEL
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index a446d5aff08f..ba5bd79fbac2 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -354,6 +354,22 @@ static __always_inline void setup_umip(struct cpuinfo_x86 *c)
 /* These bits should not change their value after CPU init is finished. */
 static const unsigned long cr4_pinned_mask =
 	X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP | X86_CR4_FSGSBASE;
+
+static __init int x86_nokeylocker_setup(char *arg)
+{
+	/* Expect an exact match without trailing characters */
+	if (strlen(arg))
+		return 0;
+
+	if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
+		return 1;
+
+	setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
+	pr_info("x86/keylocker: Disabled by kernel command line\n");
+	return 1;
+}
+__setup("nokeylocker", x86_nokeylocker_setup);
+
 static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
 static unsigned long cr4_pinned_bits __ro_after_init;
 
-- 
2.17.1



* [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (4 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 5/8] x86/cpu: Add a config option and a chicken bit for Key Locker Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-18  9:59   ` Peter Zijlstra
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
                   ` (3 subsequent siblings)
  9 siblings, 1 reply; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae,
	linux-kselftest

The test verifies that the internal key is the same on all CPUs.

It performs the verification again after a Suspend-To-RAM (ACPI S3) cycle.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-kselftest@vger.kernel.org
---
 tools/testing/selftests/x86/Makefile    |   2 +-
 tools/testing/selftests/x86/keylocker.c | 177 ++++++++++++++++++++++++
 2 files changed, 178 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/x86/keylocker.c

diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
index 6703c7906b71..c53e496d77b2 100644
--- a/tools/testing/selftests/x86/Makefile
+++ b/tools/testing/selftests/x86/Makefile
@@ -13,7 +13,7 @@ CAN_BUILD_WITH_NOPIE := $(shell ./check_cc.sh $(CC) trivial_program.c -no-pie)
 TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
 			check_initial_reg_state sigreturn iopl ioperm \
 			test_vdso test_vsyscall mov_ss_trap \
-			syscall_arg_fault fsgsbase_restore
+			syscall_arg_fault fsgsbase_restore keylocker
 TARGETS_C_32BIT_ONLY := entry_from_vm86 test_syscall_vdso unwind_vdso \
 			test_FCMOV test_FCOMI test_FISTTP \
 			vdso_restorer
diff --git a/tools/testing/selftests/x86/keylocker.c b/tools/testing/selftests/x86/keylocker.c
new file mode 100644
index 000000000000..3d69c1615bca
--- /dev/null
+++ b/tools/testing/selftests/x86/keylocker.c
@@ -0,0 +1,177 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * keylocker.c, validating the internal key management
+ */
+#undef _GNU_SOURCE
+#define _GNU_SOURCE 1
+
+#include <stdio.h>
+#include <stdbool.h>
+#include <string.h>
+#include <fcntl.h>
+#include <err.h>
+#include <sched.h>
+#include <setjmp.h>
+#include <signal.h>
+#include <unistd.h>
+
+#define HANDLE_SIZE	48
+
+static bool keylocker_disabled;
+
+/* Encode a 128-bit key to a 384-bit handle */
+static inline void __encode_key(char *handle)
+{
+	static const unsigned char aeskey[] = { 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38,
+						0x71, 0x77, 0x74, 0x69, 0x6f, 0x6b, 0x6c, 0x78 };
+
+	asm volatile ("movdqu %0, %%xmm0" : : "m" (*aeskey) :);
+
+	/* Set no restriction to the handle */
+	asm volatile ("mov $0, %%eax" :);
+
+	/* ENCODEKEY128 %EAX */
+	asm volatile (".byte 0xf3, 0xf, 0x38, 0xfa, 0xc0");
+
+	asm volatile ("movdqu %%xmm0, %0; movdqu %%xmm1, %1; movdqu %%xmm2, %2;"
+		      : "=m" (handle[0]), "=m" (handle[0x10]), "=m" (handle[0x20]));
+}
+
+static jmp_buf jmpbuf;
+
+static void handle_sigill(int sig, siginfo_t *si, void *ctx_void)
+{
+	keylocker_disabled = true;
+	siglongjmp(jmpbuf, 1);
+}
+
+static bool encode_key(char *handle)
+{
+	bool success = true;
+	struct sigaction sa;
+	int ret;
+
+	memset(&sa, 0, sizeof(sa));
+
+	/* Set signal handler */
+	sa.sa_flags = SA_SIGINFO;
+	sa.sa_sigaction = handle_sigill;
+	sigemptyset(&sa.sa_mask);
+	ret = sigaction(SIGILL, &sa, 0);
+	if (ret)
+		err(1, "sigaction");
+
+	if (sigsetjmp(jmpbuf, 1))
+		success = false;
+	else
+		__encode_key(handle);
+
+	/* Clear signal handler */
+	sa.sa_flags = 0;
+	sa.sa_sigaction = NULL;
+	sa.sa_handler = SIG_DFL;
+	sigemptyset(&sa.sa_mask);
+	ret = sigaction(SIGILL, &sa, 0);
+	if (ret)
+		err(1, "sigaction");
+
+	return success;
+}
+
+/*
+ * Test if the internal key is the same in all the CPUs:
+ *
+ * Since the value is not readable, compare the encoded output of an AES key
+ * between CPUs.
+ */
+
+static int nerrs;
+
+static unsigned char cpu0_handle[HANDLE_SIZE] = { 0 };
+
+static void test_internal_key(bool slept, long cpus)
+{
+	int cpu, errs;
+
+	printf("Test the internal key consistency between CPUs\n");
+
+	for (cpu = 0, errs = 0; cpu < cpus; cpu++) {
+		char handle[HANDLE_SIZE] = { 0 };
+		cpu_set_t mask;
+		bool success;
+
+		CPU_ZERO(&mask);
+		CPU_SET(cpu, &mask);
+		sched_setaffinity(0, sizeof(cpu_set_t), &mask);
+
+		success = encode_key(handle);
+		if (!success) {
+			/* The encode should succeed after the S3 sleep */
+			if (slept)
+				errs++;
+			printf("[%s]\tKey Locker disabled at CPU%d\n",
+			       slept ? "FAIL" : "NOTE", cpu);
+			continue;
+		}
+
+		if (cpu == 0 && !slept) {
+			/* Record the first handle value as reference */
+			memcpy(cpu0_handle, handle, HANDLE_SIZE);
+		} else if (memcmp(cpu0_handle, handle, HANDLE_SIZE)) {
+			printf("[FAIL]\tMismatched internal key at CPU%d\n",
+			       cpu);
+			errs++;
+		}
+	}
+
+	if (errs == 0 && !keylocker_disabled)
+		printf("[OK]\tAll the internal keys are the same\n");
+	else
+		nerrs += errs;
+}
+
+static void switch_to_sleep(bool *slept)
+{
+	ssize_t bytes;
+	int fd;
+
+	printf("Transition to Suspend-To-RAM state\n");
+
+	fd = open("/sys/power/mem_sleep", O_RDWR);
+	if (fd < 0)
+		err(1, "Open /sys/power/mem_sleep");
+
+	bytes = write(fd, "deep", strlen("deep"));
+	if (bytes != strlen("deep"))
+		err(1, "Write /sys/power/mem_sleep");
+	close(fd);
+
+	fd = open("/sys/power/state", O_RDWR);
+	if (fd < 0)
+		err(1, "Open /sys/power/state");
+
+	bytes = write(fd, "mem", strlen("mem"));
+	if (bytes != strlen("mem"))
+		err(1, "Write /sys/power/state");
+	close(fd);
+
+	printf("Wake up from Suspend-To-RAM state\n");
+	*slept = true;
+}
+
+int main(void)
+{
+	bool slept = false;
+	long cpus;
+
+	cpus = sysconf(_SC_NPROCESSORS_ONLN);
+	printf("%ld CPUs in the system\n", cpus);
+
+	test_internal_key(slept, cpus);
+	if (keylocker_disabled)
+		return nerrs ? 1 : 0;
+
+	switch_to_sleep(&slept);
+	test_internal_key(slept, cpus);
+	return nerrs ? 1 : 0;
+}
-- 
2.17.1



* [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (5 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-17 10:16   ` Ard Biesheuvel
                     ` (3 more replies)
  2020-12-16 17:41 ` [RFC PATCH 8/8] x86/cpu: Support the hardware randomization option for Key Locker internal key Chang S. Bae
                   ` (2 subsequent siblings)
  9 siblings, 4 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae

Key Locker (KL) is Intel's new security feature that protects the AES key
at the time of data transformation. New AES SIMD instructions -- as a
successor of Intel's AES-NI -- are provided to encode an AES key and
reference it for the AES algorithm.

The new instructions support 128-bit and 256-bit keys. While receiving a
192-bit key is not desirable, the AES-NI instructions are used to serve
that key size.

The new instructions are operational in both 32-bit and 64-bit modes.

Add a set of new macros for the new instructions so that no new binutils
version is required.

The implemented methods cover a single block as well as the ECB, CBC, CTR,
and XTS modes. The methods are not compatible with other AES
implementations, as they access an encoded key instead of the normal AES
key.

The setkey() call encodes an AES key. The user may discard the original AES
key once it is encoded, as the encrypt()/decrypt() methods do not need it.

Most of the C code follows the AES-NI implementation. It is registered with
a higher priority than AES-NI since it provides key protection.
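
For reference, callers reach this code through the regular Crypto API. A
minimal in-kernel usage sketch (hypothetical helper name, 128-bit key,
synchronous wait, error paths trimmed) could look like:

#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int encrypt_buffer(u8 *buf, unsigned int len, const u8 *key, u8 *iv)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* setkey() wraps the AES key; the raw key need not be kept around. */
	err = crypto_skcipher_setkey(tfm, key, AES_KEYSIZE_128);
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}

Whether the Key Locker implementation is actually selected depends on the
priorities of the registered "cbc(aes)" providers on the running system.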

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: x86@kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/crypto/Makefile           |   3 +
 arch/x86/crypto/aeskl-intel_asm.S  | 881 +++++++++++++++++++++++++++++
 arch/x86/crypto/aeskl-intel_glue.c | 697 +++++++++++++++++++++++
 arch/x86/include/asm/inst.h        | 201 +++++++
 crypto/Kconfig                     |  28 +
 5 files changed, 1810 insertions(+)
 create mode 100644 arch/x86/crypto/aeskl-intel_asm.S
 create mode 100644 arch/x86/crypto/aeskl-intel_glue.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index a31de0c6ccde..8e2e34e73a21 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -54,6 +54,9 @@ obj-$(CONFIG_CRYPTO_AES_NI_INTEL) += aesni-intel.o
 aesni-intel-y := aesni-intel_asm.o aesni-intel_glue.o
 aesni-intel-$(CONFIG_64BIT) += aesni-intel_avx-x86_64.o aes_ctrby8_avx-x86_64.o
 
+obj-$(CONFIG_CRYPTO_AES_KL) += aeskl-intel.o
+aeskl-intel-y := aeskl-intel_asm.o aesni-intel_asm.o aeskl-intel_glue.o
+
 obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
 sha1-ssse3-y := sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ssse3_glue.o
 sha1-ssse3-$(CONFIG_AS_SHA1_NI) += sha1_ni_asm.o
diff --git a/arch/x86/crypto/aeskl-intel_asm.S b/arch/x86/crypto/aeskl-intel_asm.S
new file mode 100644
index 000000000000..80ddeda11bdf
--- /dev/null
+++ b/arch/x86/crypto/aeskl-intel_asm.S
@@ -0,0 +1,881 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Implement AES algorithm using Intel AES Key Locker instructions.
+ *
+ * Most code is based on the AES-NI implementation, aesni-intel_asm.S
+ *
+ */
+
+#include <linux/linkage.h>
+#include <asm/inst.h>
+#include <asm/frame.h>
+
+#define STATE1	%xmm0
+#define STATE2	%xmm1
+#define STATE3	%xmm2
+#define STATE4	%xmm3
+#define STATE5	%xmm4
+#define STATE6	%xmm5
+#define STATE7	%xmm6
+#define STATE8	%xmm7
+#define STATE	STATE1
+
+#ifdef __x86_64__
+#define IN1	%xmm8
+#define IN2	%xmm9
+#define IN3	%xmm10
+#define IN4	%xmm11
+#define IN5	%xmm12
+#define IN6	%xmm13
+#define IN7	%xmm14
+#define IN8	%xmm15
+#define IN	IN1
+#else
+#define IN	%xmm1
+#endif
+
+#ifdef __x86_64__
+#define AREG	%rax
+#define HANDLEP	%rdi
+#define OUTP	%rsi
+#define KLEN	%r9d
+#define INP	%rdx
+#define T1	%r10
+#define LEN	%rcx
+#define IVP	%r8
+#else
+#define AREG	%eax
+#define HANDLEP	%edi
+#define OUTP	AREG
+#define KLEN	%ebx
+#define INP	%edx
+#define T1    %ecx
+#define LEN %esi
+#define IVP %ebp
+#endif
+
+#define UKEYP OUTP
+
+/*
+ * int __aeskl_setkey(struct crypto_aes_ctx *ctx,
+ *		      const u8 *in_key,
+ *		      unsigned int key_len)
+ */
+SYM_FUNC_START(__aeskl_setkey)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	push HANDLEP
+	movl (FRAME_OFFSET+8)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+12)(%esp), UKEYP	# in_key
+	movl (FRAME_OFFSET+16)(%esp), %edx	# key_len
+#endif
+	movl %edx, 480(HANDLEP)
+	movdqu (UKEYP), STATE1
+	mov $1, %eax
+	cmp $16, %dl
+	je .Lsetkey_128
+
+	movdqu 0x10(UKEYP), STATE2
+	ENCODEKEY256 %eax, %eax
+	movdqu STATE4, 0x30(HANDLEP)
+	jmp .Lsetkey_end
+.Lsetkey_128:
+	ENCODEKEY128 %eax, %eax
+
+.Lsetkey_end:
+	movdqu STATE1, (HANDLEP)
+	movdqu STATE2, 0x10(HANDLEP)
+	movdqu STATE3, 0x20(HANDLEP)
+
+	xor AREG, AREG
+#ifndef __x86_64__
+	popl HANDLEP
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_setkey)
+
+/*
+ * int __aeskl_enc1(const void *ctx,
+ *		    u8 *dst,
+ *		    const u8 *src)
+ */
+SYM_FUNC_START(__aeskl_enc1)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+12)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+16)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+20)(%esp), INP	# src
+#endif
+	movdqu (INP), STATE
+	movl 480(HANDLEP), KLEN
+
+	cmp $16, KLEN
+	je .Lenc_128
+	AESENC256KL HANDLEP, STATE
+	jz .Lenc_err
+	jmp .Lenc_noerr
+.Lenc_128:
+	AESENC128KL HANDLEP, STATE
+	jz .Lenc_err
+
+.Lenc_noerr:
+	xor AREG, AREG
+	jmp .Lenc_end
+.Lenc_err:
+	mov $1, AREG
+.Lenc_end:
+	movdqu STATE, (OUTP)
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_enc1)
+
+/*
+ * int __aeskl_dec1(const void *ctx,
+ *		    u8 *dst,
+ *		    const u8 *src)
+ */
+SYM_FUNC_START(__aeskl_dec1)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+12)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+16)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+20)(%esp), INP	# src
+#endif
+	movdqu (INP), STATE
+	mov 480(HANDLEP), KLEN
+
+	cmp $16, KLEN
+	je .Ldec_128
+	AESDEC256KL HANDLEP, STATE
+	jz .Ldec_err
+	jmp .Ldec_noerr
+.Ldec_128:
+	AESDEC128KL HANDLEP, STATE
+	jz .Ldec_err
+
+.Ldec_noerr:
+	xor AREG, AREG
+	jmp .Ldec_end
+.Ldec_err:
+	mov $1, AREG
+.Ldec_end:
+	movdqu STATE, (OUTP)
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_dec1)
+
+/*
+ * int __aeskl_ecb_enc(struct crypto_aes_ctx *ctx,
+ *		       const u8 *dst,
+ *		       u8 *src,
+ *		       size_t len)
+ */
+SYM_FUNC_START(__aeskl_ecb_enc)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl LEN
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+16)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+20)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+24)(%esp), INP	# src
+	movl (FRAME_OFFSET+28)(%esp), LEN	# len
+#endif
+	test LEN, LEN
+	jz .Lecb_enc_noerr
+	mov 480(HANDLEP), KLEN
+	cmp $16, LEN
+	jb .Lecb_enc_noerr
+	cmp $128, LEN
+	jb .Lecb_enc1
+
+.align 4
+.Lecb_enc8:
+	movdqu (INP), STATE1
+	movdqu 0x10(INP), STATE2
+	movdqu 0x20(INP), STATE3
+	movdqu 0x30(INP), STATE4
+	movdqu 0x40(INP), STATE5
+	movdqu 0x50(INP), STATE6
+	movdqu 0x60(INP), STATE7
+	movdqu 0x70(INP), STATE8
+
+	cmp $16, KLEN
+	je .Lecb_enc8_128
+	AESENCWIDE256KL HANDLEP
+	jz .Lecb_enc_err
+	jmp .Lecb_enc8_end
+.Lecb_enc8_128:
+	AESENCWIDE128KL HANDLEP
+	jz .Lecb_enc_err
+
+.Lecb_enc8_end:
+	movdqu STATE1, (OUTP)
+	movdqu STATE2, 0x10(OUTP)
+	movdqu STATE3, 0x20(OUTP)
+	movdqu STATE4, 0x30(OUTP)
+	movdqu STATE5, 0x40(OUTP)
+	movdqu STATE6, 0x50(OUTP)
+	movdqu STATE7, 0x60(OUTP)
+	movdqu STATE8, 0x70(OUTP)
+
+	sub $128, LEN
+	add $128, INP
+	add $128, OUTP
+	cmp $128, LEN
+	jge .Lecb_enc8
+	cmp $16, LEN
+	jb .Lecb_enc_noerr
+
+.align 4
+.Lecb_enc1:
+	movdqu (INP), STATE1
+	cmp $16, KLEN
+	je .Lecb_enc1_128
+	AESENC256KL HANDLEP, STATE
+	jz .Lecb_enc_err
+	jmp .Lecb_enc1_end
+.Lecb_enc1_128:
+	AESENC128KL HANDLEP, STATE
+	jz .Lecb_enc_err
+
+.Lecb_enc1_end:
+	movdqu STATE1, (OUTP)
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lecb_enc1
+
+.Lecb_enc_noerr:
+	xor AREG, AREG
+	jmp .Lecb_enc_end
+.Lecb_enc_err:
+	mov $1, AREG
+.Lecb_enc_end:
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+	popl LEN
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_ecb_enc)
+
+/*
+ * int __aeskl_ecb_dec(struct crypto_aes_ctx *ctx,
+ *		       const u8 *dst,
+ *		       u8 *src,
+ *		       size_t len);
+ */
+SYM_FUNC_START(__aeskl_ecb_dec)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl LEN
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+16)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+20)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+24)(%esp), INP	# src
+	movl (FRAME_OFFSET+28)(%esp), LEN	# len
+#endif
+
+	test LEN, LEN
+	jz .Lecb_dec_noerr
+	mov 480(HANDLEP), KLEN
+	cmp $16, LEN
+	jb .Lecb_dec_noerr
+	cmp $128, LEN
+	jb .Lecb_dec1
+
+.align 4
+.Lecb_dec8:
+	movdqu (INP), STATE1
+	movdqu 0x10(INP), STATE2
+	movdqu 0x20(INP), STATE3
+	movdqu 0x30(INP), STATE4
+	movdqu 0x40(INP), STATE5
+	movdqu 0x50(INP), STATE6
+	movdqu 0x60(INP), STATE7
+	movdqu 0x70(INP), STATE8
+
+	cmp $16, KLEN
+	je .Lecb_dec8_128
+	AESDECWIDE256KL HANDLEP
+	jz .Lecb_dec_err
+	jmp .Lecb_dec8_end
+.Lecb_dec8_128:
+	AESDECWIDE128KL HANDLEP
+	jz .Lecb_dec_err
+
+.Lecb_dec8_end:
+	movdqu STATE1, (OUTP)
+	movdqu STATE2, 0x10(OUTP)
+	movdqu STATE3, 0x20(OUTP)
+	movdqu STATE4, 0x30(OUTP)
+	movdqu STATE5, 0x40(OUTP)
+	movdqu STATE6, 0x50(OUTP)
+	movdqu STATE7, 0x60(OUTP)
+	movdqu STATE8, 0x70(OUTP)
+
+	sub $128, LEN
+	add $128, INP
+	add $128, OUTP
+	cmp $128, LEN
+	jge .Lecb_dec8
+	cmp $16, LEN
+	jb .Lecb_dec_noerr
+
+.align 4
+.Lecb_dec1:
+	movdqu (INP), STATE1
+	cmp $16, KLEN
+	je .Lecb_dec1_128
+	AESDEC256KL HANDLEP, STATE
+	jz .Lecb_dec_err
+	jmp .Lecb_dec1_end
+.Lecb_dec1_128:
+	AESDEC128KL HANDLEP, STATE
+	jz .Lecb_dec_err
+
+.Lecb_dec1_end:
+	movdqu STATE1, (OUTP)
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lecb_dec1
+
+.Lecb_dec_noerr:
+	xor AREG, AREG
+	jmp .Lecb_dec_end
+.Lecb_dec_err:
+	mov $1, AREG
+.Lecb_dec_end:
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+	popl LEN
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_ecb_dec)
+
+/*
+ * int __aeskl_cbc_enc(struct crypto_aes_ctx *ctx,
+ *		       const u8 *dst,
+ *		       u8 *src,
+ *		       size_t len,
+ *		       u8 *iv)
+ */
+SYM_FUNC_START(__aeskl_cbc_enc)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl IVP
+	pushl LEN
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+20)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+24)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+28)(%esp), INP	# src
+	movl (FRAME_OFFSET+32)(%esp), LEN	# len
+	movl (FRAME_OFFSET+36)(%esp), IVP	# iv
+#endif
+
+	cmp $16, LEN
+	jb .Lcbc_enc_noerr
+	mov 480(HANDLEP), KLEN
+	movdqu (IVP), STATE
+
+.align 4
+.Lcbc_enc1:
+	movdqu (INP), IN
+	pxor IN, STATE
+
+	cmp $16, KLEN
+	je .Lcbc_enc1_128
+	AESENC256KL HANDLEP, STATE
+	jz .Lcbc_enc_err
+	jmp .Lcbc_enc1_end
+.Lcbc_enc1_128:
+	AESENC128KL HANDLEP, STATE
+	jz .Lcbc_enc_err
+
+.Lcbc_enc1_end:
+	movdqu STATE, (OUTP)
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lcbc_enc1
+	movdqu STATE, (IVP)
+
+.Lcbc_enc_noerr:
+	xor AREG, AREG
+	jmp .Lcbc_enc_end
+.Lcbc_enc_err:
+	mov $1, AREG
+.Lcbc_enc_end:
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+	popl LEN
+	popl IVP
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_cbc_enc)
+
+/*
+ * int __aeskl_cbc_dec(struct crypto_aes_ctx *ctx,
+ *		       const u8 *dst,
+ *		       u8 *src,
+ *		       size_t len,
+ *		       u8 *iv)
+ */
+SYM_FUNC_START(__aeskl_cbc_dec)
+	FRAME_BEGIN
+#ifndef __x86_64__
+	pushl IVP
+	pushl LEN
+	pushl HANDLEP
+	pushl KLEN
+	movl (FRAME_OFFSET+20)(%esp), HANDLEP	# ctx
+	movl (FRAME_OFFSET+24)(%esp), OUTP	# dst
+	movl (FRAME_OFFSET+28)(%esp), INP	# src
+	movl (FRAME_OFFSET+32)(%esp), LEN	# len
+	movl (FRAME_OFFSET+36)(%esp), IVP	# iv
+#endif
+
+	cmp $16, LEN
+	jb .Lcbc_dec_noerr
+	mov 480(HANDLEP), KLEN
+#ifdef __x86_64__
+	cmp $128, LEN
+	jb .Lcbc_dec1_pre
+
+.align 4
+.Lcbc_dec8:
+	movdqu 0x0(INP), STATE1
+	movdqu 0x10(INP), STATE2
+	movdqu 0x20(INP), STATE3
+	movdqu 0x30(INP), STATE4
+	movdqu 0x40(INP), STATE5
+	movdqu 0x50(INP), STATE6
+	movdqu 0x60(INP), STATE7
+	movdqu 0x70(INP), STATE8
+
+	movdqu (IVP), IN1
+	movdqa STATE1, IN2
+	movdqa STATE2, IN3
+	movdqa STATE3, IN4
+	movdqa STATE4, IN5
+	movdqa STATE5, IN6
+	movdqa STATE6, IN7
+	movdqa STATE7, IN8
+	movdqu STATE8, (IVP)
+
+	cmp $16, KLEN
+	je .Lcbc_dec8_128
+	AESDECWIDE256KL HANDLEP
+	jz .Lcbc_dec_err
+	jmp .Lcbc_dec8_end
+.Lcbc_dec8_128:
+	AESDECWIDE128KL HANDLEP
+	jz .Lcbc_dec_err
+
+.Lcbc_dec8_end:
+	pxor IN1, STATE1
+	pxor IN2, STATE2
+	pxor IN3, STATE3
+	pxor IN4, STATE4
+	pxor IN5, STATE5
+	pxor IN6, STATE6
+	pxor IN7, STATE7
+	pxor IN8, STATE8
+
+	movdqu STATE1, 0x0(OUTP)
+	movdqu STATE2, 0x10(OUTP)
+	movdqu STATE3, 0x20(OUTP)
+	movdqu STATE4, 0x30(OUTP)
+	movdqu STATE5, 0x40(OUTP)
+	movdqu STATE6, 0x50(OUTP)
+	movdqu STATE7, 0x60(OUTP)
+	movdqu STATE8, 0x70(OUTP)
+
+	sub $128, LEN
+	add $128, INP
+	add $128, OUTP
+	cmp $128, LEN
+	jge .Lcbc_dec8
+	cmp $16, LEN
+	jb .Lcbc_dec_noerr
+#endif
+
+.align 4
+.Lcbc_dec1_pre:
+	movdqu (IVP), STATE3
+.Lcbc_dec1:
+	movdqu (INP), STATE2
+	movdqa STATE2, STATE1
+
+	cmp $16, KLEN
+	je .Lcbc_dec1_128
+	AESDEC256KL HANDLEP, STATE1
+	jz .Lcbc_dec_err
+	jmp .Lcbc_dec1_end
+.Lcbc_dec1_128:
+	AESDEC128KL HANDLEP, STATE1
+	jz .Lcbc_dec_err
+
+.Lcbc_dec1_end:
+	pxor STATE3, STATE1
+	movdqu STATE1, (OUTP)
+	movdqa STATE2, STATE3
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lcbc_dec1
+	movdqu STATE3, (IVP)
+
+.Lcbc_dec_noerr:
+	xor AREG, AREG
+	jmp .Lcbc_dec_end
+.Lcbc_dec_err:
+	mov $1, AREG
+.Lcbc_dec_end:
+#ifndef __x86_64__
+	popl KLEN
+	popl HANDLEP
+	popl LEN
+	popl IVP
+#endif
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_cbc_dec)
+
+
+#ifdef __x86_64__
+
+/*
+ * CTR implementations
+ */
+
+.pushsection .rodata
+.align 16
+.Lbswap_mask:
+	.byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0
+.popsection
+
+.section	.rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16
+.align 16
+.Lgf128mul_x_ble_mask:
+	.octa 0x00000000000000010000000000000087
+
+#define BSWAP_MASK	%xmm10
+#define CTR		%xmm11
+#define INC		%xmm12
+#define IV		%xmm13
+#define TCTR_LOW	%r11
+
+.text
+.align 4
+SYM_FUNC_START_LOCAL(__aeskl_ctr_init)
+	movdqa .Lbswap_mask, BSWAP_MASK
+	movdqa IV, CTR
+	pshufb BSWAP_MASK, CTR
+	mov $1, TCTR_LOW
+	movq TCTR_LOW, INC
+	movq CTR, TCTR_LOW
+	ret
+SYM_FUNC_END(__aeskl_ctr_init)
+
+.align 4
+SYM_FUNC_START_LOCAL(__aeskl_ctr_inc)
+	paddq INC, CTR
+	add $1, TCTR_LOW
+	jnc .Lctr_inc_low
+	pslldq $8, INC
+	paddq INC, CTR
+	psrldq $8, INC
+.Lctr_inc_low:
+	movdqa CTR, IV
+	pshufb BSWAP_MASK, IV
+	ret
+SYM_FUNC_END(__aeskl_ctr_inc)
+
+/*
+ * int __aeskl_ctr_enc(struct crypto_aes_ctx *ctx,
+ *		       const u8 *dst,
+ *		       u8 *src,
+ *		       size_t len,
+ *		       u8 *iv)
+ */
+SYM_FUNC_START(__aeskl_ctr_enc)
+	FRAME_BEGIN
+	cmp $16, LEN
+	jb .Lctr_enc_noerr
+	mov 480(HANDLEP), KLEN
+	movdqu (IVP), IV
+	call __aeskl_ctr_init
+	cmp $128, LEN
+	jb .Lctr_enc1
+
+.align 4
+.Lctr_enc8:
+	movdqa IV, STATE1
+	call __aeskl_ctr_inc
+	movdqa IV, STATE2
+	call __aeskl_ctr_inc
+	movdqa IV, STATE3
+	call __aeskl_ctr_inc
+	movdqa IV, STATE4
+	call __aeskl_ctr_inc
+	movdqa IV, STATE5
+	call __aeskl_ctr_inc
+	movdqa IV, STATE6
+	call __aeskl_ctr_inc
+	movdqa IV, STATE7
+	call __aeskl_ctr_inc
+	movdqa IV, STATE8
+	call __aeskl_ctr_inc
+
+	cmp $16, KLEN
+	je .Lctr_enc8_128
+	AESENCWIDE256KL %rdi
+	jz .Lctr_enc_err
+	jmp .Lctr_enc8_end
+.Lctr_enc8_128:
+	AESENCWIDE128KL %rdi
+	jz .Lctr_enc_err
+.Lctr_enc8_end:
+
+	movdqu (INP), IN1
+	pxor IN1, STATE1
+	movdqu STATE1, (OUTP)
+
+	movdqu 0x10(INP), IN1
+	pxor IN1, STATE2
+	movdqu STATE2, 0x10(OUTP)
+
+	movdqu 0x20(INP), IN1
+	pxor IN1, STATE3
+	movdqu STATE3, 0x20(OUTP)
+
+	movdqu 0x30(INP), IN1
+	pxor IN1, STATE4
+	movdqu STATE4, 0x30(OUTP)
+
+	movdqu 0x40(INP), IN1
+	pxor IN1, STATE5
+	movdqu STATE5, 0x40(OUTP)
+
+	movdqu 0x50(INP), IN1
+	pxor IN1, STATE6
+	movdqu STATE6, 0x50(OUTP)
+
+	movdqu 0x60(INP), IN1
+	pxor IN1, STATE7
+	movdqu STATE7, 0x60(OUTP)
+
+	movdqu 0x70(INP), IN1
+	pxor IN1, STATE8
+	movdqu STATE8, 0x70(OUTP)
+
+	sub $128, LEN
+	add $128, INP
+	add $128, OUTP
+	cmp $128, LEN
+	jge .Lctr_enc8
+	cmp $16, LEN
+	jb .Lctr_enc_end
+
+.align 4
+.Lctr_enc1:
+	movdqa IV, STATE
+	call __aeskl_ctr_inc
+
+	cmp $16, KLEN
+	je .Lctr_enc1_128
+	AESENC256KL HANDLEP, STATE
+	jmp .Lctr_enc1_end
+.Lctr_enc1_128:
+	AESENC128KL HANDLEP, STATE
+
+.Lctr_enc1_end:
+	movdqu (INP), IN
+	pxor IN, STATE
+	movdqu STATE, (OUTP)
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lctr_enc1
+
+.Lctr_enc_end:
+	movdqu IV, (IVP)
+.Lctr_enc_noerr:
+	xor AREG, AREG
+	jmp .Lctr_enc_ret
+.Lctr_enc_err:
+	mov $1, AREG
+.Lctr_enc_ret:
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_ctr_enc)
+
+/*
+ * XTS implementation
+ */
+#define GF128MUL_MASK %xmm10
+
+#define __aeskl_gf128mul_x_ble() \
+	pshufd $0x13, IV, CTR; \
+	paddq IV, IV; \
+	psrad $31, CTR; \
+	pand GF128MUL_MASK, CTR; \
+	pxor CTR, IV;
+
+/*
+ * int __aeskl_xts_crypt8(const struct crypto_aes_ctx *ctx,
+ *			  const u8 *dst,
+ *			  u8 *src,
+ *			  bool enc,
+ *			  u8 *iv)
+ */
+SYM_FUNC_START(__aeskl_xts_crypt8)
+	FRAME_BEGIN
+
+	movdqa .Lgf128mul_x_ble_mask, GF128MUL_MASK
+	movdqu (IVP), IV
+
+	mov 480(HANDLEP), KLEN
+
+	movdqa IV, STATE1
+	movdqu (INP), INC
+	pxor INC, STATE1
+	movdqu IV, (OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE2
+	movdqu 0x10(INP), INC
+	pxor INC, STATE2
+	movdqu IV, 0x10(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE3
+	movdqu 0x20(INP), INC
+	pxor INC, STATE3
+	movdqu IV, 0x20(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE4
+	movdqu 0x30(INP), INC
+	pxor INC, STATE4
+	movdqu IV, 0x30(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE5
+	movdqu 0x40(INP), INC
+	pxor INC, STATE5
+	movdqu IV, 0x40(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE6
+	movdqu 0x50(INP), INC
+	pxor INC, STATE6
+	movdqu IV, 0x50(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE7
+	movdqu 0x60(INP), INC
+	pxor INC, STATE7
+	movdqu IV, 0x60(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqa IV, STATE8
+	movdqu 0x70(INP), INC
+	pxor INC, STATE8
+	movdqu IV, 0x70(OUTP)
+
+	cmpb $0, %cl
+	je  .Lxts_dec8
+	cmp $16, KLEN
+	je .Lxts_enc8_128
+	AESENCWIDE256KL %rdi
+	jz .Lxts_err
+	jmp .Lxts_crypt8_end
+.Lxts_enc8_128:
+	AESENCWIDE128KL %rdi
+	jz .Lxts_err
+	jmp .Lxts_crypt8_end
+.Lxts_dec8:
+	cmp $16, KLEN
+	je .Lxts_dec8_128
+	AESDECWIDE256KL %rdi
+	jz .Lxts_err
+	jmp .Lxts_crypt8_end
+.Lxts_dec8_128:
+	AESDECWIDE128KL %rdi
+	jz .Lxts_err
+
+.Lxts_crypt8_end:
+	movdqu 0x00(OUTP), INC
+	pxor INC, STATE1
+	movdqu STATE1, 0x00(OUTP)
+
+	movdqu 0x10(OUTP), INC
+	pxor INC, STATE2
+	movdqu STATE2, 0x10(OUTP)
+
+	movdqu 0x20(OUTP), INC
+	pxor INC, STATE3
+	movdqu STATE3, 0x20(OUTP)
+
+	movdqu 0x30(OUTP), INC
+	pxor INC, STATE4
+	movdqu STATE4, 0x30(OUTP)
+
+	movdqu 0x40(OUTP), INC
+	pxor INC, STATE5
+	movdqu STATE5, 0x40(OUTP)
+
+	movdqu 0x50(OUTP), INC
+	pxor INC, STATE6
+	movdqu STATE6, 0x50(OUTP)
+
+	movdqu 0x60(OUTP), INC
+	pxor INC, STATE7
+	movdqu STATE7, 0x60(OUTP)
+
+	movdqu 0x70(OUTP), INC
+	pxor INC, STATE8
+	movdqu STATE8, 0x70(OUTP)
+
+	__aeskl_gf128mul_x_ble()
+	movdqu IV, (IVP)
+
+	xor AREG, AREG
+	jmp .Lxts_end
+.Lxts_err:
+	mov $1, AREG
+.Lxts_end:
+	FRAME_END
+	ret
+SYM_FUNC_END(__aeskl_xts_crypt8)
+
+#endif
diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c
new file mode 100644
index 000000000000..9e3f900ad4af
--- /dev/null
+++ b/arch/x86/crypto/aeskl-intel_glue.c
@@ -0,0 +1,697 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Support for AES Key Locker instructions. This file contains glue
+ * code; the real AES implementation is in aeskl-intel_asm.S.
+ *
+ * Most of the code is based on the AES-NI glue code, aesni-intel_glue.c.
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/err.h>
+#include <crypto/algapi.h>
+#include <crypto/aes.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/internal/simd.h>
+#include <crypto/xts.h>
+#include <asm/keylocker.h>
+#include <asm/cpu_device_id.h>
+#include <asm/fpu/api.h>
+#include <asm/simd.h>
+
+#ifdef CONFIG_X86_64
+#include <asm/crypto/glue_helper.h>
+#endif
+
+#define AESKL_ALIGN		16
+#define AESKL_ALIGN_ATTR	__aligned(AESKL_ALIGN)
+#define AES_BLOCK_MASK		(~(AES_BLOCK_SIZE - 1))
+#define RFC4106_HASH_SUBKEY_SZ	16
+#define AESKL_ALIGN_EXTRA	((AESKL_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+#define CRYPTO_AESKL_CTX_SIZE	(sizeof(struct crypto_aes_ctx) + AESKL_ALIGN_EXTRA)
+
+struct aeskl_xts_ctx {
+	u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESKL_ALIGN_ATTR;
+	u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESKL_ALIGN_ATTR;
+};
+
+#define XTS_AESKL_CTX_SIZE	(sizeof(struct aeskl_xts_ctx) + AESKL_ALIGN_EXTRA)
+
+asmlinkage int __aeskl_setkey(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len);
+asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key, unsigned int key_len);
+
+asmlinkage int __aeskl_enc1(const void *ctx, u8 *out, const u8 *in);
+asmlinkage int __aeskl_dec1(const void *ctx, u8 *out, const u8 *in);
+asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
+asmlinkage void aesni_dec(const void *ctx, u8 *out, const u8 *in);
+
+asmlinkage int __aeskl_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len);
+asmlinkage int __aeskl_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len);
+asmlinkage void aesni_ecb_enc(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len);
+asmlinkage void aesni_ecb_dec(struct crypto_aes_ctx *ctx, u8 *out, const u8 *in, unsigned int len);
+
+asmlinkage int __aeskl_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out,
+			       const u8 *in, unsigned int len, u8 *iv);
+asmlinkage int __aeskl_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+			       const u8 *in, unsigned int len, u8 *iv);
+asmlinkage void aesni_cbc_enc(struct crypto_aes_ctx *ctx, u8 *out,
+			      const u8 *in, unsigned int len, u8 *iv);
+asmlinkage void aesni_cbc_dec(struct crypto_aes_ctx *ctx, u8 *out,
+			      const u8 *in, unsigned int len, u8 *iv);
+
+#ifdef CONFIG_X86_64
+asmlinkage int __aeskl_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
+			       const u8 *in, unsigned int len, u8 *iv);
+asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
+			      const u8 *in, unsigned int len, u8 *iv);
+asmlinkage int __aeskl_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *out,
+				  const u8 *in, bool enc, u8 *iv);
+asmlinkage void aesni_xts_crypt8(const struct crypto_aes_ctx *ctx, u8 *out,
+				 const u8 *in, bool enc, u8 *iv);
+#endif
+
+static inline void aeskl_enc1(const void *ctx, u8 *out, const u8 *in)
+{
+	int err;
+
+	err = __aeskl_enc1(ctx, out, in);
+	if (err)
+		pr_err("aes-kl: invalid handle\n");
+}
+
+static inline void aeskl_dec1(const void *ctx, u8 *out, const u8 *in)
+{
+	int err;
+
+	err = __aeskl_dec1(ctx, out, in);
+	if (err)
+		pr_err("aes-kl: invalid handle\n");
+}
+
+static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
+{
+	unsigned long addr = (unsigned long)raw_ctx;
+	unsigned long align = AESKL_ALIGN;
+
+	if (align <= crypto_tfm_ctx_alignment())
+		align = 1;
+	return (struct crypto_aes_ctx *)ALIGN(addr, align);
+}
+
+static int aeskl_setkey_common(struct crypto_tfm *tfm, void *raw_ctx, const u8 *in_key,
+			       unsigned int key_len)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
+	int err;
+
+	if (!crypto_simd_usable())
+		return -EBUSY;
+
+	/*
+	 * 192-bit key is not supported by Key Locker. Fall back to
+	 * the AES-NI implementation.
+	 */
+	if (unlikely(key_len == AES_KEYSIZE_192)) {
+		kernel_fpu_begin();
+		err = aesni_set_key(ctx, in_key, key_len);
+		kernel_fpu_end();
+		return err;
+	}
+
+	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_256)
+		return -EINVAL;
+
+	kernel_fpu_begin();
+	/* Encode the key to a handle, only usable at ring 0 */
+	err = __aeskl_setkey(ctx, in_key, key_len);
+	kernel_fpu_end();
+
+	return err;
+}
+
+static int aeskl_setkey(struct crypto_tfm *tfm, const u8 *in_key, unsigned int key_len)
+{
+	return aeskl_setkey_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
+}
+
+static void aeskl_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
+	int err = 0;
+
+	if (!crypto_simd_usable())
+		return;
+
+	kernel_fpu_begin();
+	/* 192-bit key not supported, fall back to AES-NI.*/
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		aesni_enc(ctx, dst, src);
+	else
+		err = __aeskl_enc1(ctx, dst, src);
+	kernel_fpu_end();
+
+	if (err)
+		pr_err("aes-kl (encrypt): invalid handle\n");
+}
+
+static void aeskl_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
+	int err = 0;
+
+	if (!crypto_simd_usable())
+		return;
+
+	kernel_fpu_begin();
+	/* 192-bit key not supported, fall back to AES-NI */
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		aesni_dec(ctx, dst, src);
+	else
+		err = __aeskl_dec1(ctx, dst, src);
+	kernel_fpu_end();
+
+	if (err)
+		pr_err("aes-kl (decrypt): invalid handle\n");
+}
+
+static int aeskl_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+				 unsigned int len)
+{
+	return aeskl_setkey_common(crypto_skcipher_tfm(tfm),
+				   crypto_skcipher_ctx(tfm), key, len);
+}
+
+static int ecb_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm;
+	struct crypto_aes_ctx *ctx;
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+
+	tfm = crypto_skcipher_reqtfm(req);
+	ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+
+	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
+
+	while ((nbytes = walk.nbytes)) {
+		unsigned int len = nbytes & AES_BLOCK_MASK;
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_ecb_enc(ctx, dst, src, len);
+		else
+			err = __aeskl_ecb_enc(ctx, dst, src, len);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+			return -EINVAL;
+		}
+
+		nbytes &= AES_BLOCK_SIZE - 1;
+
+		err = skcipher_walk_done(&walk, nbytes);
+		if (err)
+			return err;
+	}
+
+	return err;
+}
+
+static int ecb_decrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm;
+	struct crypto_aes_ctx *ctx;
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+
+	tfm = crypto_skcipher_reqtfm(req);
+	ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+
+	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
+
+	while ((nbytes = walk.nbytes)) {
+		unsigned int len = nbytes & AES_BLOCK_MASK;
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_ecb_dec(ctx, dst, src, len);
+		else
+			err = __aeskl_ecb_dec(ctx, dst, src, len);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+			return -EINVAL;
+		}
+
+		nbytes &= AES_BLOCK_SIZE - 1;
+
+		err = skcipher_walk_done(&walk, nbytes);
+		if (err)
+			return err;
+	}
+
+	return err;
+}
+
+static int cbc_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm;
+	struct crypto_aes_ctx *ctx;
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+
+	tfm = crypto_skcipher_reqtfm(req);
+	ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
+
+	while ((nbytes = walk.nbytes)) {
+		unsigned int len = nbytes & AES_BLOCK_MASK;
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *iv = walk.iv;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_cbc_enc(ctx, dst, src, len, iv);
+		else
+			err = __aeskl_cbc_enc(ctx, dst, src, len, iv);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+			return -EINVAL;
+		}
+
+		nbytes &= AES_BLOCK_SIZE - 1;
+
+		err = skcipher_walk_done(&walk, nbytes);
+		if (err)
+			return err;
+	}
+
+	return err;
+}
+
+static int cbc_decrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm;
+	struct crypto_aes_ctx *ctx;
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+
+	tfm = crypto_skcipher_reqtfm(req);
+	ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
+
+	while ((nbytes = walk.nbytes)) {
+		unsigned int len = nbytes & AES_BLOCK_MASK;
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *iv = walk.iv;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_cbc_dec(ctx, dst, src, len, iv);
+		else
+			err = __aeskl_cbc_dec(ctx, dst, src, len, iv);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+			return -EINVAL;
+		}
+
+		nbytes &= AES_BLOCK_SIZE - 1;
+
+		err = skcipher_walk_done(&walk, nbytes);
+		if (err)
+			return err;
+	}
+
+	return err;
+}
+
+#ifdef CONFIG_X86_64
+static int ctr_crypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm;
+	struct crypto_aes_ctx *ctx;
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+
+	tfm = crypto_skcipher_reqtfm(req);
+	ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+
+	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
+
+	while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
+		unsigned int len = nbytes & AES_BLOCK_MASK;
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *iv = walk.iv;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_ctr_enc(ctx, dst, src, len, iv);
+		else
+			err = __aeskl_ctr_enc(ctx, dst, src, len, iv);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
+			return -EINVAL;
+		}
+
+		nbytes &= AES_BLOCK_SIZE - 1;
+
+		err = skcipher_walk_done(&walk, nbytes);
+		if (err)
+			return err;
+	}
+
+	if (nbytes) {
+		u8 keystream[AES_BLOCK_SIZE];
+		u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		u8 *ctrblk = walk.iv;
+
+		kernel_fpu_begin();
+		if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+			aesni_enc(ctx, keystream, ctrblk);
+		else
+			err = __aeskl_enc1(ctx, keystream, ctrblk);
+		kernel_fpu_end();
+
+		if (err) {
+			skcipher_walk_done(&walk, 0);
+			return -EINVAL;
+		}
+
+		crypto_xor(keystream, src, nbytes);
+		memcpy(dst, keystream, nbytes);
+		crypto_inc(ctrblk, AES_BLOCK_SIZE);
+
+		err = skcipher_walk_done(&walk, 0);
+	}
+
+	return err;
+}
+
+static int aeskl_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+			    unsigned int keylen)
+{
+	struct aeskl_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err;
+
+	err = xts_verify_key(tfm, key, keylen);
+	if (err)
+		return err;
+
+	keylen /= 2;
+
+	/* first half of xts-key is for crypt */
+	err = aeskl_setkey_common(crypto_skcipher_tfm(tfm), ctx->raw_crypt_ctx, key, keylen);
+	if (err)
+		return err;
+
+	/* second half of xts-key is for tweak */
+	return aeskl_setkey_common(crypto_skcipher_tfm(tfm), ctx->raw_tweak_ctx, key + keylen,
+				   keylen);
+}
+
+static void aeskl_xts_tweak(const void *raw_ctx, u8 *out, const u8 *in)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
+
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		aesni_enc(raw_ctx, out, in);
+	else
+		aeskl_enc1(raw_ctx, out, in);
+}
+
+static void aeskl_xts_enc1(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
+	common_glue_func_t fn = aeskl_enc1;
+
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		fn = aesni_enc;
+
+	glue_xts_crypt_128bit_one(raw_ctx, dst, src, iv, fn);
+}
+
+static void aeskl_xts_dec1(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
+	common_glue_func_t fn = aeskl_dec1;
+
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		fn = aesni_dec;
+
+	glue_xts_crypt_128bit_one(raw_ctx, dst, src, iv, fn);
+}
+
+static void aeskl_xts_enc8(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
+	int err = 0;
+
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		aesni_xts_crypt8(raw_ctx, dst, src, true, (u8 *)iv);
+	else
+		err = __aeskl_xts_crypt8(raw_ctx, dst, src, true, (u8 *)iv);
+
+	if (err)
+		pr_err("aes-kl (XTS encrypt): invalid handle\n");
+}
+
+static void aeskl_xts_dec8(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
+{
+	struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
+	int err = 0;
+
+	if (unlikely(ctx->key_length == AES_KEYSIZE_192))
+		aesni_xts_crypt8(raw_ctx, dst, src, false, (u8 *)iv);
+	else
+		err = __aeskl_xts_crypt8(raw_ctx, dst, src, false, (u8 *)iv);
+
+	if (err)
+		pr_err("aes-kl (XTS decrypt): invalid handle\n");
+}
+
+static const struct common_glue_ctx aeskl_xts_enc = {
+	.num_funcs = 2,
+	.fpu_blocks_limit = 1,
+
+	.funcs = { {
+		.num_blocks = 8,
+		.fn_u = { .xts = aeskl_xts_enc8 }
+	}, {
+		.num_blocks = 1,
+		.fn_u = { .xts = aeskl_xts_enc1 }
+	} }
+};
+
+static const struct common_glue_ctx aeskl_xts_dec = {
+	.num_funcs = 2,
+	.fpu_blocks_limit = 1,
+
+	.funcs = { {
+		.num_blocks = 8,
+		.fn_u = { .xts = aeskl_xts_dec8 }
+	}, {
+		.num_blocks = 1,
+		.fn_u = { .xts = aeskl_xts_dec1 }
+	} }
+};
+
+static int xts_crypt(struct skcipher_request *req, bool decrypt)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct aeskl_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	const struct common_glue_ctx *gctx;
+
+	if (decrypt)
+		gctx = &aeskl_xts_dec;
+	else
+		gctx = &aeskl_xts_enc;
+
+	return glue_xts_req_128bit(gctx, req, aeskl_xts_tweak,
+				   aes_ctx(ctx->raw_tweak_ctx),
+				   aes_ctx(ctx->raw_crypt_ctx),
+				   decrypt);
+}
+
+static int xts_encrypt(struct skcipher_request *req)
+{
+	return xts_crypt(req, false);
+}
+
+static int xts_decrypt(struct skcipher_request *req)
+{
+	return xts_crypt(req, true);
+}
+#endif
+
+static struct crypto_alg aeskl_cipher_alg = {
+	.cra_name		= "aes",
+	.cra_driver_name	= "aes-aeskl",
+	.cra_priority		= 301,
+	.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize		= AES_BLOCK_SIZE,
+	.cra_ctxsize		= CRYPTO_AESKL_CTX_SIZE,
+	.cra_module		= THIS_MODULE,
+	.cra_u	= {
+		.cipher	= {
+			.cia_min_keysize	= AES_MIN_KEY_SIZE,
+			.cia_max_keysize	= AES_MAX_KEY_SIZE,
+			.cia_setkey		= aeskl_setkey,
+			.cia_encrypt		= aeskl_encrypt,
+			.cia_decrypt		= aeskl_decrypt
+		}
+	}
+};
+
+static struct skcipher_alg aeskl_skciphers[] = {
+	{
+		.base = {
+			.cra_name		= "__ecb(aes)",
+			.cra_driver_name	= "__ecb-aes-aeskl",
+			.cra_priority		= 401,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= CRYPTO_AESKL_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.setkey		= aeskl_skcipher_setkey,
+		.encrypt	= ecb_encrypt,
+		.decrypt	= ecb_decrypt,
+	}, {
+		.base = {
+			.cra_name		= "__cbc(aes)",
+			.cra_driver_name	= "__cbc-aes-aeskl",
+			.cra_priority		= 401,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= CRYPTO_AESKL_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= aeskl_skcipher_setkey,
+		.encrypt	= cbc_encrypt,
+		.decrypt	= cbc_decrypt,
+#ifdef CONFIG_X86_64
+	}, {
+		.base = {
+			.cra_name		= "__ctr(aes)",
+			.cra_driver_name	= "__ctr-aes-aeskl",
+			.cra_priority		= 401,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= 1,
+			.cra_ctxsize		= CRYPTO_AESKL_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.chunksize	= AES_BLOCK_SIZE,
+		.setkey		= aeskl_skcipher_setkey,
+		.encrypt	= ctr_crypt,
+		.decrypt	= ctr_crypt,
+	}, {
+		.base = {
+			.cra_name		= "__xts(aes)",
+			.cra_driver_name	= "__xts-aes-aeskl",
+			.cra_priority		= 402,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= AES_BLOCK_SIZE,
+			.cra_ctxsize		= XTS_AESKL_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= 2 * AES_MIN_KEY_SIZE,
+		.max_keysize	= 2 * AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.setkey		= aeskl_xts_setkey,
+		.encrypt	= xts_encrypt,
+		.decrypt	= xts_decrypt,
+#endif
+	}
+};
+
+static struct simd_skcipher_alg *aeskl_simd_skciphers[ARRAY_SIZE(aeskl_skciphers)];
+
+static const struct x86_cpu_id aes_keylocker_cpuid[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_AES, NULL),
+	X86_MATCH_FEATURE(X86_FEATURE_KEYLOCKER, NULL),
+	{}
+};
+
+static int __init aeskl_init(void)
+{
+	u32 eax, ebx, ecx, edx;
+	int err;
+
+	if (!x86_match_cpu(aes_keylocker_cpuid))
+		return -ENODEV;
+
+	cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
+	if (!(ebx & KEYLOCKER_CPUID_EBX_AESKLE) ||
+	    !(eax & KEYLOCKER_CPUID_EAX_SUPERVISOR) ||
+	    !(ebx & KEYLOCKER_CPUID_EBX_WIDE))
+		return -ENODEV;
+
+	err = crypto_register_alg(&aeskl_cipher_alg);
+	if (err)
+		return err;
+
+	err = simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
+					     aeskl_simd_skciphers);
+	if (err)
+		goto unregister_algs;
+
+	return 0;
+
+unregister_algs:
+	crypto_unregister_alg(&aeskl_cipher_alg);
+
+	return err;
+}
+
+static void __exit aeskl_exit(void)
+{
+	simd_unregister_skciphers(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
+				  aeskl_simd_skciphers);
+	crypto_unregister_alg(&aeskl_cipher_alg);
+}
+
+late_initcall(aeskl_init);
+module_exit(aeskl_exit);
+
+MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, AES Key Locker implementation");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("aes");
diff --git a/arch/x86/include/asm/inst.h b/arch/x86/include/asm/inst.h
index bd7f02480ca1..b719a11a2905 100644
--- a/arch/x86/include/asm/inst.h
+++ b/arch/x86/include/asm/inst.h
@@ -122,9 +122,62 @@
 #endif
 	.endm
 
+	.macro XMM_NUM opd xmm
+	\opd = REG_NUM_INVALID
+	.ifc \xmm,%xmm0
+	\opd = 0
+	.endif
+	.ifc \xmm,%xmm1
+	\opd = 1
+	.endif
+	.ifc \xmm,%xmm2
+	\opd = 2
+	.endif
+	.ifc \xmm,%xmm3
+	\opd = 3
+	.endif
+	.ifc \xmm,%xmm4
+	\opd = 4
+	.endif
+	.ifc \xmm,%xmm5
+	\opd = 5
+	.endif
+	.ifc \xmm,%xmm6
+	\opd = 6
+	.endif
+	.ifc \xmm,%xmm7
+	\opd = 7
+	.endif
+	.ifc \xmm,%xmm8
+	\opd = 8
+	.endif
+	.ifc \xmm,%xmm9
+	\opd = 9
+	.endif
+	.ifc \xmm,%xmm10
+	\opd = 10
+	.endif
+	.ifc \xmm,%xmm11
+	\opd = 11
+	.endif
+	.ifc \xmm,%xmm12
+	\opd = 12
+	.endif
+	.ifc \xmm,%xmm13
+	\opd = 13
+	.endif
+	.ifc \xmm,%xmm14
+	\opd = 14
+	.endif
+	.ifc \xmm,%xmm15
+	\opd = 15
+	.endif
+	.endm
+
 	.macro REG_TYPE type reg
 	R32_NUM reg_type_r32 \reg
 	R64_NUM reg_type_r64 \reg
+	XMM_NUM reg_type_xmm \reg
 	.if reg_type_r64 <> REG_NUM_INVALID
 	\type = REG_TYPE_R64
 	.elseif reg_type_r32 <> REG_NUM_INVALID
@@ -134,6 +187,14 @@
 	.endif
 	.endm
 
+	.macro PFX_OPD_SIZE
+	.byte 0x66
+	.endm
+
+	.macro PFX_RPT
+	.byte 0xf3
+	.endm
+
 	.macro PFX_REX opd1 opd2 W=0
 	.if ((\opd1 | \opd2) & 8) || \W
 	.byte 0x40 | ((\opd1 & 8) >> 3) | ((\opd2 & 8) >> 1) | (\W << 3)
@@ -158,6 +219,146 @@
 	.byte 0x0f, 0xc7
 	MODRM 0xc0 rdpid_opd 0x7
 .endm
+
+	.macro ENCODEKEY128 reg1 reg2
+	R32_NUM encodekey128_opd1 \reg1
+	R32_NUM encodekey128_opd2 \reg2
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xfa
+	MODRM 0xc0 encodekey128_opd2 encodekey128_opd1
+	.endm
+
+	.macro ENCODEKEY256 reg1 reg2
+	R32_NUM encodekey256_opd1 \reg1
+	R32_NUM encodekey256_opd2 \reg2
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xfb
+	MODRM 0xc0 encodekey256_opd1 encodekey256_opd2
+	.endm
+
+	.macro AESENC128KL reg, xmm
+	REG_TYPE aesenc128kl_opd1_type \reg
+	.if aesenc128kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesenc128kl_opd1 \reg
+	.elseif aesenc128kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesenc128kl_opd1 \reg
+	.else
+	aesenc128kl_opd1 = REG_NUM_INVALID
+	.endif
+	XMM_NUM aesenc128kl_opd2 \xmm
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xdc
+	MODRM 0x0 aesenc128kl_opd1 aesenc128kl_opd2
+	.endm
+
+	.macro AESDEC128KL reg, xmm
+	REG_TYPE aesdec128kl_opd1_type \reg
+	.if aesdec128kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesdec128kl_opd1 \reg
+	.elseif aesdec128kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesdec128kl_opd1 \reg
+	.else
+	aesdec128kl_opd1 = REG_NUM_INVALID
+	.endif
+	XMM_NUM aesdec128kl_opd2 \xmm
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xdd
+	MODRM 0x0 aesdec128kl_opd1 aesdec128kl_opd2
+	.endm
+
+	.macro AESENC256KL reg, xmm
+	REG_TYPE aesenc256kl_opd1_type \reg
+	.if aesenc256kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesenc256kl_opd1 \reg
+	.elseif aesenc256kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesenc256kl_opd1 \reg
+	.else
+	aesenc256kl_opd1 = REG_NUM_INVALID
+	.endif
+	XMM_NUM aesenc256kl_opd2 \xmm
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xde
+	MODRM 0x0 aesenc256kl_opd1 aesenc256kl_opd2
+	.endm
+
+	.macro AESDEC256KL reg, xmm
+	REG_TYPE aesdec256kl_opd1_type \reg
+	.if aesdec256kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesdec256kl_opd1 \reg
+	.elseif aesdec256kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesdec256kl_opd1 \reg
+	.else
+	aesdec256kl_opd1 = REG_NUM_INVALID
+	.endif
+	XMM_NUM aesdec256kl_opd2 \xmm
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xdf
+	MODRM 0x0 aesdec256kl_opd1 aesdec256kl_opd2
+	.endm
+
+	.macro AESENCWIDE128KL reg
+	REG_TYPE aesencwide128kl_opd1_type \reg
+	.if aesencwide128kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesencwide128kl_opd1 \reg
+	.elseif aesencwide128kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesencwide128kl_opd1 \reg
+	.else
+	aesencwide128kl_opd1 = REG_NUM_INVALID
+	.endif
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xd8
+	MODRM 0x0 aesencwide128kl_opd1 0x0
+	.endm
+
+	.macro AESDECWIDE128KL reg
+	REG_TYPE aesdecwide128kl_opd1_type \reg
+	.if aesdecwide128kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesdecwide128kl_opd1 \reg
+	.elseif aesdecwide128kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesdecwide128kl_opd1 \reg
+	.else
+	aesdecwide128kl_opd1 = REG_NUM_INVALID
+	.endif
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xd8
+	MODRM 0x0 aesdecwide128kl_opd1 0x1
+	.endm
+
+	.macro AESENCWIDE256KL reg
+	REG_TYPE aesencwide256kl_opd1_type \reg
+	.if aesencwide256kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesencwide256kl_opd1 \reg
+	.elseif aesencwide256kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesencwide256kl_opd1 \reg
+	.else
+	aesencwide256kl_opd1 = REG_NUM_INVALID
+	.endif
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xd8
+	MODRM 0x0 aesencwide256kl_opd1 0x2
+	.endm
+
+	.macro AESDECWIDE256KL reg
+	REG_TYPE aesdecwide256kl_opd1_type \reg
+	.if aesdecwide256kl_opd1_type == REG_TYPE_R64
+	R64_NUM aesdecwide256kl_opd1 \reg
+	.elseif aesdecwide256kl_opd1_type == REG_TYPE_R32
+	R32_NUM aesdecwide256kl_opd1 \reg
+	.else
+	aesdecwide256kl_opd1 = REG_NUM_INVALID
+	.endif
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xd8
+	MODRM 0x0 aesdecwide256kl_opd1 0x3
+	.endm
+
+	.macro LOADIWKEY xmm1, xmm2
+	XMM_NUM loadiwkey_opd1 \xmm1
+	XMM_NUM loadiwkey_opd2 \xmm2
+	PFX_RPT
+	.byte 0x0f, 0x38, 0xdc
+	MODRM 0xc0 loadiwkey_opd1 loadiwkey_opd2
+	.endm
 #endif
 
 #endif
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 094ef56ab7b4..75a184179c72 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1159,6 +1159,34 @@ config CRYPTO_AES_NI_INTEL
 	  ECB, CBC, LRW, XTS. The 64 bit version has additional
 	  acceleration for CTR.
 
+config CRYPTO_AES_KL
+	tristate "AES cipher algorithms (AES-KL)"
+	depends on X86_KEYLOCKER
+	select CRYPTO_AES_NI_INTEL
+	help
+	  Use AES Key Locker instructions for the AES algorithm.
+
+	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
+	  algorithm.
+
+	  Rijndael appears to be consistently a very good performer in both
+	  hardware and software across a wide range of computing
+	  environments regardless of its use in feedback or non-feedback
+	  modes. Its key setup time is excellent, and its key agility is
+	  good. Rijndael's very low memory requirements make it very well
+	  suited for restricted-space environments, in which it also
+	  demonstrates excellent performance. Rijndael's operations are
+	  among the easiest to defend against power and timing attacks.
+
+	  The AES specifies three key sizes: 128, 192 and 256 bits
+
+	  See <http://csrc.nist.gov/encryption/aes/> for more information.
+
+	  For 128- and 256-bit keys, the AES cipher algorithm is
+	  implemented with AES Key Locker instructions. Once the key is
+	  wrapped into an encoded form (handle), the original key is no
+	  longer needed. For AES compliance, 192-bit keys use AES-NI.
+
 config CRYPTO_AES_SPARC64
 	tristate "AES cipher algorithms (SPARC64)"
 	depends on SPARC64
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [RFC PATCH 8/8] x86/cpu: Support the hardware randomization option for Key Locker internal key
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (6 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
@ 2020-12-16 17:41 ` Chang S. Bae
  2020-12-17 19:10 ` [RFC PATCH 0/8] x86: Support Intel Key Locker Eric Biggers
  2020-12-19 18:59 ` Andy Lutomirski
  9 siblings, 0 replies; 30+ messages in thread
From: Chang S. Bae @ 2020-12-16 17:41 UTC (permalink / raw)
  To: tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, linux-crypto, linux-kernel, chang.seok.bae,
	Mark Brown, linux-doc

The hardware can load the internal key with its own randomization. The
random.trust_cpu parameter already expresses whether the CPU's random
number generator is trusted, so take that parameter to decide whether to
use the hardware randomization for the internal key as well.

The key backup mechanism is required to distribute a hardware-randomized
key, as it is the only way to copy the (unknown) key value to the other
CPUs.

The randomization option is therefore disabled when the hardware does not
support key backup.
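
For illustration only, the gating described above reduces to a single
predicate (hypothetical helper name; the real checks live in
make_keylocker_data() in the diff below):

	/*
	 * Sketch: hardware key randomization is used only when the
	 * administrator trusts the CPU RNG (random.trust_cpu=on), the CPU
	 * advertises the randomization capability, and key backup is
	 * available to copy the otherwise unknown key to the other CPUs.
	 */
	static bool want_hwrand_key(bool trust_cpu, bool hwrand_capable,
				    bool backup_capable)
	{
		return trust_cpu && hwrand_capable && backup_capable;
	}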

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: x86@kernel.org
Cc: linux-doc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 arch/x86/include/asm/keylocker.h |  2 +-
 arch/x86/kernel/cpu/common.c     |  3 ++-
 arch/x86/kernel/keylocker.c      | 31 ++++++++++++++++++++++++++++---
 drivers/char/random.c            |  6 ++++++
 include/linux/random.h           |  2 ++
 5 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
index 722574c305c2..a6774ced916a 100644
--- a/arch/x86/include/asm/keylocker.h
+++ b/arch/x86/include/asm/keylocker.h
@@ -19,7 +19,7 @@ bool check_keylocker_readiness(void);
 
 bool load_keylocker(void);
 
-void make_keylocker_data(void);
+void make_keylocker_data(bool use_hwrand);
 #ifdef CONFIG_X86_KEYLOCKER
 void invalidate_keylocker_data(void);
 #else
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index ba5bd79fbac2..48881d8ea559 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -485,12 +485,13 @@ static __always_inline void setup_keylocker(struct cpuinfo_x86 *c)
 	cr4_set_bits(X86_CR4_KEYLOCKER);
 
 	if (c == &boot_cpu_data) {
+		bool use_hwrand = check_random_trust_cpu();
 		bool keyloaded;
 
 		if (!check_keylocker_readiness())
 			goto disable_keylocker;
 
-		make_keylocker_data();
+		make_keylocker_data(use_hwrand);
 
 		keyloaded = load_keylocker();
 		if (!keyloaded) {
diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
index 229875ac80d5..e77e4c3d785e 100644
--- a/arch/x86/kernel/keylocker.c
+++ b/arch/x86/kernel/keylocker.c
@@ -13,6 +13,7 @@
 #include <asm/fpu/api.h>
 
 static bool keybackup_available;
+static bool keyhwrand_available;
 
 bool check_keylocker_readiness(void)
 {
@@ -33,25 +34,33 @@ bool check_keylocker_readiness(void)
 		pr_debug("x86/keylocker: no key backup support with possible S3/4\n");
 		return false;
 	}
+
+	keyhwrand_available = (ecx & KEYLOCKER_CPUID_ECX_RAND);
 	return true;
 }
 
 /* Load Internal (Wrapping) Key */
 #define LOADIWKEY		".byte 0xf3,0x0f,0x38,0xdc,0xd1"
 #define LOADIWKEY_NUM_OPERANDS	3
+#define LOADIWKEY_HWRAND_RETRY	10
 
 static struct key {
 	bool valid;
+	bool hwrand;
 	struct reg_128_bit value[LOADIWKEY_NUM_OPERANDS];
 } keydata;
 
-void make_keylocker_data(void)
+void make_keylocker_data(bool use_hwrand)
 {
 	int i;
 
 	for (i = 0; i < LOADIWKEY_NUM_OPERANDS; i++)
 		get_random_bytes(&keydata.value[i], sizeof(struct reg_128_bit));
 
+	keydata.hwrand = (use_hwrand && keyhwrand_available && keybackup_available);
+	if (use_hwrand && !keydata.hwrand)
+		pr_warn("x86/keylocker: hardware random key not fully supported\n");
+
 	keydata.valid = true;
 }
 
@@ -63,12 +72,22 @@ void invalidate_keylocker_data(void)
 }
 
 #define USE_SWKEY	0
+#define USE_HWRANDKEY	BIT(1)
 
 bool load_keylocker(void)
 {
 	struct reg_128_bit zeros = { 0 };
-	u32 keysrc = USE_SWKEY;
 	bool err = true;
+	u32 keysrc;
+	int retry;
+
+	if (keydata.hwrand) {
+		keysrc = USE_HWRANDKEY;
+		retry = LOADIWKEY_HWRAND_RETRY;
+	} else {
+		keysrc = USE_SWKEY;
+		retry = 0;
+	}
 
 	kernel_fpu_begin();
 
@@ -77,13 +96,19 @@ bool load_keylocker(void)
 			 "m"(keydata.value[1]),
 			 "m"(keydata.value[2]));
 
-	asm volatile (LOADIWKEY CC_SET(z) : CC_OUT(z) (err) : "a"(keysrc));
+	do {
+		asm volatile (LOADIWKEY CC_SET(z) : CC_OUT(z) (err) : "a"(keysrc));
+		retry--;
+	} while (err && retry >= 0);
 
 	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
 		      :: "m"(zeros));
 
 	kernel_fpu_end();
 
+	if (keydata.hwrand)
+		invalidate_keylocker_data();
+
 	return err ? false : true;
 }
 
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2a41b21623ae..3ee0d659ab2a 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -781,6 +781,12 @@ static int __init parse_trust_cpu(char *arg)
 }
 early_param("random.trust_cpu", parse_trust_cpu);
 
+bool check_random_trust_cpu(void)
+{
+	return trust_cpu;
+}
+EXPORT_SYMBOL(check_random_trust_cpu);
+
 static bool crng_init_try_arch(struct crng_state *crng)
 {
 	int		i;
diff --git a/include/linux/random.h b/include/linux/random.h
index f45b8be3e3c4..f08f44988b13 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -158,4 +158,6 @@ static inline bool __init arch_get_random_long_early(unsigned long *v)
 }
 #endif
 
+extern bool check_random_trust_cpu(void);
+
 #endif /* _LINUX_RANDOM_H */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
@ 2020-12-17 10:16   ` Ard Biesheuvel
  2021-05-14 20:36     ` Bae, Chang Seok
  2020-12-17 20:54   ` Andy Lutomirski
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 30+ messages in thread
From: Ard Biesheuvel @ 2020-12-17 10:16 UTC (permalink / raw)
  To: Chang S. Bae, Herbert Xu
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	X86 ML, Dan Williams, dave.hansen, ravi.v.shankar, ning.sun,
	kumar.n.dwarakanath, Linux Crypto Mailing List,
	Linux Kernel Mailing List

Hello Chang,

On Wed, 16 Dec 2020 at 18:47, Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> Key Locker (KL) is Intel's new security feature that protects the AES key
> at the time of data transformation. New AES SIMD instructions -- as a
> successor of Intel's AES-NI -- are provided to encode an AES key and
> reference it for the AES algorithm.
>
> New instructions support 128/256-bit keys. While it is not desirable to
> receive any 192-bit key, AES-NI instructions are taken to serve this size.
>
> New instructions are operational in both 32-/64-bit modes.
>
> Add a set of new macros for the new instructions so that no new binutils
> version is required.
>
> Implemented methods are for a single block as well as ECB, CBC, CTR, and
> XTS modes. The methods are not compatible with other AES implementations as
> accessing an encrypted key instead of the normal AES key.
>
> setkey() call encodes an AES key. User may displace the AES key once
> encoded, as encrypt()/decrypt() methods do not need the key.
>
> Most C code follows the AES-NI implementation. It has higher priority than
> the AES-NI as providing key protection.
>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: Herbert Xu <herbert@gondor.apana.org.au>
> Cc: x86@kernel.org
> Cc: linux-crypto@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  arch/x86/crypto/Makefile           |   3 +
>  arch/x86/crypto/aeskl-intel_asm.S  | 881 +++++++++++++++++++++++++++++
>  arch/x86/crypto/aeskl-intel_glue.c | 697 +++++++++++++++++++++++
>  arch/x86/include/asm/inst.h        | 201 +++++++
>  crypto/Kconfig                     |  28 +
>  5 files changed, 1810 insertions(+)

We will need to refactor this - cloning the entire driver and just
replacing aes-ni with aes-kl is a maintenance nightmare.

Please refer to the arm64 tree for an example of how to combine chaining
mode routines implemented in assembler with different implementations
of the core AES transforms (aes-modes.S is combined with either
aes-ce.S or aes-neon.S to produce two different drivers).
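
To make the suggested structure concrete, here is a toy, userspace-style C
sketch (invented names, not the actual arm64 code): the mode loop is
written once and parameterized by the core block transform that each
driver supplies.

	#include <stdint.h>
	#include <stddef.h>

	/* Core single-block transform, supplied per implementation. */
	typedef void (*aes_block_fn)(const void *key, uint8_t out[16],
				     const uint8_t in[16]);

	/* Shared CTR loop: written once, reused by every core. */
	static void ctr_crypt(const void *key, aes_block_fn block, uint8_t *dst,
			      const uint8_t *src, size_t len, uint8_t ctr[16])
	{
		uint8_t ks[16];
		size_t i, n;

		while (len) {
			n = len < 16 ? len : 16;
			block(key, ks, ctr);		/* keystream block */
			for (i = 0; i < n; i++)
				dst[i] = src[i] ^ ks[i];
			for (i = 16; i-- > 0 && ++ctr[i] == 0;)
				;			/* big-endian increment */
			dst += n; src += n; len -= n;
		}
	}

	/* Each driver would plug in its own core transform, e.g. an
	 * AES-NI-backed block function or an AES-KL-backed one:
	 *	ctr_crypt(key, aesni_style_block, dst, src, len, ctr);
	 *	ctr_crypt(key, aeskl_style_block, dst, src, len, ctr);
	 */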

...
> diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c
> new file mode 100644
> index 000000000000..9e3f900ad4af
> --- /dev/null
> +++ b/arch/x86/crypto/aeskl-intel_glue.c
> @@ -0,0 +1,697 @@
...
> +static void aeskl_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
> +       int err = 0;
> +
> +       if (!crypto_simd_usable())
> +               return;
> +

It is clear that AES-KL cannot be handled by a fallback algorithm,
given that the key is no longer available. But that doesn't mean that
you can just give up like this.

This basically implies that we cannot expose the cipher interface at
all, and so AES-KL can only be used by callers that use the
asynchronous interface, which rules out 802.11, s/w kTLS, macsec and
kerberos.
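
(Illustration, not from the patch: this is roughly what "using the
asynchronous interface" looks like for a caller. The single-block cipher
hooks return void and cannot report failure or defer work, whereas an
skcipher request returns an error and may complete asynchronously, e.g.
via cryptd. Only standard crypto API calls are used; the wrapper function
itself is invented.)

	/* Needs <crypto/skcipher.h>, <linux/scatterlist.h>, <linux/err.h>.
	 * buf must be linear, non-stack memory, a multiple of 16 bytes.
	 */
	static int example_ecb_encrypt(const u8 *key, unsigned int keylen,
				       u8 *buf, unsigned int len)
	{
		struct crypto_skcipher *tfm;
		struct skcipher_request *req;
		struct scatterlist sg;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		tfm = crypto_alloc_skcipher("ecb(aes)", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_skcipher_setkey(tfm, key, keylen);
		if (err)
			goto out_free_tfm;

		req = skcipher_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_free_tfm;
		}

		sg_init_one(&sg, buf, len);
		skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					      CRYPTO_TFM_REQ_MAY_SLEEP,
					      crypto_req_done, &wait);
		skcipher_request_set_crypt(req, &sg, &sg, len, NULL);

		/* May be completed asynchronously; wait for the result. */
		err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

		skcipher_request_free(req);
	out_free_tfm:
		crypto_free_skcipher(tfm);
		return err;
	}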

This ties in to a related discussion that is going on about when to
allow kernel mode SIMD. I am currently investigating whether we can
change the rules a bit so that crypto_simd_usable() is guaranteed to
be true.





> +       kernel_fpu_begin();
> +       /* 192-bit key not supported, fall back to AES-NI.*/
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               aesni_enc(ctx, dst, src);
> +       else
> +               err = __aeskl_enc1(ctx, dst, src);
> +       kernel_fpu_end();
> +
> +       if (err)
> +               pr_err("aes-kl (encrypt): invalid handle\n");
> +}
> +
> +static void aeskl_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
> +       int err = 0;
> +
> +       if (!crypto_simd_usable())
> +               return;
> +
> +       kernel_fpu_begin();
> +       /* 192-bit key not supported, fall back to AES-NI */
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               aesni_dec(ctx, dst, src);
> +       else
> +               err = __aeskl_dec1(ctx, dst, src);
> +       kernel_fpu_end();
> +
> +       if (err)
> +               pr_err("aes-kl (encrypt): invalid handle\n");
> +}
> +
> +static int aeskl_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
> +                                unsigned int len)
> +{
> +       return aeskl_setkey_common(crypto_skcipher_tfm(tfm),
> +                                  crypto_skcipher_ctx(tfm), key, len);
> +}
> +
> +static int ecb_encrypt(struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm;
> +       struct crypto_aes_ctx *ctx;
> +       struct skcipher_walk walk;
> +       unsigned int nbytes;
> +       int err;
> +
> +       tfm = crypto_skcipher_reqtfm(req);
> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> +
> +       err = skcipher_walk_virt(&walk, req, true);
> +       if (err)
> +               return err;
> +
> +       while ((nbytes = walk.nbytes)) {
> +               unsigned int len = nbytes & AES_BLOCK_MASK;
> +               const u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_ecb_enc(ctx, dst, src, len);

Could we please use a proper fallback here, and relay the entire request?
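
(A hedged sketch of the kind of fallback relay being asked for, with
invented struct/function names. The fallback tfm would be allocated in
init_tfm() with CRYPTO_ALG_NEEDS_FALLBACK and its request size added to
the reqsize; only standard skcipher API calls are used.)

	struct aeskl_fb_ctx {
		struct crypto_skcipher *fallback;
	};

	static int aeskl_fb_encrypt(struct skcipher_request *req)
	{
		struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
		struct aeskl_fb_ctx *ctx = crypto_skcipher_ctx(tfm);
		struct skcipher_request *subreq = skcipher_request_ctx(req);

		/* Relay the whole request to the fallback implementation. */
		skcipher_request_set_tfm(subreq, ctx->fallback);
		skcipher_request_set_callback(subreq, skcipher_request_flags(req),
					      req->base.complete, req->base.data);
		skcipher_request_set_crypt(subreq, req->src, req->dst,
					   req->cryptlen, req->iv);
		return crypto_skcipher_encrypt(subreq);
	}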


> +               else
> +                       err = __aeskl_ecb_enc(ctx, dst, src, len);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));

This doesn't look right. The skcipher scatterlist walker may have a
live kmap() here so you can't just return.

> +                       return -EINVAL;
> +               }
> +
> +               nbytes &= AES_BLOCK_SIZE - 1;
> +
> +               err = skcipher_walk_done(&walk, nbytes);
> +               if (err)
> +                       return err;
> +       }
> +
> +       return err;
> +}
> +
> +static int ecb_decrypt(struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm;
> +       struct crypto_aes_ctx *ctx;
> +       struct skcipher_walk walk;
> +       unsigned int nbytes;
> +       int err;
> +
> +       tfm = crypto_skcipher_reqtfm(req);
> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> +
> +       err = skcipher_walk_virt(&walk, req, true);
> +       if (err)
> +               return err;
> +
> +       while ((nbytes = walk.nbytes)) {
> +               unsigned int len = nbytes & AES_BLOCK_MASK;
> +               const u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_ecb_dec(ctx, dst, src, len);
> +               else
> +                       err = __aeskl_ecb_dec(ctx, dst, src, len);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
> +                       return -EINVAL;
> +               }
> +
> +               nbytes &= AES_BLOCK_SIZE - 1;
> +
> +               err = skcipher_walk_done(&walk, nbytes);
> +               if (err)
> +                       return err;
> +       }
> +
> +       return err;
> +}
> +
> +static int cbc_encrypt(struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm;
> +       struct crypto_aes_ctx *ctx;
> +       struct skcipher_walk walk;
> +       unsigned int nbytes;
> +       int err;
> +
> +       tfm = crypto_skcipher_reqtfm(req);
> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> +       err = skcipher_walk_virt(&walk, req, true);
> +       if (err)
> +               return err;
> +
> +       while ((nbytes = walk.nbytes)) {
> +               unsigned int len = nbytes & AES_BLOCK_MASK;
> +               const u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +               u8 *iv = walk.iv;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_cbc_enc(ctx, dst, src, len, iv);
> +               else
> +                       err = __aeskl_cbc_enc(ctx, dst, src, len, iv);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
> +                       return -EINVAL;
> +               }
> +
> +               nbytes &= AES_BLOCK_SIZE - 1;
> +
> +               err = skcipher_walk_done(&walk, nbytes);
> +               if (err)
> +                       return err;
> +       }
> +
> +       return err;
> +}
> +
> +static int cbc_decrypt(struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm;
> +       struct crypto_aes_ctx *ctx;
> +       struct skcipher_walk walk;
> +       unsigned int nbytes;
> +       int err;
> +
> +       tfm = crypto_skcipher_reqtfm(req);
> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> +       err = skcipher_walk_virt(&walk, req, true);
> +       if (err)
> +               return err;
> +
> +       while ((nbytes = walk.nbytes)) {
> +               unsigned int len = nbytes & AES_BLOCK_MASK;
> +               const u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +               u8 *iv = walk.iv;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_cbc_dec(ctx, dst, src, len, iv);
> +               else
> +                       err = __aeskl_cbc_dec(ctx, dst, src, len, iv);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
> +                       return -EINVAL;
> +               }
> +
> +               nbytes &= AES_BLOCK_SIZE - 1;
> +
> +               err = skcipher_walk_done(&walk, nbytes);
> +               if (err)
> +                       return err;
> +       }
> +
> +       return err;
> +}
> +
> +#ifdef CONFIG_X86_64
> +static int ctr_crypt(struct skcipher_request *req)
> +{
> +       struct crypto_skcipher *tfm;
> +       struct crypto_aes_ctx *ctx;
> +       struct skcipher_walk walk;
> +       unsigned int nbytes;
> +       int err;
> +
> +       tfm = crypto_skcipher_reqtfm(req);
> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
> +
> +       err = skcipher_walk_virt(&walk, req, true);
> +       if (err)
> +               return err;
> +
> +       while ((nbytes = walk.nbytes) >= AES_BLOCK_SIZE) {
> +               unsigned int len = nbytes & AES_BLOCK_MASK;
> +               const u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +               u8 *iv = walk.iv;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_ctr_enc(ctx, dst, src, len, iv);
> +               else
> +                       err = __aeskl_ctr_enc(ctx, dst, src, len, iv);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
> +                       return -EINVAL;
> +               }
> +
> +               nbytes &= AES_BLOCK_SIZE - 1;
> +
> +               err = skcipher_walk_done(&walk, nbytes);
> +               if (err)
> +                       return err;
> +       }
> +
> +       if (nbytes) {
> +               u8 keystream[AES_BLOCK_SIZE];
> +               u8 *src = walk.src.virt.addr;
> +               u8 *dst = walk.dst.virt.addr;
> +               u8 *ctrblk = walk.iv;
> +
> +               kernel_fpu_begin();
> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +                       aesni_enc(ctx, keystream, ctrblk);
> +               else
> +                       err = __aeskl_enc1(ctx, keystream, ctrblk);
> +               kernel_fpu_end();
> +
> +               if (err) {
> +                       skcipher_walk_done(&walk, 0);
> +                       return -EINVAL;
> +               }
> +
> +               crypto_xor(keystream, src, nbytes);
> +               memcpy(dst, keystream, nbytes);
> +               crypto_inc(ctrblk, AES_BLOCK_SIZE);
> +
> +               err = skcipher_walk_done(&walk, 0);
> +       }
> +
> +       return err;
> +}
> +
> +static int aeskl_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
> +                           unsigned int keylen)
> +{
> +       struct aeskl_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
> +       int err;
> +
> +       err = xts_verify_key(tfm, key, keylen);
> +       if (err)
> +               return err;
> +
> +       keylen /= 2;
> +
> +       /* first half of xts-key is for crypt */
> +       err = aeskl_setkey_common(crypto_skcipher_tfm(tfm), ctx->raw_crypt_ctx, key, keylen);
> +       if (err)
> +               return err;
> +
> +       /* second half of xts-key is for tweak */
> +       return aeskl_setkey_common(crypto_skcipher_tfm(tfm), ctx->raw_tweak_ctx, key + keylen,
> +                                  keylen);
> +}
> +
> +static void aeskl_xts_tweak(const void *raw_ctx, u8 *out, const u8 *in)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
> +
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               aesni_enc(raw_ctx, out, in);
> +       else
> +               aeskl_enc1(raw_ctx, out, in);
> +}
> +
> +static void aeskl_xts_enc1(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
> +       common_glue_func_t fn = aeskl_enc1;
> +
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               fn = aesni_enc;
> +
> +       glue_xts_crypt_128bit_one(raw_ctx, dst, src, iv, fn);
> +}
> +
> +static void aeskl_xts_dec1(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
> +       common_glue_func_t fn = aeskl_dec1;
> +
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               fn = aesni_dec;
> +
> +       glue_xts_crypt_128bit_one(raw_ctx, dst, src, iv, fn);
> +}
> +
> +static void aeskl_xts_enc8(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
> +       int err = 0;
> +
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               aesni_xts_crypt8(raw_ctx, dst, src, true, (u8 *)iv);
> +       else
> +               err = __aeskl_xts_crypt8(raw_ctx, dst, src, true, (u8 *)iv);
> +
> +       if (err)
> +               pr_err("aes-kl (XTS encrypt): invalid handle\n");
> +}
> +
> +static void aeskl_xts_dec8(const void *raw_ctx, u8 *dst, const u8 *src, le128 *iv)
> +{
> +       struct crypto_aes_ctx *ctx = aes_ctx((void *)raw_ctx);
> +       int err = 0;
> +
> +       if (unlikely(ctx->key_length == AES_KEYSIZE_192))
> +               aesni_xts_crypt8(raw_ctx, dst, src, false, (u8 *)iv);
> +       else
> +               __aeskl_xts_crypt8(raw_ctx, dst, src, false, (u8 *)iv);
> +
> +       if (err)
> +               pr_err("aes-kl (XTS decrypt): invalid handle\n");
> +}
> +
> +static const struct common_glue_ctx aeskl_xts_enc = {
> +       .num_funcs = 2,
> +       .fpu_blocks_limit = 1,
> +
> +       .funcs = { {
> +               .num_blocks = 8,
> +               .fn_u = { .xts = aeskl_xts_enc8 }
> +       }, {
> +               .num_blocks = 1,
> +               .fn_u = { .xts = aeskl_xts_enc1 }
> +       } }
> +};
> +
> +static const struct common_glue_ctx aeskl_xts_dec = {
> +       .num_funcs = 2,
> +       .fpu_blocks_limit = 1,
> +
> +       .funcs = { {
> +               .num_blocks = 8,
> +               .fn_u = { .xts = aeskl_xts_dec8 }
> +       }, {
> +               .num_blocks = 1,
> +               .fn_u = { .xts = aeskl_xts_dec1 }
> +       } }
> +};
> +
> +static int xts_crypt(struct skcipher_request *req, bool decrypt)
> +{
> +       struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +       struct aeskl_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
> +       const struct common_glue_ctx *gctx;
> +
> +       if (decrypt)
> +               gctx = &aeskl_xts_dec;
> +       else
> +               gctx = &aeskl_xts_enc;
> +
> +       return glue_xts_req_128bit(gctx, req, aeskl_xts_tweak,
> +                                  aes_ctx(ctx->raw_tweak_ctx),
> +                                  aes_ctx(ctx->raw_crypt_ctx),
> +                                  decrypt);
> +}
> +
> +static int xts_encrypt(struct skcipher_request *req)
> +{
> +       return xts_crypt(req, false);
> +}
> +
> +static int xts_decrypt(struct skcipher_request *req)
> +{
> +       return xts_crypt(req, true);
> +}
> +#endif
> +
> +static struct crypto_alg aeskl_cipher_alg = {
> +       .cra_name               = "aes",
> +       .cra_driver_name        = "aes-aeskl",
> +       .cra_priority           = 301,
> +       .cra_flags              = CRYPTO_ALG_TYPE_CIPHER,
> +       .cra_blocksize          = AES_BLOCK_SIZE,
> +       .cra_ctxsize            = CRYPTO_AESKL_CTX_SIZE,
> +       .cra_module             = THIS_MODULE,
> +       .cra_u  = {
> +               .cipher = {
> +                       .cia_min_keysize        = AES_MIN_KEY_SIZE,
> +                       .cia_max_keysize        = AES_MAX_KEY_SIZE,
> +                       .cia_setkey             = aeskl_setkey,
> +                       .cia_encrypt            = aeskl_encrypt,
> +                       .cia_decrypt            = aeskl_decrypt
> +               }
> +       }
> +};
> +
> +static struct skcipher_alg aeskl_skciphers[] = {
> +       {
> +               .base = {
> +                       .cra_name               = "__ecb(aes)",
> +                       .cra_driver_name        = "__ecb-aes-aeskl",
> +                       .cra_priority           = 401,
> +                       .cra_flags              = CRYPTO_ALG_INTERNAL,
> +                       .cra_blocksize          = AES_BLOCK_SIZE,
> +                       .cra_ctxsize            = CRYPTO_AESKL_CTX_SIZE,
> +                       .cra_module             = THIS_MODULE,
> +               },
> +               .min_keysize    = AES_MIN_KEY_SIZE,
> +               .max_keysize    = AES_MAX_KEY_SIZE,
> +               .setkey         = aeskl_skcipher_setkey,
> +               .encrypt        = ecb_encrypt,
> +               .decrypt        = ecb_decrypt,
> +       }, {
> +               .base = {
> +                       .cra_name               = "__cbc(aes)",
> +                       .cra_driver_name        = "__cbc-aes-aeskl",
> +                       .cra_priority           = 401,
> +                       .cra_flags              = CRYPTO_ALG_INTERNAL,
> +                       .cra_blocksize          = AES_BLOCK_SIZE,
> +                       .cra_ctxsize            = CRYPTO_AESKL_CTX_SIZE,
> +                       .cra_module             = THIS_MODULE,
> +               },
> +               .min_keysize    = AES_MIN_KEY_SIZE,
> +               .max_keysize    = AES_MAX_KEY_SIZE,
> +               .ivsize         = AES_BLOCK_SIZE,
> +               .setkey         = aeskl_skcipher_setkey,
> +               .encrypt        = cbc_encrypt,
> +               .decrypt        = cbc_decrypt,
> +#ifdef CONFIG_X86_64
> +       }, {
> +               .base = {
> +                       .cra_name               = "__ctr(aes)",
> +                       .cra_driver_name        = "__ctr-aes-aeskl",
> +                       .cra_priority           = 401,
> +                       .cra_flags              = CRYPTO_ALG_INTERNAL,
> +                       .cra_blocksize          = 1,
> +                       .cra_ctxsize            = CRYPTO_AESKL_CTX_SIZE,
> +                       .cra_module             = THIS_MODULE,
> +               },
> +               .min_keysize    = AES_MIN_KEY_SIZE,
> +               .max_keysize    = AES_MAX_KEY_SIZE,
> +               .ivsize         = AES_BLOCK_SIZE,
> +               .chunksize      = AES_BLOCK_SIZE,
> +               .setkey         = aeskl_skcipher_setkey,
> +               .encrypt        = ctr_crypt,
> +               .decrypt        = ctr_crypt,
> +       }, {
> +               .base = {
> +                       .cra_name               = "__xts(aes)",
> +                       .cra_driver_name        = "__xts-aes-aeskl",
> +                       .cra_priority           = 402,
> +                       .cra_flags              = CRYPTO_ALG_INTERNAL,
> +                       .cra_blocksize          = AES_BLOCK_SIZE,
> +                       .cra_ctxsize            = XTS_AESKL_CTX_SIZE,
> +                       .cra_module             = THIS_MODULE,
> +               },
> +               .min_keysize    = 2 * AES_MIN_KEY_SIZE,
> +               .max_keysize    = 2 * AES_MAX_KEY_SIZE,
> +               .ivsize         = AES_BLOCK_SIZE,
> +               .setkey         = aeskl_xts_setkey,
> +               .encrypt        = xts_encrypt,
> +               .decrypt        = xts_decrypt,
> +#endif
> +       }
> +};
> +
> +static struct simd_skcipher_alg *aeskl_simd_skciphers[ARRAY_SIZE(aeskl_skciphers)];
> +
> +static const struct x86_cpu_id aes_keylocker_cpuid[] = {
> +       X86_MATCH_FEATURE(X86_FEATURE_AES, NULL),
> +       X86_MATCH_FEATURE(X86_FEATURE_KEYLOCKER, NULL),
> +       {}
> +};
> +
> +static int __init aeskl_init(void)
> +{
> +       u32 eax, ebx, ecx, edx;
> +       int err;
> +
> +       if (!x86_match_cpu(aes_keylocker_cpuid))
> +               return -ENODEV;
> +
> +       cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
> +       if (!(ebx & KEYLOCKER_CPUID_EBX_AESKLE) ||
> +           !(eax & KEYLOCKER_CPUID_EAX_SUPERVISOR) ||
> +           !(ebx & KEYLOCKER_CPUID_EBX_WIDE))
> +               return -ENODEV;
> +
> +       err = crypto_register_alg(&aeskl_cipher_alg);
> +       if (err)
> +               return err;
> +
> +       err = simd_register_skciphers_compat(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
> +                                            aeskl_simd_skciphers);
> +       if (err)
> +               goto unregister_algs;
> +
> +       return 0;
> +
> +unregister_algs:
> +       crypto_unregister_alg(&aeskl_cipher_alg);
> +
> +       return err;
> +}
> +
> +static void __exit aeskl_exit(void)
> +{
> +       simd_unregister_skciphers(aeskl_skciphers, ARRAY_SIZE(aeskl_skciphers),
> +                                 aeskl_simd_skciphers);
> +       crypto_unregister_alg(&aeskl_cipher_alg);
> +}
> +
> +late_initcall(aeskl_init);
> +module_exit(aeskl_exit);
> +
> +MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, AES Key Locker implementation");
> +MODULE_LICENSE("GPL");
> +MODULE_ALIAS_CRYPTO("aes");
> diff --git a/arch/x86/include/asm/inst.h b/arch/x86/include/asm/inst.h
> index bd7f02480ca1..b719a11a2905 100644
> --- a/arch/x86/include/asm/inst.h
> +++ b/arch/x86/include/asm/inst.h
> @@ -122,9 +122,62 @@
>  #endif
>         .endm
>
> +       .macro XMM_NUM opd xmm
> +       \opd = REG_NUM_INVALID
> +       .ifc \xmm,%xmm0
> +       \opd = 0
> +       .endif
> +       .ifc \xmm,%xmm1
> +       \opd = 1
> +       .endif
> +       .ifc \xmm,%xmm2
> +       \opd = 2
> +       .endif
> +       .ifc \xmm,%xmm3
> +       \opd = 3
> +       .endif
> +       .ifc \xmm,%xmm4
> +       \opd = 4
> +       .endif
> +       .ifc \xmm,%xmm5
> +       \opd = 5
> +       .endif
> +       .ifc \xmm,%xmm6
> +       \opd = 6
> +       .endif
> +       .ifc \xmm,%xmm7
> +       \opd = 7
> +       .endif
> +       .ifc \xmm,%xmm8
> +       \opd = 8
> +       .endif
> +       .ifc \xmm,%xmm9
> +       \opd = 9
> +       .endif
> +       .ifc \xmm,%xmm10
> +       \opd = 10
> +       .endif
> +       .ifc \xmm,%xmm11
> +       \opd = 11
> +       .endif
> +       .ifc \xmm,%xmm12
> +       \opd = 12
> +       .endif
> +       .ifc \xmm,%xmm13
> +       \opd = 13
> +       .endif
> +       .ifc \xmm,%xmm14
> +       \opd = 14
> +       .endif
> +       .ifc \xmm,%xmm15
> +       \opd = 15
> +       .endif
> +       .endm
> +
>         .macro REG_TYPE type reg
>         R32_NUM reg_type_r32 \reg
>         R64_NUM reg_type_r64 \reg
> +       XMM_NUM reg_type_xmm \reg
>         .if reg_type_r64 <> REG_NUM_INVALID
>         \type = REG_TYPE_R64
>         .elseif reg_type_r32 <> REG_NUM_INVALID
> @@ -134,6 +187,14 @@
>         .endif
>         .endm
>
> +       .macro PFX_OPD_SIZE
> +       .byte 0x66
> +       .endm
> +
> +       .macro PFX_RPT
> +       .byte 0xf3
> +       .endm
> +
>         .macro PFX_REX opd1 opd2 W=0
>         .if ((\opd1 | \opd2) & 8) || \W
>         .byte 0x40 | ((\opd1 & 8) >> 3) | ((\opd2 & 8) >> 1) | (\W << 3)
> @@ -158,6 +219,146 @@
>         .byte 0x0f, 0xc7
>         MODRM 0xc0 rdpid_opd 0x7
>  .endm
> +
> +       .macro ENCODEKEY128 reg1 reg2
> +       R32_NUM encodekey128_opd1 \reg1
> +       R32_NUM encodekey128_opd2 \reg2
> +       PFX_RPT
> +       .byte 0xf, 0x38, 0xfa
> +       MODRM 0xc0  encodekey128_opd2 encodekey128_opd1
> +       .endm
> +
> +       .macro ENCODEKEY256 reg1 reg2
> +       R32_NUM encodekey256_opd1 \reg1
> +       R32_NUM encodekey256_opd2 \reg2
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xfb
> +       MODRM 0xc0 encodekey256_opd1 encodekey256_opd2
> +       .endm
> +
> +       .macro AESENC128KL reg, xmm
> +       REG_TYPE aesenc128kl_opd1_type \reg
> +       .if aesenc128kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesenc128kl_opd1 \reg
> +       .elseif aesenc128kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesenc128kl_opd1 \reg
> +       .else
> +       aesenc128kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       XMM_NUM aesenc128kl_opd2 \xmm
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xdc
> +       MODRM 0x0 aesenc128kl_opd1 aesenc128kl_opd2
> +       .endm
> +
> +       .macro AESDEC128KL reg, xmm
> +       REG_TYPE aesdec128kl_opd1_type \reg
> +       .if aesdec128kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesdec128kl_opd1 \reg
> +       .elseif aesdec128kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesdec128kl_opd1 \reg
> +       .else
> +       aesdec128kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       XMM_NUM aesdec128kl_opd2 \xmm
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xdd
> +       MODRM 0x0 aesdec128kl_opd1 aesdec128kl_opd2
> +       .endm
> +
> +       .macro AESENC256KL reg, xmm
> +       REG_TYPE aesenc256kl_opd1_type \reg
> +       .if aesenc256kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesenc256kl_opd1 \reg
> +       .elseif aesenc256kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesenc256kl_opd1 \reg
> +       .else
> +       aesenc256kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       XMM_NUM aesenc256kl_opd2 \xmm
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xde
> +       MODRM 0x0 aesenc256kl_opd1 aesenc256kl_opd2
> +       .endm
> +
> +       .macro AESDEC256KL reg, xmm
> +       REG_TYPE aesdec256kl_opd1_type \reg
> +       .if aesdec256kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesdec256kl_opd1 \reg
> +       .elseif aesdec256kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesdec256kl_opd1 \reg
> +       .else
> +       aesdec256kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       XMM_NUM aesdec256kl_opd2 \xmm
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xdf
> +       MODRM 0x0 aesdec256kl_opd1 aesdec256kl_opd2
> +       .endm
> +
> +       .macro AESENCWIDE128KL reg
> +       REG_TYPE aesencwide128kl_opd1_type \reg
> +       .if aesencwide128kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesencwide128kl_opd1 \reg
> +       .elseif aesencwide128kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesencwide128kl_opd1 \reg
> +       .else
> +       aesencwide128kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xd8
> +       MODRM 0x0 aesencwide128kl_opd1 0x0
> +       .endm
> +
> +       .macro AESDECWIDE128KL reg
> +       REG_TYPE aesdecwide128kl_opd1_type \reg
> +       .if aesdecwide128kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesdecwide128kl_opd1 \reg
> +       .elseif aesdecwide128kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesdecwide128kl_opd1 \reg
> +       .else
> +       aesdecwide128kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xd8
> +       MODRM 0x0 aesdecwide128kl_opd1 0x1
> +       .endm
> +
> +       .macro AESENCWIDE256KL reg
> +       REG_TYPE aesencwide256kl_opd1_type \reg
> +       .if aesencwide256kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesencwide256kl_opd1 \reg
> +       .elseif aesencwide256kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesencwide256kl_opd1 \reg
> +       .else
> +       aesencwide256kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xd8
> +       MODRM 0x0 aesencwide256kl_opd1 0x2
> +       .endm
> +
> +       .macro AESDECWIDE256KL reg
> +       REG_TYPE aesdecwide256kl_opd1_type \reg
> +       .if aesdecwide256kl_opd1_type == REG_TYPE_R64
> +       R64_NUM aesdecwide256kl_opd1 \reg
> +       .elseif aesdecwide256kl_opd1_type == REG_TYPE_R32
> +       R32_NUM aesdecwide256kl_opd1 \reg
> +       .else
> +       aesdecwide256kl_opd1 = REG_NUM_INVALID
> +       .endif
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xd8
> +       MODRM 0x0 aesdecwide256kl_opd1 0x3
> +       .endm
> +
> +       .macro LOADIWKEY xmm1, xmm2
> +       XMM_NUM loadiwkey_opd1 \xmm1
> +       XMM_NUM loadiwkey_opd2 \xmm2
> +       PFX_RPT
> +       .byte 0x0f, 0x38, 0xdc
> +       MODRM 0xc0 loadiwkey_opd1 loadiwkey_opd2
> +       .endm
>  #endif
>
>  #endif
> diff --git a/crypto/Kconfig b/crypto/Kconfig
> index 094ef56ab7b4..75a184179c72 100644
> --- a/crypto/Kconfig
> +++ b/crypto/Kconfig
> @@ -1159,6 +1159,34 @@ config CRYPTO_AES_NI_INTEL
>           ECB, CBC, LRW, XTS. The 64 bit version has additional
>           acceleration for CTR.
>
> +config CRYPTO_AES_KL
> +       tristate "AES cipher algorithms (AES-KL)"
> +       depends on X86_KEYLOCKER
> +       select CRYPTO_AES_NI_INTEL
> +       help
> +         Use AES Key Locker instructions for AES algorithm.
> +
> +         AES cipher algorithms (FIPS-197). AES uses the Rijndael
> +         algorithm.
> +
> +         Rijndael appears to be consistently a very good performer in both
> +         hardware and software across a wide range of computing
> +         environments regardless of its use in feedback or non-feedback
> +         modes. Its key setup time is excellent, and its key agility is
> +         good. Rijndael's very low memory requirements make it very well
> +         suited for restricted-space environments, in which it also
> +         demonstrates excellent performance. Rijndael's operations are
> +         among the easiest to defend against power and timing attacks.
> +
> +         The AES specifies three key sizes: 128, 192 and 256 bits
> +
> +         See <http://csrc.nist.gov/encryption/aes/> for more information.
> +
> +         For 128- and 256-bit keys, the AES cipher algorithm is
> +         implemented by AES Key Locker instructions. This implementation
> +         does not need an AES key once wrapped to an encoded form. For AES
> +         compliance, 192-bit is processed by AES-NI instructions.
> +
>  config CRYPTO_AES_SPARC64
>         tristate "AES cipher algorithms (SPARC64)"
>         depends on SPARC64
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states
  2020-12-16 17:41 ` [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states Chang S. Bae
@ 2020-12-17 19:10   ` Eric Biggers
  2020-12-18  1:00     ` Bae, Chang Seok
  2021-01-28 10:34   ` Rafael J. Wysocki
  1 sibling, 1 reply; 30+ messages in thread
From: Eric Biggers @ 2020-12-17 19:10 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: tglx, mingo, bp, luto, x86, herbert, dan.j.williams, dave.hansen,
	ravi.v.shankar, ning.sun, kumar.n.dwarakanath, linux-crypto,
	linux-kernel, linux-pm

On Wed, Dec 16, 2020 at 09:41:42AM -0800, Chang S. Bae wrote:
> When the system state switches to these sleep states, the internal key gets
> reset. Since this system transition is transparent to userspace, the
> internal key needs to be restored properly.
> 
> Key Locker provides a mechanism to back up the internal key in non-volatile
> memory. The kernel requests a backup right after the key loaded at
> boot-time and copies it later when the system wakes up.
> 
> The backup during the S5 sleep state is not trusted. It is overwritten by a
> new key at the next boot.
> 
> On a system with the S3/4 states, enable the feature only when the backup
> mechanism is supported.
> 
> Disable the feature when the copy fails (or the backup corrupts). The
> shutdown is considered too noisy. A new key is considerable only when
> threads can be synchronously suspended.

Can this backup key be used to decrypt the encoded AES keys without executing
the keylocker instructions on the same CPU?

- Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 0/8] x86: Support Intel Key Locker
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (7 preceding siblings ...)
  2020-12-16 17:41 ` [RFC PATCH 8/8] x86/cpu: Support the hardware randomization option for Key Locker internal key Chang S. Bae
@ 2020-12-17 19:10 ` Eric Biggers
  2020-12-17 20:07   ` Dan Williams
  2020-12-18  1:08   ` Bae, Chang Seok
  2020-12-19 18:59 ` Andy Lutomirski
  9 siblings, 2 replies; 30+ messages in thread
From: Eric Biggers @ 2020-12-17 19:10 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: tglx, mingo, bp, luto, x86, herbert, dan.j.williams, dave.hansen,
	ravi.v.shankar, ning.sun, kumar.n.dwarakanath, linux-crypto,
	linux-kernel

On Wed, Dec 16, 2020 at 09:41:38AM -0800, Chang S. Bae wrote:
> [1] Intel Architecture Instruction Set Extensions Programming Reference:
>     https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-$
> [2] Intel Key Locker Specification:
>     https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-speci$

Both of these links are broken.

- Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 0/8] x86: Support Intel Key Locker
  2020-12-17 19:10 ` [RFC PATCH 0/8] x86: Support Intel Key Locker Eric Biggers
@ 2020-12-17 20:07   ` Dan Williams
  2020-12-18  1:08   ` Bae, Chang Seok
  1 sibling, 0 replies; 30+ messages in thread
From: Dan Williams @ 2020-12-17 20:07 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Chang S. Bae, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, X86 ML, Herbert Xu, Dave Hansen, Ravi V Shankar,
	ning sun, Kumar N Dwarakanath, linux-crypto,
	Linux Kernel Mailing List

On Thu, Dec 17, 2020 at 11:11 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Wed, Dec 16, 2020 at 09:41:38AM -0800, Chang S. Bae wrote:
> > [1] Intel Architecture Instruction Set Extensions Programming Reference:
> >     https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-$

https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-extensions-programming-reference.pdf

> > [2] Intel Key Locker Specification:
> >     https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-speci$

https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-specification.pdf

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
  2020-12-17 10:16   ` Ard Biesheuvel
@ 2020-12-17 20:54   ` Andy Lutomirski
  2021-05-14 20:48     ` Bae, Chang Seok
  2020-12-17 20:58   ` [NEEDS-REVIEW] " Dave Hansen
  2020-12-18 10:11   ` Peter Zijlstra
  3 siblings, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2020-12-17 20:54 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andrew Lutomirski,
	X86 ML, Herbert Xu, Dan Williams, Dave Hansen, Ravi V. Shankar,
	ning.sun, kumar.n.dwarakanath, Linux Crypto Mailing List, LKML

On Wed, Dec 16, 2020 at 9:46 AM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> Key Locker (KL) is Intel's new security feature that protects the AES key
> at the time of data transformation. New AES SIMD instructions -- as a
> successor of Intel's AES-NI -- are provided to encode an AES key and
> reference it for the AES algorithm.
>
> New instructions support 128/256-bit keys. While it is not desirable to
> receive any 192-bit key, AES-NI instructions are taken to serve this size.
>
> New instructions are operational in both 32-/64-bit modes.
>
> Add a set of new macros for the new instructions so that no new binutils
> version is required.
>
> Implemented methods are for a single block as well as ECB, CBC, CTR, and
> XTS modes. The methods are not compatible with other AES implementations as
> accessing an encrypted key instead of the normal AES key.
>
> setkey() call encodes an AES key. User may displace the AES key once
> encoded, as encrypt()/decrypt() methods do not need the key.
>
> Most C code follows the AES-NI implementation. It has higher priority than
> the AES-NI as providing key protection.

What does this patch *do*?

IKL gives a few special key slots that have certain restrictions and
certain security properties.  What can you use them for?  With this
series installed, what is the user-visible effect?  Is there a new
API?  Do you use them with the netlink user crypto interface?  Do you
use them for encrypting disks?  Swap?  How?  How do you allocate,
reset, and free keys?  Who has permissions to use them?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [NEEDS-REVIEW] [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
  2020-12-17 10:16   ` Ard Biesheuvel
  2020-12-17 20:54   ` Andy Lutomirski
@ 2020-12-17 20:58   ` Dave Hansen
  2020-12-18  9:56     ` Peter Zijlstra
  2020-12-18 10:11   ` Peter Zijlstra
  3 siblings, 1 reply; 30+ messages in thread
From: Dave Hansen @ 2020-12-17 20:58 UTC (permalink / raw)
  To: Chang S. Bae, tglx, mingo, bp, luto, x86, herbert
  Cc: dan.j.williams, ravi.v.shankar, ning.sun, kumar.n.dwarakanath,
	linux-crypto, linux-kernel

On 12/16/20 9:41 AM, Chang S. Bae wrote:
> +config CRYPTO_AES_KL
> +	tristate "AES cipher algorithms (AES-KL)"
> +	depends on X86_KEYLOCKER
> +	select CRYPTO_AES_NI_INTEL
> +	help
> +	  Use AES Key Locker instructions for AES algorithm.
> +
> +	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
> +	  algorithm.
> +
> +	  Rijndael appears to be consistently a very good performer in both
> +	  hardware and software across a wide range of computing
> +	  environments regardless of its use in feedback or non-feedback
> +	  modes. Its key setup time is excellent, and its key agility is
> +	  good. Rijndael's very low memory requirements make it very well
> +	  suited for restricted-space environments, in which it also
> +	  demonstrates excellent performance. Rijndael's operations are
> +	  among the easiest to defend against power and timing attacks.
> +
> +	  The AES specifies three key sizes: 128, 192 and 256 bits
> +
> +	  See <http://csrc.nist.gov/encryption/aes/> for more information.
> +
> +	  For 128- and 256-bit keys, the AES cipher algorithm is
> +	  implemented by AES Key Locker instructions. This implementation
> +	  does not need an AES key once wrapped to an encoded form. For AES
> +	  compliance, 192-bit is processed by AES-NI instructions.

Giving a history lesson and high-level overview of AES doesn't quite
seem appropriate here, unless this is the first the kernel has seen of AES.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states
  2020-12-17 19:10   ` Eric Biggers
@ 2020-12-18  1:00     ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2020-12-18  1:00 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	the arch/x86 maintainers, herbert, Williams, Dan J, Hansen, Dave,
	Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N, linux-crypto,
	linux-kernel, linux-pm


> On Dec 18, 2020, at 04:10, Eric Biggers <ebiggers@kernel.org> wrote:
> 
> On Wed, Dec 16, 2020 at 09:41:42AM -0800, Chang S. Bae wrote:
>> When the system state switches to these sleep states, the internal key gets
>> reset. Since this system transition is transparent to userspace, the
>> internal key needs to be restored properly.
>> 
>> Key Locker provides a mechanism to back up the internal key in non-volatile
>> memory. The kernel requests a backup right after the key loaded at
>> boot-time and copies it later when the system wakes up.
>> 
>> The backup during the S5 sleep state is not trusted. It is overwritten by a
>> new key at the next boot.
>> 
>> On a system with the S3/4 states, enable the feature only when the backup
>> mechanism is supported.
>> 
>> Disable the feature when the copy fails (or the backup corrupts). The
>> shutdown is considered too noisy. A new key is considerable only when
>> threads can be synchronously suspended.
> 
> Can this backup key be used to decrypt the encoded AES keys without executing
> the keylocker instructions on the same CPU?

No. The backup key itself is inaccessible to the software.

Thanks,
Chang


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 0/8] x86: Support Intel Key Locker
  2020-12-17 19:10 ` [RFC PATCH 0/8] x86: Support Intel Key Locker Eric Biggers
  2020-12-17 20:07   ` Dan Williams
@ 2020-12-18  1:08   ` Bae, Chang Seok
  1 sibling, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2020-12-18  1:08 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Thomas Gleixner, mingo, bp, luto, x86, herbert, Williams, Dan J,
	Hansen, Dave, Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N,
	linux-crypto, linux-kernel


> On Dec 18, 2020, at 04:10, Eric Biggers <ebiggers@kernel.org> wrote:
> 
> On Wed, Dec 16, 2020 at 09:41:38AM -0800, Chang S. Bae wrote:
>> [1] Intel Architecture Instruction Set Extensions Programming Reference:
>>    https://software.intel.com/content/dam/develop/external/us/en/documents/architecture-instruction-set-$
>> [2] Intel Key Locker Specification:
>>    https://software.intel.com/content/dam/develop/external/us/en/documents/343965-intel-key-locker-speci$
> 
> Both of these links are broken.

Sorry, my bad -- my editor ate some strings with copy & paste. 

Dan has provided the links already, thank you.

Chang

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [NEEDS-REVIEW] [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-17 20:58   ` [NEEDS-REVIEW] " Dave Hansen
@ 2020-12-18  9:56     ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2020-12-18  9:56 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Chang S. Bae, tglx, mingo, bp, luto, x86, herbert,
	dan.j.williams, ravi.v.shankar, ning.sun, kumar.n.dwarakanath,
	linux-crypto, linux-kernel

On Thu, Dec 17, 2020 at 12:58:34PM -0800, Dave Hansen wrote:
> On 12/16/20 9:41 AM, Chang S. Bae wrote:
> > +config CRYPTO_AES_KL
> > +	tristate "AES cipher algorithms (AES-KL)"
> > +	depends on X86_KEYLOCKER
> > +	select CRYPTO_AES_NI_INTEL
> > +	help
> > +	  Use AES Key Locker instructions for AES algorithm.
> > +
> > +	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
> > +	  algorithm.
> > +
> > +	  Rijndael appears to be consistently a very good performer in both
> > +	  hardware and software across a wide range of computing
> > +	  environments regardless of its use in feedback or non-feedback
> > +	  modes. Its key setup time is excellent, and its key agility is
> > +	  good. Rijndael's very low memory requirements make it very well
> > +	  suited for restricted-space environments, in which it also
> > +	  demonstrates excellent performance. Rijndael's operations are
> > +	  among the easiest to defend against power and timing attacks.
> > +
> > +	  The AES specifies three key sizes: 128, 192 and 256 bits
> > +
> > +	  See <http://csrc.nist.gov/encryption/aes/> for more information.
> > +

It's direct copy-pasta from CRYPTO_AES_NI_INTEL until about here.

> > +	  For 128- and 256-bit keys, the AES cipher algorithm is
> > +	  implemented by AES Key Locker instructions. This implementation
> > +	  does not need an AES key once wrapped to an encoded form. For AES
> > +	  compliance, 192-bit is processed by AES-NI instructions.
> 
> Giving a history lesson and high-level overview of AES doesn't quite
> seem appropriate here, unless this is the first the kernel has seen of AES.

And the new bits aren't really enlightening either, as you point out.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance
  2020-12-16 17:41 ` [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance Chang S. Bae
@ 2020-12-18  9:59   ` Peter Zijlstra
  2020-12-18 10:43     ` Bae, Chang Seok
  0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2020-12-18  9:59 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: tglx, mingo, bp, luto, x86, herbert, dan.j.williams, dave.hansen,
	ravi.v.shankar, ning.sun, kumar.n.dwarakanath, linux-crypto,
	linux-kernel, linux-kselftest

On Wed, Dec 16, 2020 at 09:41:44AM -0800, Chang S. Bae wrote:
> +	/* ENCODEKEY128 %EAX */
> +	asm volatile (".byte 0xf3, 0xf, 0x38, 0xfa, 0xc0");

This is lacking a binutils version number. Boris, didn't you do a
checkpatch.pl thing for that?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
                     ` (2 preceding siblings ...)
  2020-12-17 20:58   ` [NEEDS-REVIEW] " Dave Hansen
@ 2020-12-18 10:11   ` Peter Zijlstra
  2020-12-18 10:34     ` Bae, Chang Seok
  3 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2020-12-18 10:11 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: tglx, mingo, bp, luto, x86, herbert, dan.j.williams, dave.hansen,
	ravi.v.shankar, ning.sun, kumar.n.dwarakanath, linux-crypto,
	linux-kernel

On Wed, Dec 16, 2020 at 09:41:45AM -0800, Chang S. Bae wrote:
> diff --git a/arch/x86/include/asm/inst.h b/arch/x86/include/asm/inst.h
> index bd7f02480ca1..b719a11a2905 100644
> --- a/arch/x86/include/asm/inst.h
> +++ b/arch/x86/include/asm/inst.h
> @@ -122,9 +122,62 @@
>  #endif
>  	.endm
>  
> +	.macro XMM_NUM opd xmm
> +	\opd = REG_NUM_INVALID
> +	.ifc \xmm,%xmm0
> +	\opd = 0
> +	.endif
> +	.ifc \xmm,%xmm1
> +	\opd = 1
> +	.endif
> +	.ifc \xmm,%xmm2
> +	\opd = 2
> +	.endif
> +	.ifc \xmm,%xmm3
> +	\opd = 3
> +	.endif
> +	.ifc \xmm,%xmm4
> +	\opd = 4
> +	.endif
> +	.ifc \xmm,%xmm5
> +	\opd = 5
> +	.endif
> +	.ifc \xmm,%xmm6
> +	\opd = 6
> +	.endif
> +	.ifc \xmm,%xmm7
> +	\opd = 7
> +	.endif
> +	.ifc \xmm,%xmm8
> +	\opd = 8
> +	.endif
> +	.ifc \xmm,%xmm9
> +	\opd = 9
> +	.endif
> +	.ifc \xmm,%xmm10
> +	\opd = 10
> +	.endif
> +	.ifc \xmm,%xmm11
> +	\opd = 11
> +	.endif
> +	.ifc \xmm,%xmm12
> +	\opd = 12
> +	.endif
> +	.ifc \xmm,%xmm13
> +	\opd = 13
> +	.endif
> +	.ifc \xmm,%xmm14
> +	\opd = 14
> +	.endif
> +	.ifc \xmm,%xmm15
> +	\opd = 15
> +	.endif
> +	.endm
> +
>  	.macro REG_TYPE type reg
>  	R32_NUM reg_type_r32 \reg
>  	R64_NUM reg_type_r64 \reg
> +	XMM_NUM reg_type_xmm \reg
>  	.if reg_type_r64 <> REG_NUM_INVALID
>  	\type = REG_TYPE_R64
>  	.elseif reg_type_r32 <> REG_NUM_INVALID
> @@ -134,6 +187,14 @@
>  	.endif
>  	.endm
>  
> +	.macro PFX_OPD_SIZE
> +	.byte 0x66
> +	.endm
> +
> +	.macro PFX_RPT
> +	.byte 0xf3
> +	.endm
> +
>  	.macro PFX_REX opd1 opd2 W=0
>  	.if ((\opd1 | \opd2) & 8) || \W
>  	.byte 0x40 | ((\opd1 & 8) >> 3) | ((\opd2 & 8) >> 1) | (\W << 3)
> @@ -158,6 +219,146 @@
>  	.byte 0x0f, 0xc7
>  	MODRM 0xc0 rdpid_opd 0x7
>  .endm
> +
> +	.macro ENCODEKEY128 reg1 reg2
> +	R32_NUM encodekey128_opd1 \reg1
> +	R32_NUM encodekey128_opd2 \reg2
> +	PFX_RPT
> +	.byte 0xf, 0x38, 0xfa
> +	MODRM 0xc0  encodekey128_opd2 encodekey128_opd1
> +	.endm
> +
> +	.macro ENCODEKEY256 reg1 reg2
> +	R32_NUM encodekey256_opd1 \reg1
> +	R32_NUM encodekey256_opd2 \reg2
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xfb
> +	MODRM 0xc0 encodekey256_opd1 encodekey256_opd2
> +	.endm
> +
> +	.macro AESENC128KL reg, xmm
> +	REG_TYPE aesenc128kl_opd1_type \reg
> +	.if aesenc128kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesenc128kl_opd1 \reg
> +	.elseif aesenc128kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesenc128kl_opd1 \reg
> +	.else
> +	aesenc128kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	XMM_NUM aesenc128kl_opd2 \xmm
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xdc
> +	MODRM 0x0 aesenc128kl_opd1 aesenc128kl_opd2
> +	.endm
> +
> +	.macro AESDEC128KL reg, xmm
> +	REG_TYPE aesdec128kl_opd1_type \reg
> +	.if aesdec128kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesdec128kl_opd1 \reg
> +	.elseif aesdec128kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesdec128kl_opd1 \reg
> +	.else
> +	aesdec128kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	XMM_NUM aesdec128kl_opd2 \xmm
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xdd
> +	MODRM 0x0 aesdec128kl_opd1 aesdec128kl_opd2
> +	.endm
> +
> +	.macro AESENC256KL reg, xmm
> +	REG_TYPE aesenc256kl_opd1_type \reg
> +	.if aesenc256kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesenc256kl_opd1 \reg
> +	.elseif aesenc256kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesenc256kl_opd1 \reg
> +	.else
> +	aesenc256kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	XMM_NUM aesenc256kl_opd2 \xmm
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xde
> +	MODRM 0x0 aesenc256kl_opd1 aesenc256kl_opd2
> +	.endm
> +
> +	.macro AESDEC256KL reg, xmm
> +	REG_TYPE aesdec256kl_opd1_type \reg
> +	.if aesdec256kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesdec256kl_opd1 \reg
> +	.elseif aesdec256kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesdec256kl_opd1 \reg
> +	.else
> +	aesdec256kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	XMM_NUM aesdec256kl_opd2 \xmm
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xdf
> +	MODRM 0x0 aesdec256kl_opd1 aesdec256kl_opd2
> +	.endm
> +
> +	.macro AESENCWIDE128KL reg
> +	REG_TYPE aesencwide128kl_opd1_type \reg
> +	.if aesencwide128kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesencwide128kl_opd1 \reg
> +	.elseif aesencwide128kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesencwide128kl_opd1 \reg
> +	.else
> +	aesencwide128kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xd8
> +	MODRM 0x0 aesencwide128kl_opd1 0x0
> +	.endm
> +
> +	.macro AESDECWIDE128KL reg
> +	REG_TYPE aesdecwide128kl_opd1_type \reg
> +	.if aesdecwide128kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesdecwide128kl_opd1 \reg
> +	.elseif aesdecwide128kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesdecwide128kl_opd1 \reg
> +	.else
> +	aesdecwide128kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xd8
> +	MODRM 0x0 aesdecwide128kl_opd1 0x1
> +	.endm
> +
> +	.macro AESENCWIDE256KL reg
> +	REG_TYPE aesencwide256kl_opd1_type \reg
> +	.if aesencwide256kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesencwide256kl_opd1 \reg
> +	.elseif aesencwide256kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesencwide256kl_opd1 \reg
> +	.else
> +	aesencwide256kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xd8
> +	MODRM 0x0 aesencwide256kl_opd1 0x2
> +	.endm
> +
> +	.macro AESDECWIDE256KL reg
> +	REG_TYPE aesdecwide256kl_opd1_type \reg
> +	.if aesdecwide256kl_opd1_type == REG_TYPE_R64
> +	R64_NUM aesdecwide256kl_opd1 \reg
> +	.elseif aesdecwide256kl_opd1_type == REG_TYPE_R32
> +	R32_NUM aesdecwide256kl_opd1 \reg
> +	.else
> +	aesdecwide256kl_opd1 = REG_NUM_INVALID
> +	.endif
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xd8
> +	MODRM 0x0 aesdecwide256kl_opd1 0x3
> +	.endm
> +
> +	.macro LOADIWKEY xmm1, xmm2
> +	XMM_NUM loadiwkey_opd1 \xmm1
> +	XMM_NUM loadiwkey_opd2 \xmm2
> +	PFX_RPT
> +	.byte 0x0f, 0x38, 0xdc
> +	MODRM 0xc0 loadiwkey_opd1 loadiwkey_opd2
> +	.endm
>  #endif
>  
>  #endif

*groan*, so what actual version of binutils is needed and why is this
driver important enough to build on ancient crud to warrant all this
gunk?


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-18 10:11   ` Peter Zijlstra
@ 2020-12-18 10:34     ` Bae, Chang Seok
  2020-12-18 11:00       ` Borislav Petkov
  2020-12-18 14:33       ` Peter Zijlstra
  0 siblings, 2 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2020-12-18 10:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	the arch/x86 maintainers, herbert, Williams, Dan J, Hansen, Dave,
	Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N, linux-crypto,
	linux-kernel


> On Dec 18, 2020, at 19:11, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> *groan*, so what actual version of binutils is needed and why is this
> driver important enough to build on ancient crud to warrant all this
> gunk?

The new Key Locker instructions appear to have been added a few months
ago [1], but the latest binutils release (2.35.1) does not include them yet.

I’m open to dropping the macros if there is a better way to define them
without binutils support.

Thanks,
Chang

[1] https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=c4694f172b51a2168b8cc15109ab1b97fc0bcb9c

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance
  2020-12-18  9:59   ` Peter Zijlstra
@ 2020-12-18 10:43     ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2020-12-18 10:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Ingo Molnar, bp, luto, x86, herbert, Williams,
	Dan J, Hansen, Dave, Shankar, Ravi V, Sun, Ning, Dwarakanath,
	Kumar N, linux-crypto, linux-kernel, linux-kselftest


> On Dec 18, 2020, at 18:59, Peter Zijlstra <peterz@infradead.org> wrote:
> 
> On Wed, Dec 16, 2020 at 09:41:44AM -0800, Chang S. Bae wrote:
>> +	/* ENCODEKEY128 %EAX */
>> +	asm volatile (".byte 0xf3, 0xf, 0x38, 0xfa, 0xc0");
> 
> This is lacking a binutils version number.

I will add the version number once a binutils release that supports
this is available.

Thanks,
Chang

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-18 10:34     ` Bae, Chang Seok
@ 2020-12-18 11:00       ` Borislav Petkov
  2020-12-18 14:33       ` Peter Zijlstra
  1 sibling, 0 replies; 30+ messages in thread
From: Borislav Petkov @ 2020-12-18 11:00 UTC (permalink / raw)
  To: Bae, Chang Seok
  Cc: Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Andy Lutomirski,
	the arch/x86 maintainers, herbert, Williams, Dan J, Hansen, Dave,
	Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N, linux-crypto,
	linux-kernel

On Fri, Dec 18, 2020 at 10:34:28AM +0000, Bae, Chang Seok wrote:
> I’m open to drop the macros if there is any better way to define them
> without binutils support.

Yap, make the driver build depend on the binutils version which supports
them.
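
As a rough sketch of that dependency (the AS_HAS_KEYLOCKER symbol and
the exact as-instr probe below are illustrative assumptions, not part
of this series):

config AS_HAS_KEYLOCKER
	# Probe whether the assembler knows the Key Locker mnemonics.
	def_bool $(as-instr,encodekey256 %eax$(comma)%eax)

config CRYPTO_AES_KL
	tristate "AES cipher algorithms (AES-KL)"
	depends on X86_KEYLOCKER && AS_HAS_KEYLOCKER
	select CRYPTO_AES_NI_INTEL

With such a probe in place, the .byte macros in inst.h could be dropped
and the plain mnemonics used in the assembly code instead.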

-- 
Regards/Gruss,
    Boris.

SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-18 10:34     ` Bae, Chang Seok
  2020-12-18 11:00       ` Borislav Petkov
@ 2020-12-18 14:33       ` Peter Zijlstra
  1 sibling, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2020-12-18 14:33 UTC (permalink / raw)
  To: Bae, Chang Seok
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	the arch/x86 maintainers, herbert, Williams, Dan J, Hansen, Dave,
	Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N, linux-crypto,
	linux-kernel

On Fri, Dec 18, 2020 at 10:34:28AM +0000, Bae, Chang Seok wrote:
> 
> > On Dec 18, 2020, at 19:11, Peter Zijlstra <peterz@infradead.org> wrote:
> > 
> > *groan*, so what actual version of binutils is needed and why is this
> > driver important enough to build on ancient crud to warrant all this
> > gunk?
> 
> The new Key Locker instructions look to be added a few month ago [1].
> But the latest binutils release (2.35.1) does not include them yet.
> 
> I’m open to drop the macros if there is any better way to define them
> without binutils support.

It's just a driver, make it depend on binutils having the instructions.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 0/8] x86: Support Intel Key Locker
  2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
                   ` (8 preceding siblings ...)
  2020-12-17 19:10 ` [RFC PATCH 0/8] x86: Support Intel Key Locker Eric Biggers
@ 2020-12-19 18:59 ` Andy Lutomirski
  2020-12-22 19:03   ` Bae, Chang Seok
  9 siblings, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2020-12-19 18:59 UTC (permalink / raw)
  To: Chang S. Bae, Andrew Cooper
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andrew Lutomirski,
	X86 ML, Herbert Xu, Dan Williams, Dave Hansen, Ravi V. Shankar,
	ning.sun, kumar.n.dwarakanath, Linux Crypto Mailing List, LKML

On Wed, Dec 16, 2020 at 9:46 AM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> Key Locker [1][2] is a new security feature available in new Intel CPUs to
> protect data encryption keys for the Advanced Encryption Standard
> algorithm. The protection limits the amount of time an AES key is exposed
> in memory by sealing a key and referencing it with new AES instructions.

I think some fundamental issues need to be worked out before we can
enable key locker upstream at all.

First, how fast is LOADIWKEY?  Does it depend on the mode?  Is it
credible to context switch the wrapping key?

Second, on bare metal, we need to decide whether to use a
hardware-random wrapping key or a software-provided wrapping key.  Both
choices have pros and cons,
and it's not clear to me whether Linux should have a boot-time
parameter, a runtime control, a fixed value, or something else.  If we
use a random key, we need to figure out what to do about S5 and
hibernation.  No matter what we do, we're going to have some issues
with CRIU.

We also need to understand the virtualization situation.  What do we
expect hypervisors to do with Key Locker?  The only obviously
performant way I can see for VMMs to support migration is to use the
same wrapping key fleetwide.  (This is also the only way I can see for
VMMs to manage the wrapping key in a way that a side channel can't
extract it from hypervisor memory.)  But VMMs can't do this without
some degree of cooperation from the guest.  Perhaps we should disable
KL if CPUID.HYPERVISOR is set for now?
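
A minimal sketch of such a check, reusing helpers that already exist
(boot_cpu_has(), setup_clear_cpu_cap()); where exactly it would live in
the series' setup path is left open:

	/* Guest side: no sane wrapping-key coordination with the VMM yet. */
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
		pr_info("x86/keylocker: Disabled in a guest\n");
		setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
		return;
	}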

It's a shame that the spec seems to have some holes in the key
management mechanisms.  It would be very nice if there was a way to
load IWKey from an SGX enclave, and it would also be nice if there was
a way to load an IWKey that is wrapped by a different key.  Also, for
non-random IWKey values, there doesn't seem to be a way for software
(in an enclave or otherwise) to confirm that it's wrapping an AES key
against a particular wrapping key, which seems to severely limit the
ability to safely provision a new wrapped key at runtime.

--Andy

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 0/8] x86: Support Intel Key Locker
  2020-12-19 18:59 ` Andy Lutomirski
@ 2020-12-22 19:03   ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2020-12-22 19:03 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Cooper, Andrew, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	X86 ML, Herbert Xu, Williams, Dan J, Hansen, Dave, Shankar,
	Ravi V, Sun, Ning, Dwarakanath, Kumar N,
	Linux Crypto Mailing List, LKML


> On Dec 20, 2020, at 03:59, Andy Lutomirski <luto@kernel.org> wrote:
> 
> On Wed, Dec 16, 2020 at 9:46 AM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>> 
>> Key Locker [1][2] is a new security feature available in new Intel CPUs to
>> protect data encryption keys for the Advanced Encryption Standard
>> algorithm. The protection limits the amount of time an AES key is exposed
>> in memory by sealing a key and referencing it with new AES instructions.
> 
> I think some fundamental issues need to be worked out before we can
> enable key locker upstream at all.
> 
> First, how fast is LOADIWKEY?  Does it depend on the mode?  Is it
> credible to context switch the wrapping key?

I measured LOADIWKEY at about 110-130 cycles without the hardware
randomization option, and at about 25K cycles with it.

It is not executable in userspace.
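
For reference, a crude in-kernel latency probe around the wrapping-key
load could look like this (load_keylocker() is the helper from this
series; the rdtsc-based timing is only illustrative):

	u64 start, cycles;
	bool loaded;

	local_irq_disable();
	start = rdtsc_ordered();
	loaded = load_keylocker();	/* executes LOADIWKEY */
	cycles = rdtsc_ordered() - start;
	local_irq_enable();

	pr_info("x86/keylocker: LOADIWKEY path: %llu cycles (loaded=%d)\n",
		cycles, loaded);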

> First, on bare metal, we need to decide whether to use a wrapping key
> or a software-provided wrapping key.  Both choices have pros and cons,
> and it's not clear to me whether Linux should have a boot-time
> parameter, a runtime control, a fixed value, or something else.  

It is assumed that all CPUs need to have the same key loaded (at
boot-time).

A software-provided wrapping key is simple to load on every CPU.

With hardware randomization, the key value is unknown to software, so
the backup mechanism is needed to copy it to every CPU.

> If we use a random key, we need to figure out what to do about S5 and
> hibernation.  

The intent is to restore the key from the backup before resuming any
suspended thread. That covers the S3 and S4 (hibernation) sleep states.
The system restarts from S5.

> No matter what we do, we're going to have some issues with CRIU.

That looks to be the case, as long as CRIU cannot fully restore the
wrapping key.

> We also need to understand the virtualization situation.  What do we
> expect hypervisors to do with Key Locker?  The only obviously
> performant way I can see for VMMs to support migration is to use the
> same wrapping key fleetwide.  (This is also the only way I can see for
> VMMs to manage the wrapping key in a way that a side channel can't
> extract it from hypervisor memory.)  But VMMs can't do this without
> some degree of cooperation from the guest.  Perhaps we should disable
> KL if CPUID.HYPERVISOR is set for now?

This is one of the options we considered too.

> It's a shame that the spec seems to have some holes in the key
> management mechanisms.  It would be very nice if there was a way to
> load IWKey from an SGX enclave, and it would also be nice if there was
> a way to load an IWKey that is wrapped by a different key.  Also, for
> non-random IWKey values, there doesn't seem to be a way for software
> (in an enclave or otherwise) to confirm that it's wrapping an AES key
> against a particular wrapping key, which seems to severely limit the
> ability to safely provision a new wrapped key at runtime.

The wrapping key is currently used only for AES keys. Maybe the feature
will be extended in the future.

Thanks,
Chang

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states
  2020-12-16 17:41 ` [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states Chang S. Bae
  2020-12-17 19:10   ` Eric Biggers
@ 2021-01-28 10:34   ` Rafael J. Wysocki
  2021-01-28 16:10     ` Bae, Chang Seok
  1 sibling, 1 reply; 30+ messages in thread
From: Rafael J. Wysocki @ 2021-01-28 10:34 UTC (permalink / raw)
  To: Chang S. Bae
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	the arch/x86 maintainers, Herbert Xu, Dan Williams, Dave Hansen,
	Ravi V. Shankar, Ning Sun, kumar.n.dwarakanath,
	Linux Crypto Mailing List, Linux Kernel Mailing List, Linux PM

On Wed, Dec 16, 2020 at 6:47 PM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> When the system state switches to these sleep states, the internal key gets
> reset. Since this system transition is transparent to userspace, the
> internal key needs to be restored properly.
>
> Key Locker provides a mechanism to back up the internal key in non-volatile
> memory. The kernel requests a backup right after the key loaded at
> boot-time and copies it later when the system wakes up.
>
> The backup during the S5 sleep state is not trusted. It is overwritten by a
> new key at the next boot.
>
> On a system with the S3/4 states, enable the feature only when the backup
> mechanism is supported.
>
> Disable the feature when the copy fails (or the backup corrupts). The
> shutdown is considered too noisy. A new key is considerable only when
> threads can be synchronously suspended.
>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> ---
>  arch/x86/include/asm/keylocker.h | 12 ++++++++
>  arch/x86/kernel/cpu/common.c     | 25 +++++++++++-----
>  arch/x86/kernel/keylocker.c      | 51 ++++++++++++++++++++++++++++++++
>  arch/x86/power/cpu.c             | 34 +++++++++++++++++++++
>  4 files changed, 115 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
> index daf0734a4095..722574c305c2 100644
> --- a/arch/x86/include/asm/keylocker.h
> +++ b/arch/x86/include/asm/keylocker.h
> @@ -6,6 +6,7 @@
>  #ifndef __ASSEMBLY__
>
>  #include <linux/bits.h>
> +#include <asm/msr.h>
>
>  #define KEYLOCKER_CPUID                0x019
>  #define KEYLOCKER_CPUID_EAX_SUPERVISOR BIT(0)
> @@ -25,5 +26,16 @@ void invalidate_keylocker_data(void);
>  #define invalidate_keylocker_data() do { } while (0)
>  #endif
>
> +static inline u64 read_keylocker_backup_status(void)
> +{
> +       u64 status;
> +
> +       rdmsrl(MSR_IA32_IWKEYBACKUP_STATUS, status);
> +       return status;
> +}
> +
> +void backup_keylocker(void);
> +bool copy_keylocker(void);
> +
>  #endif /*__ASSEMBLY__ */
>  #endif /* _ASM_KEYLOCKER_H */
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index d675075848bb..a446d5aff08f 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -463,24 +463,35 @@ __setup("nofsgsbase", x86_nofsgsbase_setup);
>
>  static __always_inline void setup_keylocker(struct cpuinfo_x86 *c)
>  {
> -       bool keyloaded;
> -
>         if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
>                 goto out;
>
>         cr4_set_bits(X86_CR4_KEYLOCKER);
>
>         if (c == &boot_cpu_data) {
> +               bool keyloaded;
> +
>                 if (!check_keylocker_readiness())
>                         goto disable_keylocker;
>
>                 make_keylocker_data();
> -       }
>
> -       keyloaded = load_keylocker();
> -       if (!keyloaded) {
> -               pr_err_once("x86/keylocker: Failed to load internal key\n");
> -               goto disable_keylocker;
> +               keyloaded = load_keylocker();
> +               if (!keyloaded) {
> +                       pr_err("x86/keylocker: Fail to load internal key\n");
> +                       goto disable_keylocker;
> +               }
> +
> +               backup_keylocker();
> +       } else {
> +               bool keycopied;
> +
> +               /* NB: When system wakes up, this path recovers the internal key. */
> +               keycopied = copy_keylocker();
> +               if (!keycopied) {
> +                       pr_err_once("x86/keylocker: Fail to copy internal key\n");
> +                       goto disable_keylocker;
> +               }
>         }
>
>         pr_info_once("x86/keylocker: Activated\n");
> diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
> index e455d806b80c..229875ac80d5 100644
> --- a/arch/x86/kernel/keylocker.c
> +++ b/arch/x86/kernel/keylocker.c
> @@ -5,11 +5,15 @@
>   */
>
>  #include <linux/random.h>
> +#include <linux/acpi.h>
> +#include <linux/delay.h>
>
>  #include <asm/keylocker.h>
>  #include <asm/fpu/types.h>
>  #include <asm/fpu/api.h>
>
> +static bool keybackup_available;
> +
>  bool check_keylocker_readiness(void)
>  {
>         u32 eax, ebx, ecx, edx;
> @@ -21,6 +25,14 @@ bool check_keylocker_readiness(void)
>                 return false;
>         }
>
> +       keybackup_available = (ebx & KEYLOCKER_CPUID_EBX_BACKUP);
> +       /* Internal Key backup is essential with S3/4 states */
> +       if (!keybackup_available &&
> +           (acpi_sleep_state_supported(ACPI_STATE_S3) ||
> +            acpi_sleep_state_supported(ACPI_STATE_S4))) {
> +               pr_debug("x86/keylocker: no key backup support with possible S3/4\n");
> +               return false;
> +       }
>         return true;
>  }
>
> @@ -29,6 +41,7 @@ bool check_keylocker_readiness(void)
>  #define LOADIWKEY_NUM_OPERANDS 3
>
>  static struct key {
> +       bool valid;
>         struct reg_128_bit value[LOADIWKEY_NUM_OPERANDS];
>  } keydata;
>
> @@ -38,11 +51,15 @@ void make_keylocker_data(void)
>
>         for (i = 0; i < LOADIWKEY_NUM_OPERANDS; i++)
>                 get_random_bytes(&keydata.value[i], sizeof(struct reg_128_bit));
> +
> +       keydata.valid = true;
>  }
>
>  void invalidate_keylocker_data(void)
>  {
>         memset(&keydata.value, 0, sizeof(struct reg_128_bit) * LOADIWKEY_NUM_OPERANDS);
> +
> +       keydata.valid = false;
>  }
>
>  #define USE_SWKEY      0
> @@ -69,3 +86,37 @@ bool load_keylocker(void)
>
>         return err ? false : true;
>  }
> +
> +void backup_keylocker(void)
> +{
> +       if (keybackup_available)
> +               wrmsrl(MSR_IA32_COPY_LOCAL_TO_PLATFORM, 1);
> +}
> +
> +#define KEYRESTORE_RETRY       1
> +
> +bool copy_keylocker(void)
> +{
> +       bool copied = false;
> +       int i;
> +
> +       /* Use valid key data when available */
> +       if (keydata.valid)
> +               return load_keylocker();
> +
> +       if (!keybackup_available)
> +               return copied;
> +
> +       wrmsrl(MSR_IA32_COPY_PLATFORM_TO_LOCAL, 1);
> +
> +       for (i = 0; (i <= KEYRESTORE_RETRY) && !copied; i++) {
> +               u64 status;
> +
> +               if (i)
> +                       udelay(1);
> +               rdmsrl(MSR_IA32_COPY_STATUS, status);
> +               copied = status & BIT(0) ? true : false;
> +       }
> +
> +       return copied;
> +}
> diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
> index db1378c6ff26..5412440e7c5c 100644
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -25,6 +25,7 @@
>  #include <asm/cpu.h>
>  #include <asm/mmu_context.h>
>  #include <asm/cpu_device_id.h>
> +#include <asm/keylocker.h>
>
>  #ifdef CONFIG_X86_32
>  __visible unsigned long saved_context_ebx;
> @@ -57,6 +58,38 @@ static void msr_restore_context(struct saved_context *ctxt)
>         }
>  }
>
> +/*
> + * The boot CPU executes this function, while other CPUs restore the key
> + * through the setup path in setup_keylocker().
> + */
> +static void restore_keylocker(void)
> +{
> +       u64 keybackup_status;
> +       bool keycopied;
> +
> +       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER))
> +               return;
> +
> +       keybackup_status = read_keylocker_backup_status();
> +       if (!(keybackup_status & BIT(0))) {
> +               pr_err("x86/keylocker: internal key restoration failed with %s\n",
> +                      (keybackup_status & BIT(2)) ? "read error" : "invalid status");
> +               WARN_ON(1);
> +               goto disable_keylocker;
> +       }

The above conditional could be consolidated a bit by using WARN():

if (WARN(!(keybackup_status & BIT(0)),
         "x86/keylocker: internal key restoration failed with %s\n",
         (keybackup_status & BIT(2)) ? "read error" : "invalid status"))
        goto disable_keylocker;

Apart from this the patch LGTM.

Thanks!

> +
> +       keycopied = copy_keylocker();
> +       if (keycopied)
> +               return;
> +
> +       pr_err("x86/keylocker: internal key copy failure\n");
> +
> +disable_keylocker:
> +       pr_info("x86/keylocker: Disabled with internal key restoration failure\n");
> +       setup_clear_cpu_cap(X86_FEATURE_KEYLOCKER);
> +       cr4_clear_bits(X86_CR4_KEYLOCKER);
> +}
> +
>  /**
>   *     __save_processor_state - save CPU registers before creating a
>   *             hibernation image and before restoring the memory state from it
> @@ -265,6 +298,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
>         mtrr_bp_restore();
>         perf_restore_debug_store();
>         msr_restore_context(ctxt);
> +       restore_keylocker();
>
>         c = &cpu_data(smp_processor_id());
>         if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states
  2021-01-28 10:34   ` Rafael J. Wysocki
@ 2021-01-28 16:10     ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2021-01-28 16:10 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Andy Lutomirski,
	the arch/x86 maintainers, Herbert Xu, Williams, Dan J, Hansen,
	Dave, Shankar, Ravi V, Sun, Ning, Dwarakanath, Kumar N,
	Linux Crypto Mailing List, Linux Kernel Mailing List, Linux PM

On Jan 28, 2021, at 02:34, Rafael J. Wysocki <rafael@kernel.org> wrote:
> On Wed, Dec 16, 2020 at 6:47 PM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>> 
>> +       keybackup_status = read_keylocker_backup_status();
>> +       if (!(keybackup_status & BIT(0))) {
>> +               pr_err("x86/keylocker: internal key restoration failed with %s\n",
>> +                      (keybackup_status & BIT(2)) ? "read error" : "invalid status");
>> +               WARN_ON(1);
>> +               goto disable_keylocker;
>> +       }
> 
> The above conditional could be consolidated a bit by using WARN():
> 
> if (WARN(!(keybackup_status & BIT(0)), "x86/keylocker: internal key
> restoration failed with %s\n",
>        (keybackup_status & BIT(2)) ? "read error" : "invalid status")
>                goto disable_keylocker;
> 
> Apart from this the patch LGTM.

Thanks for the review! I will make this change on my next revision.

Chang

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-17 10:16   ` Ard Biesheuvel
@ 2021-05-14 20:36     ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2021-05-14 20:36 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Herbert Xu, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Andy Lutomirski, X86 ML, Williams, Dan J, Hansen, Dave, Shankar,
	Ravi V, Sun, Ning, Dwarakanath, Kumar N,
	Linux Crypto Mailing List, Linux Kernel Mailing List

On Dec 17, 2020, at 02:16, Ard Biesheuvel <ardb@kernel.org> wrote:
> 
> We will need to refactor this - cloning the entire driver and just
> replacing aes-ni with aes-kl is a maintenance nightmare.
> 
> Please refer to the arm64 tree for an example how to combine chaining
> mode routines implemented in assembler with different implementations
> of the core AES transforms (aes-modes.S is combined with either
> aes-ce.S or aes-neon.S to produce two different drivers)

I just post v2 [1]. PATCH9 [2] refactors some glue code out of AES-NI to
prepare AES-KL.

[ The past few months were not spent entirely on this, but it took a
  while to address comments and to debug test cases. ]

> ...
>> diff --git a/arch/x86/crypto/aeskl-intel_glue.c b/arch/x86/crypto/aeskl-intel_glue.c
>> new file mode 100644
>> index 000000000000..9e3f900ad4af
>> --- /dev/null
>> +++ b/arch/x86/crypto/aeskl-intel_glue.c
>> @@ -0,0 +1,697 @@
> ...
>> +static void aeskl_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
>> +{
>> +       struct crypto_aes_ctx *ctx = aes_ctx(crypto_tfm_ctx(tfm));
>> +       int err = 0;
>> +
>> +       if (!crypto_simd_usable())
>> +               return;
>> +
> 
> It is clear that AES-KL cannot be handled by a fallback algorithm,
> given that the key is no longer available. But that doesn't mean that
> you can just give up like this.
> 
> This basically implies that we cannot expose the cipher interface at
> all, and so AES-KL can only be used by callers that use the
> asynchronous interface, which rules out 802.11, s/w kTLS, macsec and
> kerberos.

In v2, I changed it not to expose the synchronous interface.

> This ties in to a related discussion that is going on about when to
> allow kernel mode SIMD. I am currently investigating whether we can
> change the rules a bit so that crypto_simd_usable() is guaranteed to
> be true.

I saw your series [3]. Yes, I’m very interested in it.

>> +static int ecb_encrypt(struct skcipher_request *req)
>> +{
>> +       struct crypto_skcipher *tfm;
>> +       struct crypto_aes_ctx *ctx;
>> +       struct skcipher_walk walk;
>> +       unsigned int nbytes;
>> +       int err;
>> +
>> +       tfm = crypto_skcipher_reqtfm(req);
>> +       ctx = aes_ctx(crypto_skcipher_ctx(tfm));
>> +
>> +       err = skcipher_walk_virt(&walk, req, true);
>> +       if (err)
>> +               return err;
>> +
>> +       while ((nbytes = walk.nbytes)) {
>> +               unsigned int len = nbytes & AES_BLOCK_MASK;
>> +               const u8 *src = walk.src.virt.addr;
>> +               u8 *dst = walk.dst.virt.addr;
>> +
>> +               kernel_fpu_begin();
>> +               if (unlikely(ctx->key_length == AES_KEYSIZE_192))
>> +                       aesni_ecb_enc(ctx, dst, src, len);
> 
> Could we please use a proper fallback here, and relay the entire request?

I made a change like this in v2:

+static int ecb_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+
+	if (likely(keylength(crypto_skcipher_ctx(tfm)) != AES_KEYSIZE_192))
+		return ecb_crypt_common(req, aeskl_ecb_enc);
+	else
+		return ecb_crypt_common(req, aesni_ecb_enc);
+}

>> +               else
>> +                       err = __aeskl_ecb_enc(ctx, dst, src, len);
>> +               kernel_fpu_end();
>> +
>> +               if (err) {
>> +                       skcipher_walk_done(&walk, nbytes & (AES_BLOCK_SIZE - 1));
> 
> This doesn't look right. The skcipher scatterlist walker may have a
> live kmap() here so you can't just return.

I’ve added a preparatory patch [4] to deal with cases like this.

Thanks,
Chang

[1] https://lore.kernel.org/lkml/20210514201508.27967-1-chang.seok.bae@intel.com/
[2] https://lore.kernel.org/lkml/20210514201508.27967-10-chang.seok.bae@intel.com/
[3] https://lore.kernel.org/lkml/20201218170106.23280-1-ardb@kernel.org/
[4] https://lore.kernel.org/lkml/20210514201508.27967-9-chang.seok.bae@intel.com/

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions
  2020-12-17 20:54   ` Andy Lutomirski
@ 2021-05-14 20:48     ` Bae, Chang Seok
  0 siblings, 0 replies; 30+ messages in thread
From: Bae, Chang Seok @ 2021-05-14 20:48 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, X86 ML,
	Herbert Xu, Williams, Dan J, Hansen, Dave, Shankar, Ravi V, Sun,
	Ning, Dwarakanath, Kumar N, Linux Crypto Mailing List, LKML

First of all, my apologies for the delay. I hope v2 [1] now gives this
some momentum.

On Dec 17, 2020, at 12:54, Andy Lutomirski <luto@kernel.org> wrote:
> 
> What does this patch *do*?

It adds a new AES implementation that can be a replacement for the AES-NI
version.

> IKL gives a few special key slots that have certain restrictions and
> certain security properties.  

I think this can be viewed as one implementation of envelope encryption.
The Internal Wrapping Key (IWKey) in the spec is effectively a
key-encryption key. Each CPU holds one wrapping-key state, which is used
to encode as many AES keys (data-encryption keys) as a user wants. An
encoded form may also convey access restrictions.
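
As a rough sketch of that flow (encode_aes_key() and aeskl_enc_block()
are hypothetical names used for illustration; only load_keylocker()
exists in the series):

	/* Boot, per CPU: LOADIWKEY installs the wrapping key (KEK). */
	load_keylocker();

	/* setkey(): ENCODEKEY128/256 wraps the AES key (DEK) into a handle,
	 * after which the raw key is no longer needed. */
	encode_aes_key(handle, raw_key, keylen);
	memzero_explicit(raw_key, keylen);

	/* encrypt()/decrypt(): AESENC128KL and friends reference the handle,
	 * never the raw key. */
	aeskl_enc_block(handle, dst, src);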

> What can you use them for?  With this series installed, what is the
> user-visible effect?  Is there a new API?  Do you use them with the netlink
> user crypto interface?  Do you use them for encrypting disks?  Swap?  

No new API is added here.

No observable effect is expected for end users. AES Key Locker provides
the same data-transformation function and handles the chaining modes at
the same speed (or a bit faster).

As a replacement for AES-NI, the usage is pretty much the same.
Admittedly, this instruction set has some limitations, e.g., no 192-bit
key support.

Since it can protect AES keys during the transformation, one may
consider using it for bulk data. So, yes, block disk encryption, for
instance. For testing purposes, I was able to run it with dm-crypt [2].

> How?  How do you allocate, reset, and free keys?  Who has permissions to use
> them?

IWKey (or KEK) is loaded only in kernel mode. The value is randomized.

FWIW, the code intentionally sets a restriction on the encoded form.
Once an AES key is encoded, the AES instructions referencing it have to
be executed in kernel mode.
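
For illustration, the restriction is requested through the ENCODEKEY128
source operand; the bit meaning below is my reading of the Key Locker
spec and should be double-checked, and the macro is the one this series
adds to asm/inst.h:

	/* Assumed per spec: bit 0 of the source operand asks for a
	 * CPL0-only handle, i.e. usable only in kernel mode. */
	movl	$1, %eax
	/* The AES key is supplied and the handle returned in XMM registers. */
	ENCODEKEY128 %eax, %eax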

Thanks,
Chang

[1] https://lore.kernel.org/lkml/20210514201508.27967-1-chang.seok.bae@intel.com/
[2] https://gitlab.com/cryptsetup/cryptsetup/-/wikis/DMCrypt


^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2021-05-14 20:48 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-16 17:41 [RFC PATCH 0/8] x86: Support Intel Key Locker Chang S. Bae
2020-12-16 17:41 ` [RFC PATCH 1/8] x86/cpufeature: Enumerate Key Locker feature Chang S. Bae
2020-12-16 17:41 ` [RFC PATCH 2/8] x86/cpu: Load Key Locker internal key at boot-time Chang S. Bae
2020-12-16 17:41 ` [RFC PATCH 3/8] x86/msr-index: Add MSRs for Key Locker internal key Chang S. Bae
2020-12-16 17:41 ` [RFC PATCH 4/8] x86/power: Restore Key Locker internal key from the ACPI S3/4 sleep states Chang S. Bae
2020-12-17 19:10   ` Eric Biggers
2020-12-18  1:00     ` Bae, Chang Seok
2021-01-28 10:34   ` Rafael J. Wysocki
2021-01-28 16:10     ` Bae, Chang Seok
2020-12-16 17:41 ` [RFC PATCH 5/8] x86/cpu: Add a config option and a chicken bit for Key Locker Chang S. Bae
2020-12-16 17:41 ` [RFC PATCH 6/8] selftests/x86: Test Key Locker internal key maintenance Chang S. Bae
2020-12-18  9:59   ` Peter Zijlstra
2020-12-18 10:43     ` Bae, Chang Seok
2020-12-16 17:41 ` [RFC PATCH 7/8] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
2020-12-17 10:16   ` Ard Biesheuvel
2021-05-14 20:36     ` Bae, Chang Seok
2020-12-17 20:54   ` Andy Lutomirski
2021-05-14 20:48     ` Bae, Chang Seok
2020-12-17 20:58   ` [NEEDS-REVIEW] " Dave Hansen
2020-12-18  9:56     ` Peter Zijlstra
2020-12-18 10:11   ` Peter Zijlstra
2020-12-18 10:34     ` Bae, Chang Seok
2020-12-18 11:00       ` Borislav Petkov
2020-12-18 14:33       ` Peter Zijlstra
2020-12-16 17:41 ` [RFC PATCH 8/8] x86/cpu: Support the hardware randomization option for Key Locker internal key Chang S. Bae
2020-12-17 19:10 ` [RFC PATCH 0/8] x86: Support Intel Key Locker Eric Biggers
2020-12-17 20:07   ` Dan Williams
2020-12-18  1:08   ` Bae, Chang Seok
2020-12-19 18:59 ` Andy Lutomirski
2020-12-22 19:03   ` Bae, Chang Seok
