From: "Chang S. Bae" <chang.seok.bae@intel.com>
To: linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    dm-devel@redhat.com
Cc: ebiggers@kernel.org, elliott@hpe.com, gmazyland@gmail.com,
    luto@kernel.org, dave.hansen@linux.intel.com, tglx@linutronix.de,
    bp@alien8.de, mingo@kernel.org, x86@kernel.org,
    herbert@gondor.apana.org.au, ardb@kernel.org, dan.j.williams@intel.com,
    bernie.keany@intel.com, charishma1.gairuboyina@intel.com,
    lalithambika.krishnakumar@intel.com, nhuck@google.com,
    chang.seok.bae@intel.com, Ingo Molnar, "H. Peter Anvin",
    "Rafael J. Wysocki", Peter Zijlstra
Subject: [PATCH v8 04/12] x86/asm: Add a wrapper function for the LOADIWKEY instruction
Date: Sat, 3 Jun 2023 08:22:19 -0700
Message-Id: <20230603152227.12335-5-chang.seok.bae@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230603152227.12335-1-chang.seok.bae@intel.com>
References: <20230524165717.14062-1-chang.seok.bae@intel.com>
 <20230603152227.12335-1-chang.seok.bae@intel.com>

Key Locker introduces a CPU-internal wrapping key that encodes a user key
into a key handle. The key handle is then referenced instead of the
plaintext key.

LOADIWKEY loads a wrapping key into software-inaccessible CPU state. It
operates only in kernel mode.

The kernel will use this to load a new key at boot time. Establish a
wrapper to prepare for this use. Also, define struct iwkey to pass the key
value to it.

Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: "Rafael J. Wysocki"
Cc: Peter Zijlstra
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
Changes from v6:
* Massage the changelog -- clarify the reason and the changes a bit.

Changes from v5:
* Fix a typo: kernel_cpu_begin() -> kernel_fpu_begin()

Changes from RFC v2:
* Separate out the code as a new patch.
* Improve the usability with the new struct as an argument. (Dan Williams)

Note, Dan wondered if:

	WARN_ON(!irq_fpu_usable());

would be appropriate in the load_xmm_iwkey() function.
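For reference, here is a minimal sketch of the boot-time call pattern this
wrapper is intended for. It is illustrative only and not part of this
patch: the load_keylocker() name and the use of get_random_bytes() for key
generation are assumptions, not code from this series; only
load_xmm_iwkey() and struct iwkey come from the patch below.

	#include <linux/random.h>
	#include <linux/string.h>
	#include <asm/fpu/api.h>
	#include <asm/special_insns.h>

	/* Hypothetical boot-time caller, for illustration only. */
	static void load_keylocker(void)
	{
		struct iwkey wrapping_key;

		/* Generate a random wrapping key (illustrative key source). */
		get_random_bytes(&wrapping_key, sizeof(wrapping_key));

		/* LOADIWKEY clobbers XMM registers, so run it in a kernel FPU region. */
		kernel_fpu_begin();
		load_xmm_iwkey(&wrapping_key);
		kernel_fpu_end();

		/* Flush the plaintext key from memory once the CPU holds it. */
		memzero_explicit(&wrapping_key, sizeof(wrapping_key));
	}

The final memzero_explicit() mirrors the register flush inside
load_xmm_iwkey(): once LOADIWKEY has executed, no plaintext copy of the
wrapping key should remain in software-visible state.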
---
 arch/x86/include/asm/keylocker.h     | 25 ++++++++++++++++++++++
 arch/x86/include/asm/special_insns.h | 32 ++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)
 create mode 100644 arch/x86/include/asm/keylocker.h

diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
new file mode 100644
index 000000000000..9b3bec452b31
--- /dev/null
+++ b/arch/x86/include/asm/keylocker.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_KEYLOCKER_H
+#define _ASM_KEYLOCKER_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/fpu/types.h>
+
+/**
+ * struct iwkey - A temporary wrapping key storage.
+ * @integrity_key:	A 128-bit key to check that key handles have not
+ *			been tampered with.
+ * @encryption_key:	A 256-bit encryption key used in
+ *			wrapping/unwrapping a clear text key.
+ *
+ * This storage should be flushed immediately after being loaded.
+ */
+struct iwkey {
+	struct reg_128_bit integrity_key;
+	struct reg_128_bit encryption_key[2];
+};
+
+#endif /*__ASSEMBLY__ */
+#endif /* _ASM_KEYLOCKER_H */
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index de48d1389936..dd2d8b40fce3 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <asm/keylocker.h>
 
 /*
  * The compiler should not reorder volatile asm statements with respect to each
@@ -283,6 +284,37 @@ static __always_inline void tile_release(void)
 	asm volatile(".byte 0xc4, 0xe2, 0x78, 0x49, 0xc0");
 }
 
+/**
+ * load_xmm_iwkey - Load a CPU-internal wrapping key
+ * @key: A struct iwkey pointer.
+ *
+ * Load @key to XMMs, do LOADIWKEY, then flush the XMM registers.
+ * The caller is responsible for kernel_fpu_begin().
+ */
+static inline void load_xmm_iwkey(struct iwkey *key)
+{
+	struct reg_128_bit zeros = { 0 };
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %1, %%xmm1; movdqu %2, %%xmm2;"
+		      :: "m"(key->integrity_key), "m"(key->encryption_key[0]),
+			 "m"(key->encryption_key[1]));
+
+	/*
+	 * LOADIWKEY %xmm1,%xmm2
+	 *
+	 * EAX and XMM0 are implicit operands. Load a key value
+	 * from XMM0-2 to a software-invisible CPU state. With zero
+	 * in EAX, the CPU does not do hardware randomization and
+	 * the key backup is allowed.
+	 *
+	 * This instruction is supported by binutils >= 2.36.
+	 */
+	asm volatile (".byte 0xf3,0x0f,0x38,0xdc,0xd1" :: "a"(0));
+
+	asm volatile ("movdqu %0, %%xmm0; movdqu %0, %%xmm1; movdqu %0, %%xmm2;"
+		      :: "m"(zeros));
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_SPECIAL_INSNS_H */
-- 
2.17.1