From: "Rafael J. Wysocki" <rafael@kernel.org>
To: "Chang S. Bae" <chang.seok.bae@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Borislav Petkov <bp@suse.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Ingo Molnar <mingo@kernel.org>, Andy Lutomirski <luto@kernel.org>,
	"the arch/x86 maintainers" <x86@kernel.org>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux Crypto Mailing List <linux-crypto@vger.kernel.org>,
	Eric Biggers <ebiggers@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>,
	charishma1.gairuboyina@intel.com, kumar.n.dwarakanath@intel.com,
	lalithambika.krishnakumar@intel.com,
	"Ravi V. Shankar" <ravi.v.shankar@intel.com>,
	Linux PM <linux-pm@vger.kernel.org>
Subject: Re: [PATCH v4 08/13] x86/power/keylocker: Restore internal wrapping key from the ACPI S3/4 sleep states
Date: Fri, 17 Dec 2021 16:42:46 +0100	[thread overview]
Message-ID: <CAJZ5v0gbePA+rR9gMRnaJrUGS1MwF6UQzxrFZChy5i=11tgz-A@mail.gmail.com> (raw)
In-Reply-To: <20211214005212.20588-9-chang.seok.bae@intel.com>

First, I would change the subject to "x86/PM/keylocker: Restore
internal wrapping key on resume from ACPI S3/S4".

On Tue, Dec 14, 2021 at 2:00 AM Chang S. Bae <chang.seok.bae@intel.com> wrote:
>
> When the system state switches to these sleep states, the internal
> wrapping key gets reset in the CPU state.

And here I would say

"When the system enters the ACPI S3 or S4 sleep state, the internal
wrapping key is discarded."

>
> The primary use case for the feature is bare metal dm-crypt. The key needs
> to be restored properly on wakeup, as dm-crypt does not prompt for the key
> on resume from suspend. Even though it does prompt for the key to unlock
> the volume where the hibernation image is stored, it still expects to reuse
> the key handles within the hibernation image once it is loaded. So the goal
> is to meet dm-crypt's expectation that the key handles in the suspend image
> remain valid after resume from an S-state.
>
> Key Locker provides a mechanism to back up the internal wrapping key in
> non-volatile storage. The kernel requests a backup right after the key is
> loaded at boot time. It is copied back to each CPU upon wakeup.
>
> While the backup may be maintained in NVM across the S5 and G3 "off"
> states, this is not architecturally guaranteed, nor is it relied upon by
> dm-crypt, which prompts for the key each time the volume is started.
>
> Key Locker needs to be disabled entirely if the backup mechanism is not
> available, unless CONFIG_SUSPEND=n, because dm-crypt otherwise requires
> the backup to be available.
>
> In the event of a key restore failure the kernel proceeds with an
> initialized IWKey state. This has the effect of invalidating any key
> handles that might be present in a suspend-image. When this happens
> dm-crypt will see I/O errors resulting from error returns from
> crypto_skcipher_{en,de}crypt(). While this will disrupt operations in the
> current boot, data is not at risk, and access is restored at the next reboot,
> which creates new handles relative to the then-current IWKey.
>
> Manage a feature-specific flag to communicate with the crypto
> implementation. This ensures that the AES-KL instructions stop being used
> upon a key restore failure, while the feature itself is not turned off.
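
(Just to make that mechanism concrete -- a minimal caller-side sketch;
only valid_keylocker() is taken from this patch, the other names are made
up for illustration:)

    /* Hypothetical AES-KL glue path consulting the soft-disable flag. */
    static int aeskl_do_crypt(const u8 *handle, u8 *buf, unsigned int len)
    {
            /*
             * After a failed IWKey restore the flag goes false, so the
             * glue code stops issuing AES-KL instructions even though
             * the CPU feature itself remains enabled.
             */
            if (!valid_keylocker())
                    return -ENODEV;

            return aeskl_crypt_with_handle(handle, buf, len); /* made-up helper */
    }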
>
> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> Cc: x86@kernel.org
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-pm@vger.kernel.org
> ---
> Changes from v3:
> * Fix the build issue with !X86_KEYLOCKER. (Eric Biggers)
>
> Changes from RFC v2:
> * Change the backup key failure handling. (Dan Williams)
>
> Changes from RFC v1:
> * Folded the warning message into the if condition check.
>   (Rafael Wysocki)
> * Rebased on the changes of the previous patches.
> * Added error code for key restoration failures.
> * Moved the restore helper.
> * Added function descriptions.
> ---
>  arch/x86/include/asm/keylocker.h |   4 +
>  arch/x86/kernel/keylocker.c      | 124 ++++++++++++++++++++++++++++++-
>  arch/x86/power/cpu.c             |   2 +
>  3 files changed, 128 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/keylocker.h b/arch/x86/include/asm/keylocker.h
> index 820ac29c06d9..c1d27fb5a1c3 100644
> --- a/arch/x86/include/asm/keylocker.h
> +++ b/arch/x86/include/asm/keylocker.h
> @@ -32,9 +32,13 @@ struct iwkey {
>  #ifdef CONFIG_X86_KEYLOCKER
>  void setup_keylocker(struct cpuinfo_x86 *c);
>  void destroy_keylocker_data(void);
> +void restore_keylocker(void);
> +extern bool valid_keylocker(void);
>  #else
>  #define setup_keylocker(c) do { } while (0)
>  #define destroy_keylocker_data() do { } while (0)
> +#define restore_keylocker() do { } while (0)
> +static inline bool valid_keylocker(void) { return false; }
>  #endif
>
>  #endif /*__ASSEMBLY__ */
> diff --git a/arch/x86/kernel/keylocker.c b/arch/x86/kernel/keylocker.c
> index 87d775a65716..ff0e012e3dd5 100644
> --- a/arch/x86/kernel/keylocker.c
> +++ b/arch/x86/kernel/keylocker.c
> @@ -11,11 +11,26 @@
>  #include <asm/fpu/api.h>
>  #include <asm/keylocker.h>
>  #include <asm/tlbflush.h>
> +#include <asm/msr.h>
>
>  static __initdata struct keylocker_setup_data {
> +       bool initialized;
>         struct iwkey key;
>  } kl_setup;
>
> +/*
> + * This flag is set with IWKey load. When the key restore fails, it is
> + * reset. This restore state is exported to the crypto library, then AES-KL
> + * will not be used there. So, the feature is soft-disabled with this flag.
> + */
> +static bool valid_kl;
> +
> +bool valid_keylocker(void)
> +{
> +       return valid_kl;
> +}
> +EXPORT_SYMBOL_GPL(valid_keylocker);
> +
>  static void __init generate_keylocker_data(void)
>  {
>         get_random_bytes(&kl_setup.key.integrity_key,  sizeof(kl_setup.key.integrity_key));
> @@ -25,6 +40,8 @@ static void __init generate_keylocker_data(void)
>  void __init destroy_keylocker_data(void)
>  {
>         memset(&kl_setup.key, KEY_DESTROY, sizeof(kl_setup.key));
> +       kl_setup.initialized = true;
> +       valid_kl = true;
>  }
>
>  static void __init load_keylocker(void)
> @@ -34,6 +51,27 @@ static void __init load_keylocker(void)
>         kernel_fpu_end();
>  }
>
> +/**
> + * copy_keylocker - Copy the internal wrapping key from the backup.
> + *
> + * Request hardware to copy the key in non-volatile storage to the CPU
> + * state.
> + *
> + * Returns:    -EBUSY if the copy fails, 0 if successful.
> + */
> +static int copy_keylocker(void)
> +{
> +       u64 status;
> +
> +       wrmsrl(MSR_IA32_COPY_IWKEY_TO_LOCAL, 1);
> +
> +       rdmsrl(MSR_IA32_IWKEY_COPY_STATUS, status);
> +       if (status & BIT(0))
> +               return 0;
> +       else
> +               return -EBUSY;
> +}
> +
>  /**
>   * setup_keylocker - Enable the feature.
>   * @c:         A pointer to struct cpuinfo_x86
> @@ -49,6 +87,7 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
>
>         if (c == &boot_cpu_data) {
>                 u32 eax, ebx, ecx, edx;
> +               bool backup_available;
>
>                 cpuid_count(KEYLOCKER_CPUID, 0, &eax, &ebx, &ecx, &edx);
>                 /*
> @@ -62,10 +101,49 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
>                         goto disable;
>                 }
>
> +               backup_available = (ebx & KEYLOCKER_CPUID_EBX_BACKUP) ? true : false;

Why not

backup_available = !!(ebx & KEYLOCKER_CPUID_EBX_BACKUP);
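
(Both spellings produce the same result; !! is just the usual kernel idiom
for collapsing a masked bit into a 0/1 value. A standalone sketch, with the
bit position chosen arbitrarily here for illustration, not taken from the
real CPUID.0x19:EBX definition:)

    #include <stdbool.h>
    #include <stdio.h>

    /* Placeholder value for illustration only. */
    #define KEYLOCKER_CPUID_EBX_BACKUP (1u << 4)

    int main(void)
    {
            unsigned int ebx = 0x10;  /* pretend CPUID output with the bit set */

            bool ternary   = (ebx & KEYLOCKER_CPUID_EBX_BACKUP) ? true : false;
            bool bang_bang = !!(ebx & KEYLOCKER_CPUID_EBX_BACKUP);

            /* Both print 1: the double negation already yields a clean bool. */
            printf("%d %d\n", ternary, bang_bang);
            return 0;
    }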

Apart from this it looks OK, so with the above addressed, please feel
free to add

Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

to this patch.

> +               /*
> +                * The internal wrapping key in CPU state is volatile in
> +                * S3/4 states. So ensure the backup capability along with
> +                * S-states.
> +                */
> +               if (!backup_available && IS_ENABLED(CONFIG_SUSPEND)) {
> +                       pr_debug("x86/keylocker: No key backup support with possible S3/4.\n");
> +                       goto disable;
> +               }
> +
>                 generate_keylocker_data();
> -       }
> +               load_keylocker();
>
> -       load_keylocker();
> +               /* Backup an internal wrapping key in non-volatile media. */
> +               if (backup_available)
> +                       wrmsrl(MSR_IA32_BACKUP_IWKEY_TO_PLATFORM, 1);
> +       } else {
> +               int rc;
> +
> +               /*
> +                * Load the internal wrapping key directly when available
> +                * in memory, which is only possible at boot-time.
> +                *
> +                * NB: When system wakes up, this path also recovers the
> +                * internal wrapping key.
> +                */
> +               if (!kl_setup.initialized) {
> +                       load_keylocker();
> +               } else if (valid_kl) {
> +                       rc = copy_keylocker();
> +                       /*
> +                        * The boot CPU was successful but the key copy
> +                        * fails here. Then, the subsequent feature use
> +                        * will have inconsistent keys and failures. So,
> +                        * invalidate the feature via the flag.
> +                        */
> +                       if (rc) {
> +                               valid_kl = false;
> +                               pr_err_once("x86/keylocker: Invalid copy status (rc: %d).\n", rc);
> +                       }
> +               }
> +       }
>
>         pr_info_once("x86/keylocker: Enabled.\n");
>         return;
> @@ -77,3 +155,45 @@ void __ref setup_keylocker(struct cpuinfo_x86 *c)
>         /* Make sure the feature disabled for kexec-reboot. */
>         cr4_clear_bits(X86_CR4_KEYLOCKER);
>  }
> +
> +/**
> + * restore_keylocker - Restore the internal wrapping key.
> + *
> + * The boot CPU executes this while other CPUs restore it through the setup
> + * function.
> + */
> +void restore_keylocker(void)
> +{
> +       u64 backup_status;
> +       int rc;
> +
> +       if (!cpu_feature_enabled(X86_FEATURE_KEYLOCKER) || !valid_kl)
> +               return;
> +
> +       /*
> +        * The IA32_IWKEYBACKUP_STATUS MSR contains a bitmap that indicates
> +        * a valid backup if bit 0 is set and a read (or write) error if
> +        * bit 2 is set.
> +        */
> +       rdmsrl(MSR_IA32_IWKEY_BACKUP_STATUS, backup_status);
> +       if (backup_status & BIT(0)) {
> +               rc = copy_keylocker();
> +               if (rc)
> +                       pr_err("x86/keylocker: Invalid copy state (rc: %d).\n", rc);
> +               else
> +                       return;
> +       } else {
> +               pr_err("x86/keylocker: The key backup access failed with %s.\n",
> +                      (backup_status & BIT(2)) ? "read error" : "invalid status");
> +       }
> +
> +       /*
> +        * Now the backup key is not available. Invalidate the feature via
> +        * the flag to avoid any subsequent use. But keep the feature with
> +        * zero IWKeys instead of disabling it. The current users will see
> +        * key handle integrity failure but that's because of the internal
> +        * key change.
> +        */
> +       pr_err("x86/keylocker: Failed to restore internal wrapping key.\n");
> +       valid_kl = false;
> +}
> diff --git a/arch/x86/power/cpu.c b/arch/x86/power/cpu.c
> index 9f2b251e83c5..1a290f529c73 100644
> --- a/arch/x86/power/cpu.c
> +++ b/arch/x86/power/cpu.c
> @@ -25,6 +25,7 @@
>  #include <asm/cpu.h>
>  #include <asm/mmu_context.h>
>  #include <asm/cpu_device_id.h>
> +#include <asm/keylocker.h>
>
>  #ifdef CONFIG_X86_32
>  __visible unsigned long saved_context_ebx;
> @@ -262,6 +263,7 @@ static void notrace __restore_processor_state(struct saved_context *ctxt)
>         mtrr_bp_restore();
>         perf_restore_debug_store();
>         msr_restore_context(ctxt);
> +       restore_keylocker();
>
>         c = &cpu_data(smp_processor_id());
>         if (cpu_has(c, X86_FEATURE_MSR_IA32_FEAT_CTL))
> --
> 2.17.1
>


Thread overview: 23+ messages
2021-12-14  0:51 [PATCH v4 00/13] x86: Support Key Locker Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 01/13] Documentation/x86: Document " Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 02/13] x86/cpufeature: Enumerate Key Locker feature Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 03/13] x86/insn: Add Key Locker instructions to the opcode map Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 04/13] x86/asm: Add a wrapper function for the LOADIWKEY instruction Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 05/13] x86/msr-index: Add MSRs for Key Locker internal wrapping key Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 06/13] x86/keylocker: Define Key Locker CPUID leaf Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 07/13] x86/cpu/keylocker: Load an internal wrapping key at boot-time Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 08/13] x86/power/keylocker: Restore internal wrapping key from the ACPI S3/4 sleep states Chang S. Bae
2021-12-17 15:42   ` Rafael J. Wysocki [this message]
2021-12-22  4:58     ` Bae, Chang Seok
2021-12-14  0:52 ` [PATCH v4 09/13] x86/cpu: Add a configuration and command line option for Key Locker Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 10/13] crypto: x86/aes - Prepare for a new AES implementation Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 11/13] crypto: x86/aes-kl - Support AES algorithm using Key Locker instructions Chang S. Bae
2021-12-24 17:42   ` Andy Lutomirski
2022-01-07 18:06     ` Bae, Chang Seok
2021-12-14  0:52 ` [PATCH v4 12/13] crypto: x86/aes-kl - Support CBC mode Chang S. Bae
2021-12-14  0:52 ` [PATCH v4 13/13] crypto: x86/aes-kl - Support XTS mode Chang S. Bae
2021-12-16  1:09 ` [PATCH v4 00/13] x86: Support Key Locker Eric Biggers
2022-01-05 21:55   ` Bae, Chang Seok
2022-01-06  5:07     ` Eric Biggers
2022-01-06  6:13       ` Bae, Chang Seok
2022-01-06 16:25       ` [dm-devel] " Milan Broz
