From: Borislav Petkov <bp@suse.de>
To: speck@linutronix.de
Subject: [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
Date: Tue, 22 Oct 2019 18:51:12 +0200
Message-ID: <20191022165112.GK31458@zn.tnic>
In-Reply-To: <cover.1571688957.git.pawan.kumar.gupta@linux.intel.com>
On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
> @@ -268,6 +270,110 @@ static int __init mds_cmdline(char *str)
> }
> early_param("mds", mds_cmdline);
>
> +#undef pr_fmt
> +#define pr_fmt(fmt) "TAA: " fmt
> +
> +/* Default mitigation for TAA-affected CPUs */
> +static enum taa_mitigations taa_mitigation __ro_after_init = TAA_MITIGATION_VERW;
> +static bool taa_nosmt __ro_after_init;
> +
> +static const char * const taa_strings[] = {
> + [TAA_MITIGATION_OFF] = "Vulnerable",
> + [TAA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
> + [TAA_MITIGATION_VERW] = "Mitigation: Clear CPU buffers",
> + [TAA_MITIGATION_TSX_DISABLE] = "Mitigation: TSX disabled",
> +};
> +
> +static void __init taa_select_mitigation(void)
> +{
> + u64 ia32_cap = x86_read_arch_cap_msr();
Do that MSR access...
> +
> + if (!boot_cpu_has_bug(X86_BUG_TAA)) {
> + taa_mitigation = TAA_MITIGATION_OFF;
> + return;
> + }
... here.
> +
> + /*
> + * As X86_BUG_TAA=1, TSX feature is supported by the hardware. If
> + * TSX was disabled (X86_FEATURE_RTM=0) earlier during tsx_init().
That sentence is having trouble saying whatever it is trying to say. :)
> + * Select TSX_DISABLE as mitigation.
> + *
> + * This check is ahead of mitigations=off and tsx_async_abort=off
> + * because when TSX is disabled mitigation is already in place. This
> + * ensures sysfs doesn't show "Vulnerable" when TSX is disabled.
> + */
So make that check first and then merge the other two:
if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
taa_mitigation = TAA_MITIGATION_OFF;
return;
}
> + if (!boot_cpu_has(X86_FEATURE_RTM)) {
> + taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
> + pr_info("%s\n", taa_strings[taa_mitigation]);
> + return;
> + }
> +
> + /* All mitigations turned off from cmdline (mitigations=off) */
> + if (cpu_mitigations_off()) {
> + taa_mitigation = TAA_MITIGATION_OFF;
> + return;
> + }
> +
> + /* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
> + if (taa_mitigation == TAA_MITIGATION_OFF) {
goto out;
and you slap an out label above the pr_info at the end of this function.
> + pr_info("%s\n", taa_strings[taa_mitigation]);
> + return;
> + }
> +
> + if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
> + taa_mitigation = TAA_MITIGATION_VERW;
> + else
> + taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> +
> + /*
> + * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
> + * A microcode update fixes this behavior to clear CPU buffers.
> + * Microcode update also adds support for MSR_IA32_TSX_CTRL which
> + * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
> + *
> + * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
> + * update is required.
> + */
> + if ((ia32_cap & ARCH_CAP_MDS_NO) &&
> + !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
> + taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;
> +
> + /*
> + * TSX is enabled, select alternate mitigation for TAA which is
> + * same as MDS. Enable MDS static branch to clear CPU buffers.
> + *
> + * For guests that can't determine whether the correct microcode is
> + * present on host, enable the mitigation for UCODE_NEEDED as well.
> + */
> + static_branch_enable(&mds_user_clear);
> +
> + if (taa_nosmt || cpu_mitigations_auto_nosmt())
> + cpu_smt_disable(false);
> +
> + pr_info("%s\n", taa_strings[taa_mitigation]);
> +}
To show better what I mean, here's the whole function fixed up:
static void __init taa_select_mitigation(void)
{
	u64 ia32_cap;

	if (!boot_cpu_has(X86_FEATURE_RTM)) {
		taa_mitigation = TAA_MITIGATION_TSX_DISABLE;
		goto out;
	}

	if (!boot_cpu_has_bug(X86_BUG_TAA) || cpu_mitigations_off()) {
		taa_mitigation = TAA_MITIGATION_OFF;
		goto out;
	}

	ia32_cap = x86_read_arch_cap_msr();

	/* TAA mitigation is turned off from cmdline (tsx_async_abort=off) */
	if (taa_mitigation == TAA_MITIGATION_OFF)
		goto out;

	if (boot_cpu_has(X86_FEATURE_MD_CLEAR))
		taa_mitigation = TAA_MITIGATION_VERW;
	else
		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

	/*
	 * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1.
	 * A microcode update fixes this behavior to clear CPU buffers.
	 * Microcode update also adds support for MSR_IA32_TSX_CTRL which
	 * is enumerated by ARCH_CAP_TSX_CTRL_MSR bit.
	 *
	 * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode
	 * update is required.
	 */
	if ((ia32_cap & ARCH_CAP_MDS_NO) &&
	    !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR))
		taa_mitigation = TAA_MITIGATION_UCODE_NEEDED;

	/*
	 * TSX is enabled, select alternate mitigation for TAA which is
	 * the same as MDS. Enable MDS static branch to clear CPU buffers.
	 *
	 * For guests that can't determine whether the correct microcode is
	 * present on host, enable the mitigation for UCODE_NEEDED as well.
	 */
	static_branch_enable(&mds_user_clear);

	if (taa_nosmt || cpu_mitigations_auto_nosmt())
		cpu_smt_disable(false);

out:
	pr_info("%s\n", taa_strings[taa_mitigation]);
}
> +
> +static int __init tsx_async_abort_cmdline(char *str)
That function name needs a verb: tsx_async_abort_parse_cmdline()
> +{
> + if (!boot_cpu_has_bug(X86_BUG_TAA))
> + return 0;
> +
> + if (!str)
> + return -EINVAL;
> +
> + if (!strcmp(str, "off")) {
> + taa_mitigation = TAA_MITIGATION_OFF;
> + } else if (!strcmp(str, "full")) {
> + taa_mitigation = TAA_MITIGATION_VERW;
> + } else if (!strcmp(str, "full,nosmt")) {
> + taa_mitigation = TAA_MITIGATION_VERW;
> + taa_nosmt = true;
> + }
> +
> + return 0;
> +}
> +early_param("tsx_async_abort", tsx_async_abort_cmdline);
Say what now?!
The previous patch added this:
tsx= [X86] Control Transactional Synchronization
Extensions (TSX) feature in Intel processors that
support TSX control.
This parameter controls the TSX feature. The options are:
on - Enable TSX on the system.
off - Disable TSX on the system.
So what is the final name of command line option now?
> +
> #undef pr_fmt
> #define pr_fmt(fmt) "Spectre V1 : " fmt
>
> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
> #undef pr_fmt
> #define pr_fmt(fmt) fmt
>
> -/* Update the static key controlling the MDS CPU buffer clear in idle */
> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
> static void update_mds_branch_idle(void)
> {
> /*
> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
> * The other variants cannot be mitigated when SMT is enabled, so
> * clearing the buffers on idle just to prevent the Store Buffer
> * repartitioning leak would be a window dressing exercise.
> + *
> + * Apply idle buffer clearing to TAA affected CPUs also.
> */
> - if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
> + if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
> + !boot_cpu_has_bug(X86_BUG_TAA))
> return;
>
> if (sched_smt_active())
> @@ -786,6 +895,7 @@ static void update_mds_branch_idle(void)
> }
>
> #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n"
> +#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
>
> void cpu_bugs_smt_update(void)
> {
> @@ -819,6 +929,19 @@ void cpu_bugs_smt_update(void)
> break;
> }
>
> + switch (taa_mitigation) {
> + case TAA_MITIGATION_VERW:
> + case TAA_MITIGATION_UCODE_NEEDED:
> + if (sched_smt_active())
> + pr_warn_once(TAA_MSG_SMT);
> + /* TSX is enabled, apply MDS idle buffer clearing. */
> + update_mds_branch_idle();
> + break;
> + case TAA_MITIGATION_TSX_DISABLE:
> + case TAA_MITIGATION_OFF:
> + break;
> + }
> +
> mutex_unlock(&spec_ctrl_mutex);
> }
>
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 885d4ac2111a..86f22c1e5912 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
> if (!cpu_matches(NO_SWAPGS))
> setup_force_cpu_bug(X86_BUG_SWAPGS);
>
> + /*
> + * When processor is not mitigated for TAA (TAA_NO=0) set TAA bug when:
> + * - TSX is supported or
> + * - TSX_CTRL is supported
s/supported/present/
> + *
> + * TSX_CTRL check is needed for cases when TSX could be disabled before
> + * the kernel boot e.g. kexec
> + * TSX_CTRL check alone is not sufficient for cases when the microcode
> + * update is not present or running as guest that don't get TSX_CTRL.
> + */
> + if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
> + (boot_cpu_has(X86_FEATURE_RTM) ||
You have a @c parameter passed in, use it:
cpu_has(c, X86_FEATURE_RTM)
> + (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
> + setup_force_cpu_bug(X86_BUG_TAA);
> +
> if (cpu_matches(NO_MELTDOWN))
> return;
>
> --
> 2.20.1
--
Regards/Gruss,
Boris.
SUSE Software Solutions Germany GmbH, GF: Felix Imendörffer, HRB 36809, AG Nürnberg