Subject: [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
From: Andrew Cooper
Date: Tue, 22 Oct 2019 22:35:12 +0100
To: speck@linutronix.de

On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
>> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
>> index 6e0a3b43d027..999b85039128 100644
>> --- a/arch/x86/include/asm/processor.h
>> +++ b/arch/x86/include/asm/processor.h
>> @@ -988,4 +988,11 @@ enum mds_mitigations {
>>  	MDS_MITIGATION_VMWERV,
>>  };
>>  
>> +enum taa_mitigations {
>> +	TAA_MITIGATION_OFF,
>> +	TAA_MITIGATION_UCODE_NEEDED,
>> +	TAA_MITIGATION_VERW,
>> +	TAA_MITIGATION_TSX_DISABLE,
> I would rename "TSX_DISABLE" to "TSX_DISABLED", because
>
> a) it matches the verb tense of "UCODE_NEEDED", and
> b) the past tense hopefully helps make it slightly clearer that TSX was
>    disabled separately, not as part of the mitigation code itself.
>
>> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>>  #undef pr_fmt
>>  #define pr_fmt(fmt) fmt
>>  
>> -/* Update the static key controlling the MDS CPU buffer clear in idle */
>> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>>  static void update_mds_branch_idle(void)
>>  {
>>  	/*
>> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>>  	 * The other variants cannot be mitigated when SMT is enabled, so
>>  	 * clearing the buffers on idle just to prevent the Store Buffer
>>  	 * repartitioning leak would be a window dressing exercise.
>> +	 *
>> +	 * Apply idle buffer clearing to TAA affected CPUs also.
>>  	 */
>> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
>> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
>> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>>  		return;
> Sorry, I was out for most of the last two weeks, so I think I left you
> hanging on this.  For context, here's your answer to my previous
> question about whether the X86_BUG_TAA check makes sense here:
>
>> It does provide protection against the "store buffer" leak.  But the
>> other buffers (fill buffer and load port) are still SMT vulnerable.  Do
>> you prefer it removed?
> Yes, please remove it, for the same reason we used for MDS.  There's not
> much point in partially mitigating TAA here.
>
>> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
>> index 885d4ac2111a..86f22c1e5912 100644
>> --- a/arch/x86/kernel/cpu/common.c
>> +++ b/arch/x86/kernel/cpu/common.c
>> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>>  	if (!cpu_matches(NO_SWAPGS))
>>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>>  
>> +	/*
>> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the
>> +	 * TAA bug when:
>> +	 *  - TSX is supported, or
>> +	 *  - TSX_CTRL is supported.
>> +	 *
>> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled
>> +	 * before kernel boot, e.g. kexec.
>> +	 * The TSX_CTRL check alone is not sufficient for cases where the
>> +	 * microcode update is not present, or when running as a guest that
>> +	 * doesn't get TSX_CTRL.
>> +	 */
>> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
>> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
>> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>> +		setup_force_cpu_bug(X86_BUG_TAA);
>> +
> I'm finding this logic to be less than 100% convincing, or at least hard
> to follow.  And it's different from most of the other bug checks in this
> function.
>
> Would the following work instead?
>
> 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
> 		setup_force_cpu_bug(X86_BUG_TAA);
>
> In other words, I would presume all the NO_MDS CPUs listed in
> 'cpu_vuln_whitelist' are also immune to TAA.

From a leakage point of view, TAA is just another way to access stale
data in the load ports, store buffers or fill buffers.

In practice, the leakage mechanism exists on all MDS-vulnerable,
TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
first place will be similarly unaffected.

On pre-MDS_NO=1 parts, the existing VERW and no-SMT mitigations cover
TAA as well.

On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
care of the known leakage cases by forcing a register file write-back of
0 rather than the stale data.  However, the TSX Async Abort case was
missed, which is why this generation of parts is still vulnerable.

Therefore, the TAA vulnerability only really matters on TSX=1, MDS_NO=1,
TAA_NO=0 parts, because these parts are the ones that are leaky, but
have a lower-overhead mitigation option than VERW and no-SMT.

~Andrew