Subject: [MODERATED] Re: [PATCH v7 04/10] TAAv7 4
From: Andrew Cooper
Date: Tue, 22 Oct 2019 23:03:19 +0100
To: speck@linutronix.de
In-Reply-To: <20191022214415.5gy4q7l4fa6ln6l6@treble>
References: <20191022212045.gdoespyymxm6fy2f@treble> <00890cc3-818b-6fbc-4780-22364407e68c@citrix.com> <20191022214415.5gy4q7l4fa6ln6l6@treble>

On 22/10/2019 22:44, speck for Josh Poimboeuf wrote:
> On Tue, Oct 22, 2019 at 10:35:12PM +0100, speck for Andrew Cooper wrote:
>> On 22/10/2019 22:20, speck for Josh Poimboeuf wrote:
>>> On Mon, Oct 21, 2019 at 01:26:02PM -0700, speck for Pawan Gupta wrote:
>>>> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
>>>> index 6e0a3b43d027..999b85039128 100644
>>>> --- a/arch/x86/include/asm/processor.h
>>>> +++ b/arch/x86/include/asm/processor.h
>>>> @@ -988,4 +988,11 @@ enum mds_mitigations {
>>>>  	MDS_MITIGATION_VMWERV,
>>>>  };
>>>>
>>>> +enum taa_mitigations {
>>>> +	TAA_MITIGATION_OFF,
>>>> +	TAA_MITIGATION_UCODE_NEEDED,
>>>> +	TAA_MITIGATION_VERW,
>>>> +	TAA_MITIGATION_TSX_DISABLE,
>>> I would rename "TSX_DISABLE" to "TSX_DISABLED", because
>>>
>>> a) it matches the verb tense of "UCODE_NEEDED", and
>>> b) the past tense hopefully helps make it slightly clearer that TSX was
>>>    disabled separately, not as part of the mitigation code itself.
>>>
>>>> @@ -765,7 +871,7 @@ static void update_indir_branch_cond(void)
>>>>  #undef pr_fmt
>>>>  #define pr_fmt(fmt) fmt
>>>>
>>>> -/* Update the static key controlling the MDS CPU buffer clear in idle */
>>>> +/* Update the static key controlling the MDS and TAA CPU buffer clear in idle */
>>>>  static void update_mds_branch_idle(void)
>>>>  {
>>>>  	/*
>>>> @@ -775,8 +881,11 @@ static void update_mds_branch_idle(void)
>>>>  	 * The other variants cannot be mitigated when SMT is enabled, so
>>>>  	 * clearing the buffers on idle just to prevent the Store Buffer
>>>>  	 * repartitioning leak would be a window dressing exercise.
>>>> +	 *
>>>> +	 * Apply idle buffer clearing to TAA-affected CPUs also.
>>>>  	 */
>>>> -	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
>>>> +	if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY) &&
>>>> +	    !boot_cpu_has_bug(X86_BUG_TAA))
>>>>  		return;
>>> Sorry, I was out for most of the last two weeks, so I think I left you
>>> hanging on this.  For context, here's your answer to my previous
>>> question about whether the X86_BUG_TAA check makes sense here:
>>>
>>>> It does provide protection against the "store buffer" leak.  But the
>>>> other buffers (fill buffer and load port) are still SMT-vulnerable.  Do
>>>> you prefer it removed?
>>> Yes, please remove it, for the same reason we used for MDS.  There's not
>>> much point in partially mitigating TAA here.
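For reference, a minimal sketch of what the hunk reduces to with the
X86_BUG_TAA check dropped, as requested above.  This is reconstructed
from the quoted diff; the tail of the function is not shown in the hunk
and is elided here rather than guessed at:

	/* Update the static key controlling the MDS CPU buffer clear in idle */
	static void update_mds_branch_idle(void)
	{
		/*
		 * Enable the idle clearing if SMT is active on CPUs which are
		 * affected only by MSBDS and not any other MDS variant.
		 *
		 * The other variants cannot be mitigated when SMT is enabled, so
		 * clearing the buffers on idle just to prevent the Store Buffer
		 * repartitioning leak would be a window dressing exercise.
		 */
		if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
			return;

		/* ... existing static key update, unchanged ... */
	}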
>>>
>>>> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
>>>> index 885d4ac2111a..86f22c1e5912 100644
>>>> --- a/arch/x86/kernel/cpu/common.c
>>>> +++ b/arch/x86/kernel/cpu/common.c
>>>> @@ -1128,6 +1128,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
>>>>  	if (!cpu_matches(NO_SWAPGS))
>>>>  		setup_force_cpu_bug(X86_BUG_SWAPGS);
>>>>
>>>> +	/*
>>>> +	 * When the processor is not mitigated for TAA (TAA_NO=0), set the
>>>> +	 * TAA bug when:
>>>> +	 *   - TSX is supported, or
>>>> +	 *   - TSX_CTRL is supported.
>>>> +	 *
>>>> +	 * The TSX_CTRL check is needed for cases where TSX could be disabled
>>>> +	 * before kernel boot, e.g. kexec.
>>>> +	 * The TSX_CTRL check alone is not sufficient for cases where the
>>>> +	 * microcode update is not present, or when running as a guest that
>>>> +	 * doesn't get TSX_CTRL.
>>>> +	 */
>>>> +	if (!(ia32_cap & ARCH_CAP_TAA_NO) &&
>>>> +	    (boot_cpu_has(X86_FEATURE_RTM) ||
>>>> +	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
>>>> +		setup_force_cpu_bug(X86_BUG_TAA);
>>>> +
>>> I'm finding this logic to be less than 100% convincing, or at least hard
>>> to follow.  And it's different from most of the other bug checks in this
>>> function.
>>>
>>> Would the following work instead?
>>>
>>> 	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_TAA_NO))
>>> 		setup_force_cpu_bug(X86_BUG_TAA);
>>>
>>> In other words I would presume all the NO_MDS CPUs listed in
>>> 'cpu_vuln_whitelist' are also immune to TAA.
>> From a leakage point of view, TAA is just another way to access stale
>> data in the load ports, store buffers or fill buffers.
>>
>> In practice, the leakage mechanism exists on all MDS-vulnerable,
>> TSX-enabled parts.  Parts which are not vulnerable to MDS leakage in the
>> first place will be similarly unaffected.
>>
>> On pre-MDS_NO=1 parts, the existing VERW and no-SMT mitigations cover
>> TAA as well.
>>
>> On the current MDS_NO=1 parts, the silicon fix constituting MDS_NO took
>> care of the known leakage cases by forcing a register file write-back of
>> 0 rather than the stale data.  However, the TSX Async Abort case was
>> missed, which is why this generation of parts is still vulnerable.
>>
>> Therefore, the TAA vulnerability only really matters for TSX=1,
>> MDS_NO=1, TAA_NO=0 parts, because these parts are the ones that are
>> leaky, but have a lower-overhead mitigation option than VERW and no-SMT.
> I agree with all of that.  So are you saying you agree or disagree with
> my code snippet?

Ah - sorry for being unclear.  Disagree.

Your code snippet will cause some Atom and very old Core parts, which
have no TSX at all, to also report as TAA-vulnerable, because they
aren't on the MDS whitelist.

~Andrew
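P.S. To make the disagreement concrete, here is a stand-alone sketch
(plain C, not kernel code) contrasting the two detection rules.  The
struct fields model the inputs each rule consumes; the part names and
the helper names taa_bug_v7()/taa_bug_whitelist() are illustrative, not
anything from the patch:

	#include <stdbool.h>
	#include <stdio.h>

	struct part {
		const char *name;
		bool rtm;      /* X86_FEATURE_RTM: TSX currently enumerated */
		bool tsx_ctrl; /* ARCH_CAP_TSX_CTRL_MSR: microcode offers TSX_CTRL */
		bool no_mds;   /* listed as NO_MDS in cpu_vuln_whitelist */
		bool taa_no;   /* ARCH_CAP_TAA_NO set */
	};

	/* The rule from the quoted v7 hunk. */
	static bool taa_bug_v7(const struct part *p)
	{
		return !p->taa_no && (p->rtm || p->tsx_ctrl);
	}

	/* The proposed NO_MDS-whitelist rule. */
	static bool taa_bug_whitelist(const struct part *p)
	{
		return !p->no_mds && !p->taa_no;
	}

	int main(void)
	{
		const struct part parts[] = {
			/* MDS-vulnerable part without TSX: not on the whitelist,
			 * so the whitelist rule flags it despite TSX being absent. */
			{ "old Atom, no TSX",     false, false, false, false },
			/* Pre-MDS_NO part with TSX: both rules flag it. */
			{ "older Core, TSX",      true,  true,  false, false },
			/* TAA_NO=1 part: neither rule flags it. */
			{ "future part, TAA_NO",  true,  true,  false, true  },
		};

		for (size_t i = 0; i < sizeof(parts) / sizeof(parts[0]); i++)
			printf("%-22s v7=%d whitelist=%d\n", parts[i].name,
			       taa_bug_v7(&parts[i]),
			       taa_bug_whitelist(&parts[i]));
		return 0;
	}

The first row is the problem case above: an MDS-vulnerable part with no
TSX gets flagged by the whitelist rule even though TAA cannot apply to
it.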