From: mark gross
Reply-To: mgross@linux.intel.com
Date: Wed, 15 Apr 2020 13:59:10 -0700
Subject: [MODERATED] Re: [PATCH 3/4] V7 more sampling fun 3
Message-ID: <20200415205910.GA100223@mtg-dev.jf.intel.com>
In-Reply-To: <20200414224631.7hyhmcn6v2od3zyp@treble>
References: <20200414200544.zqhguchba3m2jhr6@treble> <20200414215924.GE29751@mtg-dev.jf.intel.com> <20200414224631.7hyhmcn6v2od3zyp@treble>
To: speck@linutronix.de

On Tue, Apr 14, 2020 at 05:46:31PM -0500, speck for Josh Poimboeuf wrote:
> On Tue, Apr 14, 2020 at 02:59:24PM -0700, speck for mark gross wrote:
> > On Tue, Apr 14, 2020 at 03:05:44PM -0500, speck for Josh Poimboeuf wrote:
> > > On Thu, Jan 16, 2020 at 02:16:07PM -0800, speck for mark gross wrote:
> > > > +static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0, 0xA),   SRBDS),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0xB, 0xC), SRBDS_IF_TSX),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0, 0xB),   SRBDS),
> > > > +	VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0xC, 0xD), SRBDS_IF_TSX),
> > >
> > > Another readability tweak: "0x0" helps with vertical alignment:
> > >
> > >	VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0x0, 0xA), SRBDS),
> > >	VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0xB, 0xC), SRBDS_IF_TSX),
> > >	VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0x0, 0xB), SRBDS),
> > >	VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0xC, 0xD), SRBDS_IF_TSX),
> >
> > FWIW, the white paper no longer calls out individual steppings as vulnerable
> > only if TSX is enabled, so I'm dropping the SRBDS_IF_TSX stuff.
> >
> > Given that the additional steppings 0xB, 0xC and 0xD are no longer treated
> > differently, I'm just going with the following:
> >	VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0, 0xC), SRBDS)
> >	VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0, 0xD), SRBDS)
>
> Ok, though I still think "0x0" looks more consistent in this context ;-)

ok.

--mark

> --
> Josh
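
[Editor's note: for reference, a sketch of how the consolidated KABYLAKE entries might read once both changes discussed above are applied, i.e. dropping the per-stepping SRBDS_IF_TSX split and spelling the lower bound as "0x0" for alignment. This only combines the two snippets quoted in the thread and is not the final patch; the VULNBL_INTEL_STEPPING/X86_STEPPINGS macro definitions and the rest of the table come from the patch under discussion and are not shown here.]

	static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
		/* All listed steppings treated as SRBDS-affected; no TSX-only split. */
		VULNBL_INTEL_STEPPING(KABYLAKE_L, X86_STEPPINGS(0x0, 0xC), SRBDS),
		VULNBL_INTEL_STEPPING(KABYLAKE,   X86_STEPPINGS(0x0, 0xD), SRBDS),
		/* ... remaining entries from the patch elided ... */
		{}
	};

Writing "0x0" rather than "0" keeps the stepping columns vertically aligned, which was the readability point raised earlier in the thread.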