From: Benjamin Herrenschmidt <firstname.lastname@example.org>
To: Ingo Molnar <email@example.com>, Balbir Singh <firstname.lastname@example.org>
Cc: email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com
Subject: Re: [GIT PULL] x86/mm changes for v5.8
Date: Tue, 02 Jun 2020 19:37:29 +1000
Message-ID: <firstname.lastname@example.org>
In-Reply-To: <20200602073350.GA481221@gmail.com>

On Tue, 2020-06-02 at 09:33 +0200, Ingo Molnar wrote:
> Or rather, we should ask a higher level question as well, maybe we
> should not do this feature at all?

Well, it does solve a real issue in some circumstances, and there was a
reasonable discussion about it on the list that led to it being merged,
with Kees and Thomas (and others) agreeing :)

But yes, it is pointless with SMT, and yes, maybe we should make it
explicitly do nothing on SMT, but let's not throw the baby out with the
bath water, shall we?

> Typically cloud computing systems such as AWS will have SMT enabled,
> because cloud computing pricing is essentially per vCPU, and they want
> to sell the hyperthreads as vCPUs.

Not necessarily, and not in every circumstance. Yes, VMs will typically
have SMT enabled, but this isn't targeted at them.

One example given during the discussion was containers belonging to
different users. Maybe we could discuss flushing on changes of cgroups or
other similar boundaries rather than on individual process switches, but
so far that hasn't come up during the discussion on the mailing list.

Another example would be a process that handles data more critical than
the rest of the system, such as payment information, and wants to protect
itself (or the admin wants that process protected) against possible data
leaks to less trusted processes.

> So the safest solution, disabling
> SMT on affected systems, is not actually done, because it's an
> economic non-starter.
> (I'd like to note the security double standard there: the most secure
> option, to disable SMT, is not actually used ...)

This has nothing to do with SMT; though yes, maybe we should make the
patch do nothing on SMT, that isn't what this feature is about.

> BTW., I wonder how Amazon is solving the single-vCPU customer workload
> problem on AWS: if the vast majority of AWS computing capacity is
> running on a single vCPU, because it's the cheapest tier and because
> it's more than enough capacity to run a website. Even core-scheduling
> doesn't solve this fundamental SMT security problem: separate customer
> workloads *cannot* share the same core - but this means that the
> single-vCPU workloads will only be able to utilize 50% of all
> available vCPUs if they are properly isolated.
>
> Or if the majority of AWS EC2 etc. customer systems are using 2, 4 or
> more vCPUs, then both this feature and 'core-scheduling' is
> effectively pointless from a security POV, because the cloud computing
> systems are de-facto partitioned into cores already, with each core
> accounted as 2 vCPUs.

AWS has more than just VMs for rent :-) There are a whole pile of higher
level "services" that our users can use, and not all of them necessarily
run on VMs charged per vCPU.

> The hour-up-rounded way AWS (and many other cloud providers) account
> system runtime costs suggests that they are doing relatively static
> partitioning of customer workloads already, i.e. customer workloads
> are mapped to actual physical hardware in an exclusive fashion, with
> no overcommitting of physical resources and no sharing of cores
> between customers.
>
> If I look at the pricing and capabilities table of AWS:
>
>   https://aws.amazon.com/ec2/pricing/on-demand/
>
> Only the 't2' and 't3' On-Demand instances have 'Variable' pricing,
> which is only 9% of the offered 228 configurations.
>
> I.e.
> I strongly suspect that neither L1D flushing nor core-scheduling is
> actually required on affected vulnerable CPUs to keep customer
> workloads isolated from each other, on the majority of cloud computing
> systems, because they are already isolated via semi-static
> partitioning, using pricing that reflects static partitioning.

This isn't about that. These patches aren't trying to solve problems
happening inside of a customer VM running SMT, nor are they about
protecting VMs against other VMs on the same system.

Cheers,
Ben.