From: Andrew Cooper <andrew.cooper3@citrix.com>
To: "Shamsundara Havanur, Harsha" <havanur@amazon.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"julien@xen.org" <julien@xen.org>
Cc: "Wieczorkiewicz, Pawel" <wipawel@amazon.de>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"paul@xen.org" <paul@xen.org>
Subject: Re: [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain
Date: Mon, 14 Dec 2020 19:36:42 +0000
Message-ID: <d41f91ae-4df5-abe1-e58e-92a2424c077a@citrix.com>
In-Reply-To: <eef19ecad32ac9379b6535ec2a4b444e78b29058.camel@amazon.com>

On 14/12/2020 19:05, Shamsundara Havanur, Harsha wrote:
> On Mon, 2020-12-14 at 16:01 +0000, Andrew Cooper wrote:
>> On 14/12/2020 10:56, Julien Grall wrote:
>>> Hi Harsha,
>>>
>>> On 14/12/2020 09:26, Shamsundara Havanur, Harsha wrote:
>>>> On Mon, 2020-12-14 at 09:52 +0100, Jan Beulich wrote:
>>>>> On 11.12.2020 12:44, Harsha Shamsundara Havanur wrote:
>>>>>> An HVM domain flushes the cache on all CPUs using the
>>>>>> `flush_all` macro, which uses cpu_online_map, during
>>>>>> i) creation of a new domain,
>>>>>> ii) a device-model op, and
>>>>>> iii) destruction of the domain.
>>>>>>
>>>>>> This triggers an IPI on all CPUs, thus affecting other
>>>>>> domains that are pinned to different pCPUs. This patch
>>>>>> restricts the cache flush to the set of CPUs affinitized to
>>>>>> the current domain, using `domain->dirty_cpumask`.
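
For reference, a minimal sketch of the change being discussed,
assuming Xen's flush_mask()/flush_all() interface (the patch's exact
call sites may differ):

    /* Before: flush_all() expands to flush_mask(&cpu_online_map, ...),
     * i.e. an IPI to every online pCPU. */
    flush_all(FLUSH_CACHE);

    /* After (as proposed): limit the flush to the pCPUs the domain
     * has run on. */
    flush_mask(d->dirty_cpumask, FLUSH_CACHE);
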
>>>>> But then you need to effect cache flushing when a CPU gets
>>>>> taken out of domain->dirty_cpumask. I don't think you/we want
>>>>> to do that.
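
A hypothetical sketch of Jan's point (invented helper name, not
actual Xen code): if flushes are limited to d->dirty_cpumask, then a
pCPU leaving the mask would first have to flush itself, which is
exactly the expensive operation nobody wants on every deschedule.

    /* Hypothetical: called wherever 'cpu' drops out of the mask. */
    static void domain_leave_pcpu(struct domain *d, unsigned int cpu)
    {
        wbinvd();   /* flush lines dirtied here, or later
                     * mask-limited flushes will miss them */
        cpumask_clear_cpu(cpu, d->dirty_cpumask);
    }
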
>>>>>
>>>> If we do not restrict it, this could lead to a DoS attack, where
>>>> a malicious guest keeps writing to MTRR registers, or keeps
>>>> flushing the cache through a DM op, and thereby keeps sending
>>>> IPIs to other neighboring guests.
>>> I saw Jan already answered about the alleged DoS, so I will just
>>> focus on the resolution.
>>>
>>> I agree that in the ideal situation we want to limit the impact on
>>> the other vCPUs. However, we also need to make sure the cure is
>>> not worse than the symptoms.
>> And specifically, only a change which is correct.  This patch very
>> definitely isn't.
>>
>> Lines can get cached on other CPUs from, e.g., qemu mappings and PV
>> backends.
>>
>>> The cache flush cannot be restricted in all pinning situations,
>>> because pinning doesn't imply that the pCPU will be dedicated to a
>>> given vCPU, or even that the vCPU will stick to one pCPU (we may
>>> allow floating on a NUMA socket). Although your setup may offer
>>> this guarantee.
>>>
>>> My knowledge in this area is quite limited, but below are a few
>>> questions that will hopefully help to make a decision.
>>>
>>> The first question to answer is: can the flush be restricted in a
>>> setup where each vCPU is running on a dedicated pCPU (i.e. a
>>> partitioned system)?
>> Not really.  Lines can become cached even from speculation in the
>> directmap.
>>
>> If you need to flush the caches (and don't have a virtual mapping to
>> issue clflush/clflushopt/clwb over), it must be on all CPUs.
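
For illustration, a sketch of the virtual-mapping alternative,
assuming x86 with a 64-byte line size and GCC-style inline asm
(clflush is ordered by mfence; clflushopt/clwb would want sfence):

    #include <stdint.h>
    #include <stddef.h>

    /* Flush one buffer's lines from a single CPU, without sending
     * IPIs to the others. */
    static void flush_buffer(const void *buf, size_t len)
    {
        const uintptr_t line = 64;   /* assumed x86 line size */
        uintptr_t p = (uintptr_t)buf & ~(line - 1);
        uintptr_t end = (uintptr_t)buf + len;

        asm volatile ("mfence" ::: "memory");
        for ( ; p < end; p += line )
            asm volatile ("clflush %0"
                          :: "m" (*(const volatile char *)p));
        asm volatile ("mfence" ::: "memory");
    }
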
> If lines are cached due to aggressive speculation from a different
> guest, wouldn't they be invalidated at the speculation boundary,
> since it's a wrong speculation? Would they still need to be flushed
> explicitly?

No.  Caches are microarchitectural state (just like TLBs, linefill
buffers, etc.).

The entire mess surrounding speculative security issues is that the
perturbances from bad speculation survive, and can be recovered at a
later point.

~Andrew


Thread overview: 9+ messages
     [not found] <cover.1607686878.git.havanur@amazon.com>
2020-12-11 11:44 ` [XEN PATCH v1 1/1] Invalidate cache for cpus affinitized to the domain Harsha Shamsundara Havanur
2020-12-12 13:35   ` Shamsundara Havanur, Harsha
2020-12-14  8:52   ` Jan Beulich
2020-12-14  9:26     ` Shamsundara Havanur, Harsha
2020-12-14  9:55       ` Jan Beulich
2020-12-14 10:56       ` Julien Grall
2020-12-14 16:01         ` Andrew Cooper
2020-12-14 19:05           ` Shamsundara Havanur, Harsha
2020-12-14 19:36             ` Andrew Cooper [this message]
