linux-kernel.vger.kernel.org archive mirror
From: Alexandre Chartre <alexandre.chartre@oracle.com>
To: Junaid Shahid <junaids@google.com>, linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, jmattson@google.com,
	pjt@google.com, oweisse@google.com, rppt@linux.ibm.com,
	dave.hansen@linux.intel.com, peterz@infradead.org,
	tglx@linutronix.de, luto@kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 00/47] Address Space Isolation for KVM
Date: Tue, 22 Mar 2022 10:46:35 +0100	[thread overview]
Message-ID: <c3131f1c-a354-ca3b-ed61-5b06ef1ab7a1@oracle.com> (raw)
In-Reply-To: <a37bb4b7-3772-4579-a4e6-d27fb29411a6@google.com>


On 3/18/22 00:25, Junaid Shahid wrote:
>> ASI Core
>> ========
>>
>> KPTI
>> ----
>> Implementing KPTI with ASI is possible, but it is not straightforward.
>> It requires some special handling, in particular in the assembly kernel
>> entry/exit code for syscall, interrupt and exception (see ASI RFC v4 [4]
>> as an example), because we are also switching privilege level in addition
>> to switching the pagetable. So this might be something to consider early
>> in your implementation to ensure it is effectively compatible with KPTI.
> 
> Yes, I will look in more detail into how to implement KPTI on top of
> this ASI implementation, but at least at a high level, it seems that
> it should work. Of course, the devil is always in the details :)
> 
>>
>> Going beyond KPTI (with a KPTI-next) and trying to execute most
>> syscalls/interrupts without switching to the full kernel address space
>> is more challenging, because it would require many more kernel mappings
>> in the user pagetable, and this would basically defeat the purpose of
>> KPTI. You can refer to the discussions about the RFC to defer the CR3
>> switch to C code [7], which was an attempt to just reach the kernel
>> entry C code with a KPTI pagetable.
> 
> In principle, the ASI restricted address space would not contain any
> sensitive data, so having more mappings should be ok as long as they
> really are non-sensitive. Of course, it is possible that we may
> mistakenly think that some data is not sensitive and mark it as such,
> but in reality it really was sensitive in some way. In that sense, a
> strict KPTI is certainly a little more secure than the KPTI-Next that
> I mentioned, but KPTI-Next would also have lower performance overhead
> compared to the strict KPTI.
> 

Mappings are precisely the issue for KPTI-next. The RFC I submitted shows
that going beyond KPTI might require mapping data which could be deemed
sensitive. Also, there are extra complications that make it difficult to reach
C code with a KPTI page-table. This was discussed in v2 of the "Defer CR3
switch to C code" RFC:
https://lore.kernel.org/all/20201116144757.1920077-1-alexandre.chartre@oracle.com/


>>
>> Interrupts/Exceptions
>> ---------------------
>> As long as interrupts/exceptions are not expected to be processed with
>> ASI, it is probably better to explicitly exit ASI before processing an
>> interrupt/exception, otherwise you will have an extra overhead on each
>> interrupt/exception to take a page fault and then exit ASI.
> 
> I agree that for those interrupts/exceptions that will need to access
> sensitive data, it is better to do an explicit ASI Exit at the start.
> But it is probably possible for many interrupts to be handled without
> needing to access sensitive data, in which case, it would be better
> to avoid the ASI Exit.
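The trade-off discussed here (explicit ASI exit up front vs. a page-fault-driven exit later) could be modeled roughly as below. This is only an illustrative sketch with hypothetical names (`irq_desc_sketch`, `asi_exit_sketch`), assuming a per-handler annotation that says whether the handler touches sensitive data; it is not the actual patch API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-handler annotation: does this handler need data that
 * is unmapped in the restricted ASI address space? */
struct irq_desc_sketch {
	bool touches_sensitive_data;
};

static bool asi_active = true;

static void asi_exit_sketch(void)
{
	asi_active = false;	/* stand-in for switching to the full pagetable */
}

static void handle_irq_sketch(const struct irq_desc_sketch *desc)
{
	if (desc->touches_sensitive_data)
		asi_exit_sketch();	/* explicit exit: cheaper than faulting mid-handler */
	/* ... run the handler; if it touches unmapped (sensitive) data while
	 * ASI is still active, the page fault path performs the implicit
	 * ASI exit at extra cost ... */
}
```

Handlers that never touch sensitive data would keep `asi_active` set and avoid the exit entirely, which is the case Junaid wants to optimize for.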
> 
>>
>> This is particularly true if you want to have KPTI use ASI; in
>> that case the ASI exit will need to be done early in the interrupt
>> and exception assembly entry code.
>>
>> ASI Hooks
>> ---------
>> ASI hooks are certainly a good idea to perform specific actions on ASI
>> enter or exit. However, I am not sure they are appropriate places for CPU
>> stunning with KVM-ASI. That's because CPU stunning doesn't need to be
>> done precisely when entering and exiting ASI, and it probably shouldn't be
>> done there: it should be done right before VMEnter and right after VMExit
>> (see below).
>>
> 
> I believe that performing sibling CPU stunning right after VM Exit
> will negate most of the performance advantage of ASI. I think that it
> is feasible to do the stunning on ASI Exit. Please see below for how
> we handle the problem that you have mentioned.
> 

Right, I was confused by what exactly you meant by CPU stun/unstun, but
I think it's now clearer with your explanation below.


> 
>> Sibling CPUs Synchronization
>> ============================
>> KVM-ASI requires the synchronization of sibling CPUs from the same CPU
>> core so that when a VM is running then sibling CPUs are running with the
>> ASI associated with this VM (or an ASI compatible with the VM, depending
>> on how ASI is defined). That way the VM can only spy on data from ASI
>> and won't be able to access any sensitive data.
>>
>> So, right before entering a VM, KVM should ensure that sibling CPUs are
>> using ASI. If a sibling CPU is not using ASI then KVM can either wait for
>> that sibling to run ASI, or force it to use ASI (or to become idle).
>> This behavior should be enforced as long as any sibling is running the
>> VM. When no sibling is running the VM, then the siblings can run
>> any code (using ASI or not).
>>
>> It would be interesting to see the code you use to achieve this, because
>> I don't get how this is achieved from the description of your sibling
>> hyperthread stun and unstun mechanism.
>>
>> Note that this synchronization is critical for ASI to work, in particular
>> when entering the VM, we need to be absolutely sure that sibling CPUs are
>> effectively using ASI. The core scheduling sibling stunning code you
>> referenced [6] uses a mechanism which is fine for userspace synchronization
>> (the delivery of the IPI forces the sibling to immediately enter the kernel)
>> but this won't work for ASI as the delivery of the IPI won't guarantee that
>> the sibling has entered ASI yet. I did some experiments that show that data
>> will leak if siblings are not perfectly synchronized.
> 
> I agree that it is not secure to run one sibling in the unrestricted
> kernel address space while the other sibling is running in an ASI
> restricted address space, without doing a cache flush before
> re-entering the VM. However, I think that avoiding this situation
> does not require doing a sibling stun operation immediately after VM
> Exit. The way we avoid it is as follows.
> 
> First, we always use ASI in conjunction with core scheduling. This
> means that if HT0 is running a VCPU thread, then HT1 will be running
> either a VCPU thread of the same VM or the Idle thread. If it is
> running a VCPU thread, then if/when that thread takes a VM Exit, it
> will also be running in the same ASI restricted address space. For
> the idle thread, we have created another ASI Class, called Idle-ASI,
> which maps only globally non-sensitive kernel memory. The idle loop
> enters this ASI address space.
> 
> This means that when HT0 does a VM Exit, HT1 will either be running
> the guest code of a VCPU of the same VM, or it will be running kernel
> code in either a KVM-ASI or the Idle-ASI address space. (If HT1 is
> already running in the full kernel address space, that would imply
> that it had previously done an ASI Exit, which would have triggered a
> stun_sibling, which would have already caused HT0 to exit the VM and
> wait in the kernel).

Note that using core scheduling (or not) is a detail; what is important
is whether HTs are running with ASI or not. Running core scheduling will
just improve the chances that all siblings run ASI at the same time,
and so improve ASI performance.


> If HT1 now does an ASI Exit, that will trigger the stun_sibling()
> operation in its pre_asi_exit() handler, which will set the state of
> the core/HT0 to Stunned (and possibly send an IPI too, though that
> will be ignored if HT0 was already in kernel mode). Now when HT0
> tries to re-enter the VM, since its state is set to Stunned, it will
> just wait in a loop until HT1 does an unstun_sibling() operation,
> which it will do in its post_asi_enter handler the next time it does
> an ASI Enter (which would be either just before VM Enter if it was
> KVM-ASI, or in the next iteration of the idle loop if it was
> Idle-ASI). In either case, HT1's post_asi_enter() handler would also
> do a flush_sensitive_cpu_state operation before the unstun_sibling(),
> so when HT0 gets out of its wait-loop and does a VM Enter, there will
> not be any sensitive state left.
> 
> One thing that probably was not clear from the patch, is that the
> stun state check and wait-loop is still always executed before VM
> Enter, even if no ASI Exit happened in that execution.
> 

So if I understand correctly, you have the following sequence:

0 - Initially state is set to "stunned" for all cpus (i.e. a cpu should
     wait before VMEnter)

1 - After ASI Enter: Set sibling state to "unstunned" (i.e. sibling can
     do VMEnter)

2 - Before VMEnter : wait while my state is "stunned"

3 - Before ASI Exit : Set sibling state to "stunned" (i.e. sibling should
     wait before VMEnter)
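The four steps above can be modeled as a small per-core state machine. This is a userspace sketch for illustration only; the names (`stun_sibling()`, `unstun_sibling()`, `vmenter_allowed()`) follow the discussion but are not the actual patch code:

```c
#include <assert.h>
#include <stdatomic.h>

enum ht_state { STUNNED, UNSTUNNED };

/* Step 0: initially every hyperthread is "stunned" (must wait before
 * VMEnter). One entry per HT of the core. */
static _Atomic int ht_state_of[2] = { STUNNED, STUNNED };

/* Step 3: before ASI Exit, stun the sibling so it waits before VMEnter.
 * The real code would also send an IPI to kick a sibling already running
 * in guest mode. */
static void stun_sibling(int me)
{
	atomic_store(&ht_state_of[1 - me], STUNNED);
}

/* Step 1: after ASI Enter, and after flush_sensitive_cpu_state(),
 * release the sibling. */
static void unstun_sibling(int me)
{
	atomic_store(&ht_state_of[1 - me], UNSTUNNED);
}

/* Step 2: before VMEnter, check (and, in the real code, wait until) our
 * own state is "unstunned". How to wait here without an unbounded active
 * loop is precisely the open problem discussed in this mail. */
static int vmenter_allowed(int me)
{
	return atomic_load(&ht_state_of[me]) == UNSTUNNED;
}
```

The sketch makes the problem visible: `vmenter_allowed()` is a pure check, and turning it into a wait of unknown duration (with or without interrupts enabled) is what step 2 actually requires.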

I have tried this kind of implementation, and the problem is with step 2
(wait while my state is "stunned"): how do you wait, exactly? You can't
just do an active wait, otherwise you have all kinds of problems (depending
on whether you have interrupts enabled or not), especially as you don't know
how long you have to wait (this depends on what the other CPU is doing).

That's why I have been dissociating ASI and CPU stunning (and eventually
moved to only doing kernel core scheduling). Basically, I replaced step 2
with a call to the scheduler to select threads using ASI on all siblings
(or run something else if there are higher-priority threads to run), which
means enabling kernel core scheduling at this point.

>>
>> A Possible Alternative to ASI?
>> =============================
>> ASI prevents access to sensitive data by unmapping them. On the other
>> hand, the KVM code somewhat already identifies access to sensitive data
>> as part of the L1TF/MDS mitigation, and when KVM is about to access
>> sensitive data then it sets l1tf_flush_l1d to true (so that L1D gets
>> flushed before VMEnter).
>>
>> With KVM knowing when it accesses sensitive data, I think we can provide
>> the same mitigation as ASI by simply allowing KVM code which doesn't
>> access sensitive data to be run concurrently with a VM. This can be done
>> by tagging the kernel thread when it enters KVM code which doesn't
>> access sensitive data, and untagging the thread right before it accesses
>> sensitive data. And when KVM is about to do a VMEnter then we synchronize
>> sibling CPUs so that they run threads with the same tag. Sounds familiar?
>> Yes, because that's similar to core scheduling but inside the kernel
>> (let's call it "kernel core scheduling").
>>
>> I think the benefit of this approach would be that it should be much
>> simpler to implement and less invasive than ASI, and it doesn't preclude
>> to later do ASI: ASI can be done in addition and provide an extra level
>> of mitigation in case some sensitive data is still accessed by KVM. Also it
>> would provide the critical sibling CPU synchronization mechanism that
>> we also need with ASI.
>>
>> I did some prototyping to implement this kernel core scheduling a while
>> ago (and then got diverted onto other stuff) but so far performance has
>> been abysmal, especially when doing strict synchronization between
>> sibling CPUs. I am planning to go back and do more investigations when I
>> have cycles, but probably not that soon.
>>
> 
> This also seems like an interesting approach. It does have some
> different trade-offs compared to ASI. First, there is the trade-off
> between a blacklist vs. whitelist approach. Secondly, ASI has a more
> structured approach based on the sensitivity of the data itself,
> instead of having to audit every new code path to verify whether or
> not it can potentially access any sensitive data. On the other hand,
> as you point out, this approach is much simpler than ASI, which is
> certainly a plus.

I think the main benefit is that it provides a mechanism for running
specific kernel threads together on sibling CPUs independently of ASI.
So it will be easier to implement (you don't need ASI) and to test.

Then, once this mechanism has proven to work (and to be efficient),
you can have KVM ASI use it.
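The tagging scheme proposed above ("kernel core scheduling") might look roughly like the sketch below. All names here are hypothetical illustrations of the idea, not existing kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Tag a thread while it runs KVM code known not to access sensitive
 * data; untag it right before a sensitive access; only allow siblings
 * to run concurrently with a VM when their tags match. */
enum cotag { TAG_NONE, TAG_KVM_NONSENSITIVE };

struct task_sketch {
	enum cotag tag;
};

static void enter_nonsensitive_kvm_code(struct task_sketch *t)
{
	t->tag = TAG_KVM_NONSENSITIVE;
}

/* The untag point is where the existing L1TF/MDS mitigation already sets
 * l1tf_flush_l1d, since KVM knows it is about to touch sensitive data. */
static void before_sensitive_access(struct task_sketch *t)
{
	t->tag = TAG_NONE;
}

/* Synchronization check before VMEnter: the sibling must be running a
 * thread carrying the same non-sensitive tag. */
static bool siblings_compatible(const struct task_sketch *a,
				const struct task_sketch *b)
{
	return a->tag == TAG_KVM_NONSENSITIVE &&
	       b->tag == TAG_KVM_NONSENSITIVE;
}
```

As in core scheduling for userspace, the hard part is not the tag itself but making the scheduler enforce `siblings_compatible()` across hyperthreads without the performance collapse mentioned above.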

alex.

