* [MODERATED] [QUESTION] about vmx_l1d_flush_pages
@ 2018-07-18 14:58 Nicolai Stange
From: Nicolai Stange @ 2018-07-18 14:58 UTC (permalink / raw)
To: speck
Hi,
I've got two questions related to the initialization of
vmx_l1d_flush_pages from v15 commit a47dd5f06714 ("x86/KVM/VMX: Add L1D
flush algorithm").
[Apologies for not replying properly -- I don't have the original mail].
1.) The
[empty_zp] "r" (vmx_l1d_flush_pages)
asm constraint in vmx_l1d_flush() seems to suggest that these pages
are zeroed out. But AFAICS they're actually left uninitialized.
Am I wrong or is this intended?
2.) With nested KVM, the vmx_l1d_flush_pages could be subject to KSM on
    the host. This means that the 16 vmx_l1d_flush_pages could get
    merged onto fewer host physical pages, and that would break the L1d
    flush, wouldn't it?
If so, an obvious fix would be to initialize all 16 pages with
a different pattern each.
Thanks,
Nicolai
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
* Re: [QUESTION] about vmx_l1d_flush_pages
From: Thomas Gleixner @ 2018-07-18 15:45 UTC (permalink / raw)
To: speck
On Wed, 18 Jul 2018, speck for Nicolai Stange wrote:
> I've got two questions related to the initialization of
> vmx_l1d_flush_pages from v15 commit a47dd5f06714 ("x86/KVM/VMX: Add L1D
> flush algorithm").
>
> [Apologies for not replying properly -- I don't have the original mail].
Want the archive?
> 1.) The
> [empty_zp] "r" (vmx_l1d_flush_pages)
> asm constraint in vmx_l1d_flush() seems to suggest that these pages
> are zeroed out. But AFAICS they're actually left uninitialized.
> Am I wrong or is this intended?
Not intended AFAICT.
> 2.) With nested KVM, the vmx_l1d_flush_pages could be subject to KSM on
> the host. This means that the 16 vmx_l1d_flush_pages could get
> mapped to fewer host physical pages and that would break the L1d
> flush?
Probably.
> If so, an obvious fix would be to initialize all 16 pages with
> a different pattern each.
Care to whip up a patch?
Thanks,
tglx
* [MODERATED] [PATCH] fix L1TF kvm initialization
From: Nicolai Stange @ 2018-07-18 17:07 UTC (permalink / raw)
To: speck
From: Nicolai Stange <nstange@suse.de>
Subject: [PATCH] x86/KVM/VMX: initialize the vmx_l1d_flush_pages' content
The slow path in vmx_l1d_flush() reads from vmx_l1d_flush_pages in order
to evict the L1d cache.
However, these are never cleared and, in theory, their data could be leaked.
More importantly, KSM could merge a nested hypervisor's vmx_l1d_flush_pages
to fewer than 1 << L1D_CACHE_ORDER host physical pages and this would break
the L1d flushing algorithm: L1d on x86_64 is tagged by physical addresses.
Fix this by initializing the individual vmx_l1d_flush_pages with a
different pattern each.
Rename the "empty_zp" asm constraint identifier in vmx_l1d_flush() to
"flush_pages" to reflect this change.
Signed-off-by: Nicolai Stange <nstange@suse.de>
---
arch/x86/kvm/vmx.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c5c0118b126d..b4b8e8cb4a7e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -211,6 +211,7 @@ static void *vmx_l1d_flush_pages;
static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
{
struct page *page;
+ unsigned int i;
if (!enable_ept) {
l1tf_vmx_mitigation = VMENTER_L1D_FLUSH_EPT_DISABLED;
@@ -243,6 +244,16 @@ static int vmx_setup_l1d_flush(enum vmx_l1d_flush_state l1tf)
if (!page)
return -ENOMEM;
vmx_l1d_flush_pages = page_address(page);
+
+ /*
+ * Initialize each page with a different pattern in
+ * order to protect against KSM in the nested
+ * virtualization case.
+ */
+ for (i = 0; i < 1u << L1D_CACHE_ORDER; ++i) {
+ memset(vmx_l1d_flush_pages + i * PAGE_SIZE, i + 1,
+ PAGE_SIZE);
+ }
}
l1tf_vmx_mitigation = l1tf;
@@ -9701,7 +9712,7 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
/* First ensure the pages are in the TLB */
"xorl %%eax, %%eax\n"
".Lpopulate_tlb:\n\t"
- "movzbl (%[empty_zp], %%" _ASM_AX "), %%ecx\n\t"
+ "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
"addl $4096, %%eax\n\t"
"cmpl %%eax, %[size]\n\t"
"jne .Lpopulate_tlb\n\t"
@@ -9710,12 +9721,12 @@ static void vmx_l1d_flush(struct kvm_vcpu *vcpu)
/* Now fill the cache */
"xorl %%eax, %%eax\n"
".Lfill_cache:\n"
- "movzbl (%[empty_zp], %%" _ASM_AX "), %%ecx\n\t"
+ "movzbl (%[flush_pages], %%" _ASM_AX "), %%ecx\n\t"
"addl $64, %%eax\n\t"
"cmpl %%eax, %[size]\n\t"
"jne .Lfill_cache\n\t"
"lfence\n"
- :: [empty_zp] "r" (vmx_l1d_flush_pages),
+ :: [flush_pages] "r" (vmx_l1d_flush_pages),
[size] "r" (size)
: "eax", "ebx", "ecx", "edx");
}
--
2.13.7
* Re: [PATCH] fix L1TF kvm initialization
From: Thomas Gleixner @ 2018-07-19 10:38 UTC (permalink / raw)
To: speck
On Wed, 18 Jul 2018, speck for Nicolai Stange wrote:
> From: Nicolai Stange <nstange@suse.de>
> Subject: [PATCH] x86/KVM/VMX: initialize the vmx_l1d_flush_pages' content
>
> The slow path in vmx_l1d_flush() reads from vmx_l1d_flush_pages in order
> to evict the L1d cache.
>
> However, these are never cleared and, in theory, their data could be leaked.
>
> More importantly, KSM could merge a nested hypervisor's vmx_l1d_flush_pages
> to fewer than 1 << L1D_CACHE_ORDER host physical pages and this would break
> the L1d flushing algorithm: L1d on x86_64 is tagged by physical addresses.
>
> Fix this by initializing the individual vmx_l1d_flush_pages with a
> different pattern each.
>
> Rename the "empty_zp" asm constraint identifier in vmx_l1d_flush() to
> "flush_pages" to reflect this change.
>
> Signed-off-by: Nicolai Stange <nstange@suse.de>
Applied and all branches updated.
Thanks,
tglx
* [MODERATED] Re: [QUESTION] about vmx_l1d_flush_pages
From: Nicolai Stange @ 2018-07-20 5:25 UTC (permalink / raw)
To: speck
speck for Thomas Gleixner <speck@linutronix.de> writes:
> On Wed, 18 Jul 2018, speck for Nicolai Stange wrote:
>> I've got two questions related to the initialization of
>> vmx_l1d_flush_pages from v15 commit a47dd5f06714 ("x86/KVM/VMX: Add L1D
>> flush algorithm").
>>
>> [Apologies for not replying properly -- I don't have the original mail].
>
> Want the archive ?
That would be great, thanks for the offer!
Nicolai
--
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)