* [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference
@ 2020-05-12  0:28 Sean Christopherson
  2020-05-12  0:28 ` [PATCH 4.19 STABLE v2 1/2] KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs Sean Christopherson
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Sean Christopherson @ 2020-05-12  0:28 UTC (permalink / raw)
  To: stable, Greg Kroah-Hartman, Ben Hutchings, Sasha Levin
  Cc: Paolo Bonzini, linux-kernel, Tobias Urdin

A simple fix for a null pointer dereference in vmx_vcpu_run() with an
ugly-but-safe prereq patch.

The even uglier ASM_CALL_CONSTRAINT is gone in v2 as I finally figured
out why vmx_return was undefined: GCC dropped the entire asm blob because
all outputs were deemed unused.
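
As a minimal sketch of that failure mode (illustrative only, using RDTSC
rather than the real VM-Enter blob): an asm statement without "volatile"
whose only observable effects are unused outputs looks dead to GCC and can
be deleted wholesale; in vmx_vcpu_run() the deleted blob took the
vmx_return symbol definition with it.

    unsigned int lo, hi;

    /* May be dropped entirely: no volatile, outputs never read. */
    asm("rdtsc" : "=a"(lo), "=d"(hi));

    /* Always emitted: volatile marks the asm as having side effects
     * that matter even if the outputs are ignored. */
    asm volatile("rdtsc" : "=a"(lo), "=d"(hi));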

v2:
  - Document why there is no exact upstream commit for the fix, with
    --verbose. [Greg]
  - Tag the asm blob as volatile and drop the ASM_CALL_CONSTRAINT hack.

Sean Christopherson (2):
  KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm
    blobs
  KVM: VMX: Mark RCX, RDX and RSI as clobbered in vmx_vcpu_run()'s asm
    blob

 arch/x86/kvm/vmx.c | 91 +++++++++++++++++++++++++---------------------
 1 file changed, 50 insertions(+), 41 deletions(-)

-- 
2.26.0


* [PATCH 4.19 STABLE v2 1/2] KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs
  2020-05-12  0:28 [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference Sean Christopherson
@ 2020-05-12  0:28 ` Sean Christopherson
  2020-05-12  0:28 ` [PATCH 4.19 STABLE v2 2/2] KVM: VMX: Mark RCX, RDX and RSI as clobbered in vmx_vcpu_run()'s asm blob Sean Christopherson
  2020-05-12 11:43 ` [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference Greg Kroah-Hartman
  2 siblings, 0 replies; 4+ messages in thread
From: Sean Christopherson @ 2020-05-12  0:28 UTC (permalink / raw)
  To: stable, Greg Kroah-Hartman, Ben Hutchings, Sasha Levin
  Cc: Paolo Bonzini, linux-kernel, Tobias Urdin

Upstream commit 051a2d3e59e51ae49fd56aef34e472832897ce46.

Use '%% " _ASM_CX"' instead of '%0' to dereference RCX, i.e. the
'struct vcpu_vmx' pointer, in the VM-Enter asm blobs of vmx_vcpu_run()
and nested_vmx_check_vmentry_hw().  Referencing the pointer by its
operand number means that adding or removing an output parameter
renumbers every operand and so requires "rewriting" almost all of the
asm blob, which makes it nearly impossible to understand what's being
changed in even the most minor patches.

Opportunistically improve the code comments.
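
As an illustrative aside (not part of the patch; 'ptr', 'val' and
'scratch' are hypothetical variables), the maintainability difference
between a numbered operand and an explicit register reference:

    unsigned long val, scratch;

    /* Numbered operand: the pointer is "%1" here... */
    asm("mov (%1), %0" : "=r"(val) : "r"(ptr));
    /* ...but becomes "%2" once an output is added, so every use of it
     * in the template has to be renumbered. */
    asm("mov (%2), %0" : "=r"(val), "=r"(scratch) : "r"(ptr));

    /* Explicit register: pin the pointer to RCX via the "c" constraint
     * and name the register directly; adding or removing other operands
     * no longer touches this line. */
    asm("mov (%%rcx), %0" : "=r"(val) : "c"(ptr));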

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx.c | 86 +++++++++++++++++++++++++---------------------
 1 file changed, 47 insertions(+), 39 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index fe5036641c596..5b06a98ffd4cb 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10776,9 +10776,9 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
 		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
 		"push %%" _ASM_CX " \n\t"
-		"cmp %%" _ASM_SP ", %c[host_rsp](%0) \n\t"
+		"cmp %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		"je 1f \n\t"
-		"mov %%" _ASM_SP ", %c[host_rsp](%0) \n\t"
+		"mov %%" _ASM_SP ", %c[host_rsp](%%" _ASM_CX ") \n\t"
 		/* Avoid VMWRITE when Enlightened VMCS is in use */
 		"test %%" _ASM_SI ", %%" _ASM_SI " \n\t"
 		"jz 2f \n\t"
@@ -10788,32 +10788,33 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		__ex(ASM_VMX_VMWRITE_RSP_RDX) "\n\t"
 		"1: \n\t"
 		/* Reload cr2 if changed */
-		"mov %c[cr2](%0), %%" _ASM_AX " \n\t"
+		"mov %c[cr2](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
 		"mov %%cr2, %%" _ASM_DX " \n\t"
 		"cmp %%" _ASM_AX ", %%" _ASM_DX " \n\t"
 		"je 3f \n\t"
 		"mov %%" _ASM_AX", %%cr2 \n\t"
 		"3: \n\t"
 		/* Check if vmlaunch of vmresume is needed */
-		"cmpb $0, %c[launched](%0) \n\t"
+		"cmpb $0, %c[launched](%%" _ASM_CX ") \n\t"
 		/* Load guest registers.  Don't clobber flags. */
-		"mov %c[rax](%0), %%" _ASM_AX " \n\t"
-		"mov %c[rbx](%0), %%" _ASM_BX " \n\t"
-		"mov %c[rdx](%0), %%" _ASM_DX " \n\t"
-		"mov %c[rsi](%0), %%" _ASM_SI " \n\t"
-		"mov %c[rdi](%0), %%" _ASM_DI " \n\t"
-		"mov %c[rbp](%0), %%" _ASM_BP " \n\t"
+		"mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t"
+		"mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t"
+		"mov %c[rdx](%%" _ASM_CX "), %%" _ASM_DX " \n\t"
+		"mov %c[rsi](%%" _ASM_CX "), %%" _ASM_SI " \n\t"
+		"mov %c[rdi](%%" _ASM_CX "), %%" _ASM_DI " \n\t"
+		"mov %c[rbp](%%" _ASM_CX "), %%" _ASM_BP " \n\t"
 #ifdef CONFIG_X86_64
-		"mov %c[r8](%0),  %%r8  \n\t"
-		"mov %c[r9](%0),  %%r9  \n\t"
-		"mov %c[r10](%0), %%r10 \n\t"
-		"mov %c[r11](%0), %%r11 \n\t"
-		"mov %c[r12](%0), %%r12 \n\t"
-		"mov %c[r13](%0), %%r13 \n\t"
-		"mov %c[r14](%0), %%r14 \n\t"
-		"mov %c[r15](%0), %%r15 \n\t"
+		"mov %c[r8](%%" _ASM_CX "),  %%r8  \n\t"
+		"mov %c[r9](%%" _ASM_CX "),  %%r9  \n\t"
+		"mov %c[r10](%%" _ASM_CX "), %%r10 \n\t"
+		"mov %c[r11](%%" _ASM_CX "), %%r11 \n\t"
+		"mov %c[r12](%%" _ASM_CX "), %%r12 \n\t"
+		"mov %c[r13](%%" _ASM_CX "), %%r13 \n\t"
+		"mov %c[r14](%%" _ASM_CX "), %%r14 \n\t"
+		"mov %c[r15](%%" _ASM_CX "), %%r15 \n\t"
 #endif
-		"mov %c[rcx](%0), %%" _ASM_CX " \n\t" /* kills %0 (ecx) */
+		/* Load guest RCX.  This kills the vmx_vcpu pointer! */
+		"mov %c[rcx](%%" _ASM_CX "), %%" _ASM_CX " \n\t"
 
 		/* Enter guest mode */
 		"jne 1f \n\t"
@@ -10821,26 +10822,33 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"jmp 2f \n\t"
 		"1: " __ex(ASM_VMX_VMRESUME) "\n\t"
 		"2: "
-		/* Save guest registers, load host registers, keep flags */
-		"mov %0, %c[wordsize](%%" _ASM_SP ") \n\t"
-		"pop %0 \n\t"
-		"setbe %c[fail](%0)\n\t"
-		"mov %%" _ASM_AX ", %c[rax](%0) \n\t"
-		"mov %%" _ASM_BX ", %c[rbx](%0) \n\t"
-		__ASM_SIZE(pop) " %c[rcx](%0) \n\t"
-		"mov %%" _ASM_DX ", %c[rdx](%0) \n\t"
-		"mov %%" _ASM_SI ", %c[rsi](%0) \n\t"
-		"mov %%" _ASM_DI ", %c[rdi](%0) \n\t"
-		"mov %%" _ASM_BP ", %c[rbp](%0) \n\t"
+
+		/* Save guest's RCX to the stack placeholder (see above) */
+		"mov %%" _ASM_CX ", %c[wordsize](%%" _ASM_SP ") \n\t"
+
+		/* Load host's RCX, i.e. the vmx_vcpu pointer */
+		"pop %%" _ASM_CX " \n\t"
+
+		/* Set vmx->fail based on EFLAGS.{CF,ZF} */
+		"setbe %c[fail](%%" _ASM_CX ")\n\t"
+
+		/* Save all guest registers, including RCX from the stack */
+		"mov %%" _ASM_AX ", %c[rax](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_BX ", %c[rbx](%%" _ASM_CX ") \n\t"
+		__ASM_SIZE(pop) " %c[rcx](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_DX ", %c[rdx](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_SI ", %c[rsi](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_DI ", %c[rdi](%%" _ASM_CX ") \n\t"
+		"mov %%" _ASM_BP ", %c[rbp](%%" _ASM_CX ") \n\t"
 #ifdef CONFIG_X86_64
-		"mov %%r8,  %c[r8](%0) \n\t"
-		"mov %%r9,  %c[r9](%0) \n\t"
-		"mov %%r10, %c[r10](%0) \n\t"
-		"mov %%r11, %c[r11](%0) \n\t"
-		"mov %%r12, %c[r12](%0) \n\t"
-		"mov %%r13, %c[r13](%0) \n\t"
-		"mov %%r14, %c[r14](%0) \n\t"
-		"mov %%r15, %c[r15](%0) \n\t"
+		"mov %%r8,  %c[r8](%%" _ASM_CX ") \n\t"
+		"mov %%r9,  %c[r9](%%" _ASM_CX ") \n\t"
+		"mov %%r10, %c[r10](%%" _ASM_CX ") \n\t"
+		"mov %%r11, %c[r11](%%" _ASM_CX ") \n\t"
+		"mov %%r12, %c[r12](%%" _ASM_CX ") \n\t"
+		"mov %%r13, %c[r13](%%" _ASM_CX ") \n\t"
+		"mov %%r14, %c[r14](%%" _ASM_CX ") \n\t"
+		"mov %%r15, %c[r15](%%" _ASM_CX ") \n\t"
 
 		/*
 		 * Clear all general purpose registers (except RSP, which is loaded by
@@ -10860,7 +10868,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		"xor %%r15d, %%r15d \n\t"
 #endif
 		"mov %%cr2, %%" _ASM_AX "   \n\t"
-		"mov %%" _ASM_AX ", %c[cr2](%0) \n\t"
+		"mov %%" _ASM_AX ", %c[cr2](%%" _ASM_CX ") \n\t"
 
 		"xor %%eax, %%eax \n\t"
 		"xor %%ebx, %%ebx \n\t"
-- 
2.26.0


* [PATCH 4.19 STABLE v2 2/2] KVM: VMX: Mark RCX, RDX and RSI as clobbered in vmx_vcpu_run()'s asm blob
  2020-05-12  0:28 [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference Sean Christopherson
  2020-05-12  0:28 ` [PATCH 4.19 STABLE v2 1/2] KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs Sean Christopherson
@ 2020-05-12  0:28 ` Sean Christopherson
  2020-05-12 11:43 ` [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference Greg Kroah-Hartman
  2 siblings, 0 replies; 4+ messages in thread
From: Sean Christopherson @ 2020-05-12  0:28 UTC (permalink / raw)
  To: stable, Greg Kroah-Hartman, Ben Hutchings, Sasha Levin
  Cc: Paolo Bonzini, linux-kernel, Tobias Urdin

Based on upstream commit f3689e3f17f064fd4cd5f0cb01ae2395c94f39d9.

Save RCX, RDX and RSI to fake outputs to coerce the compiler into
treating them as clobbered.  RCX in particular is likely to be reused by
the compiler to dereference the 'struct vcpu_vmx' pointer, which will
result in a null pointer dereference now that RCX is zeroed by the asm
blob.

Tag the asm() blob as volatile to prevent GCC from dropping the blob,
which is possible now that the blob has output values, all of which are
unused.
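
For context, a sketch of the resulting constraint layout (mirroring the
hunk below; the asm template, remaining symbolic operands and clobbers
are elided): a register that is also an input cannot be put in the
clobber list, so each one is instead tied to a throwaway output, which
tells the compiler its value does not survive the asm.

    asm volatile("..."  /* VM-Enter blob elided */
        : "=c"((int){0}), "=d"((int){0}), "=S"((int){0})  /* fake outputs = clobbers */
        : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp)
          /* ...remaining "i" operands and clobber list elided... */
        );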

Upstream commit f3689e3f17f06 ("KVM: VMX: Save RSI to an unused output
in the vCPU-run asm blob") is not a direct equivalent of this patch. As
its shortlog states, it only tagged RSI as clobbered, whereas here RCX
and RDX are also clobbered.

In upstream at the time of the offending commit (b4be98039a92 in 4.19,
0e0ab73c9a024 upstream), the inline asm blob had previously been moved
to a dedicated helper, __vmx_vcpu_run().  For unrelated reasons,
__vmx_vcpu_run() was put into its own optimization unit, which for all
intents and purposes made it impossible for the compiler to consume the
clobbered registers: RCX, RDX and RSI are volatile (caller-saved)
registers, and __vmx_vcpu_run() couldn't itself be inlined.  In other
words, the bug existed but couldn't be hit.

Similarly, the lack of "volatile" was also a bug in upstream that was
hidden by an unrelated change that exists in upstream but not in 4.19.
In this case, the upstream asm blob also uses ASM_CALL_CONSTRAINT (which
marks RSP as an input/output operand) to play nice with objtool, because
the upstream blob makes a CALL.  In 4.19, there is no CALL and thus no
ASM_CALL_CONSTRAINT.

Furthermore, both of the lurking bugs were blasted away in upstream by
commits 5e0781df1899 ("KVM: VMX: Move vCPU-run code to a proper assembly
routine") and fc2ba5a27a1a ("KVM: VMX: Call vCPU-run asm sub-routine
from C and remove clobbering"), i.e. these bugs will never be directly
fixed in upstream.

Reported-by: Tobias Urdin <tobias.urdin@binero.com>
Fixes: b4be98039a92 ("KVM: VMX: Zero out *all* general purpose registers after VM-Exit")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/kvm/vmx.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 5b06a98ffd4cb..f08c287b62426 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -10771,7 +10771,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 	else if (static_branch_unlikely(&mds_user_clear))
 		mds_clear_cpu_buffers();
 
-	asm(
+	asm volatile (
 		/* Store host registers */
 		"push %%" _ASM_DX "; push %%" _ASM_BP ";"
 		"push %%" _ASM_CX " \n\t" /* placeholder for guest rcx */
@@ -10882,7 +10882,8 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
 		".global vmx_return \n\t"
 		"vmx_return: " _ASM_PTR " 2b \n\t"
 		".popsection"
-	      : : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
+	      : "=c"((int){0}), "=d"((int){0}), "=S"((int){0})
+	      : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp),
 		[launched]"i"(offsetof(struct vcpu_vmx, __launched)),
 		[fail]"i"(offsetof(struct vcpu_vmx, fail)),
 		[host_rsp]"i"(offsetof(struct vcpu_vmx, host_rsp)),
-- 
2.26.0


* Re: [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference
  2020-05-12  0:28 [PATCH 4.19 STABLE v2 0/2] KVM: VMX: Fix null pointer dereference Sean Christopherson
  2020-05-12  0:28 ` [PATCH 4.19 STABLE v2 1/2] KVM: VMX: Explicitly reference RCX as the vmx_vcpu pointer in asm blobs Sean Christopherson
  2020-05-12  0:28 ` [PATCH 4.19 STABLE v2 2/2] KVM: VMX: Mark RCX, RDX and RSI as clobbered in vmx_vcpu_run()'s asm blob Sean Christopherson
@ 2020-05-12 11:43 ` Greg Kroah-Hartman
  2 siblings, 0 replies; 4+ messages in thread
From: Greg Kroah-Hartman @ 2020-05-12 11:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: stable, Ben Hutchings, Sasha Levin, Paolo Bonzini, linux-kernel,
	Tobias Urdin

On Mon, May 11, 2020 at 05:28:13PM -0700, Sean Christopherson wrote:
> A simple fix for a null pointer dereference in vmx_vcpu_run() with an
> ugly-but-safe prereq patch.
> 
> The even uglier ASM_CALL_CONSTRAINT is gone in v2 as I finally figured
> out why vmx_return was undefined: GCC dropped the entire asm blob because
> all outputs were deemed unused.

Both now queued up, thanks.

greg k-h

