linux-kernel.vger.kernel.org archive mirror
* [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load
@ 2020-12-19  6:37 Nathan Chancellor
  2020-12-21 17:48 ` Sean Christopherson
  2020-12-21 18:12 ` Paolo Bonzini
  0 siblings, 2 replies; 4+ messages in thread
From: Nathan Chancellor @ 2020-12-19  6:37 UTC
  To: Paolo Bonzini
  Cc: Tom Lendacky, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, kvm, linux-kernel, clang-built-linux,
	Nick Desaulniers, Sami Tolvanen, Nathan Chancellor

When using LLVM's integrated assembler (LLVM_IAS=1) while building
x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
error occurs:

 $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
 arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
         asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
                      ^
 arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
 #define __ex(x) __kvm_handle_fault_on_reboot(x)
                 ^
 ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
         "666: \n\t"                                                     \
                 ^
 <inline asm>:2:2: note: instantiated into assembly here
         vmsave
         ^
 1 error generated.
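
For context, the __ex() wrapper shown in the trace expands the instruction
through __kvm_handle_fault_on_reboot(), which adds an exception-table fixup
so that a fault on the instruction (e.g. during VM teardown across a
reboot) is redirected to kvm_spurious_fault() instead of oopsing. A
paraphrased sketch of the macro, modeled on the v5.4-v5.10 era definition
in arch/x86/include/asm/kvm_host.h (labels and ordering abbreviated here):

 /* Paraphrased sketch; "insn" is the instruction string, e.g. "vmsave". */
 #define __kvm_handle_fault_on_reboot(insn)			\
 	"666: \n\t"						\
 	insn "\n\t"						\
 	"668: \n\t"						\
 	".pushsection .fixup, \"ax\" \n\t"			\
 	"667: \n\t"						\
 	"call	kvm_spurious_fault \n\t"			\
 	"jmp	668b \n\t"					\
 	".popsection \n\t"					\
 	_ASM_EXTABLE(666b, 667b)

 #define __ex(x) __kvm_handle_fault_on_reboot(x)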

This happens because LLVM currently does not support calling vmsave
without the fixed register operand (%rax for 64-bit and %eax for
32-bit). This will be fixed in LLVM 12, but the kernel currently supports
LLVM 10.0.1 and newer, so this needs to be handled.

Add the proper register using the _ASM_AX macro, which matches the
vmsave call in vmenter.S.
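
For reference, _ASM_AX comes from arch/x86/include/asm/asm.h and resolves
to the mode-appropriate register name; a paraphrased sketch:

 /* Paraphrased from arch/x86/include/asm/asm.h: in C code, _ASM_AX
  * stringifies to the register name for the build target ("rax" on
  * 64-bit, "eax" on 32-bit), so "vmsave %%"_ASM_AX names the fixed
  * register explicitly in both modes.
  */
 #define __ASM_FORM(x)		" " __stringify(x) " "
 #ifdef CONFIG_X86_32
 # define __ASM_SEL(a, b)	__ASM_FORM(a)	/* 32-bit: e-register */
 #else
 # define __ASM_SEL(a, b)	__ASM_FORM(b)	/* 64-bit: r-register */
 #endif
 #define __ASM_REG(reg)		__ASM_SEL(e##reg, r##reg)
 #define _ASM_AX		__ASM_REG(ax)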

Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
Link: https://reviews.llvm.org/D93524
Link: https://github.com/ClangBuiltLinux/linux/issues/1216
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
---
 arch/x86/kvm/svm/sev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e57847ff8bd2..958370758ed0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
 	 * of which one step is to perform a VMLOAD. Since hardware does not
 	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
 	 */
-	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
+	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");
 
 	/*
 	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
-- 
2.30.0.rc0



* Re: [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load
  2020-12-19  6:37 [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load Nathan Chancellor
@ 2020-12-21 17:48 ` Sean Christopherson
  2020-12-21 18:10   ` Sean Christopherson
  2020-12-21 18:12 ` Paolo Bonzini
  1 sibling, 1 reply; 4+ messages in thread
From: Sean Christopherson @ 2020-12-21 17:48 UTC
  To: Nathan Chancellor
  Cc: Paolo Bonzini, Tom Lendacky, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, kvm, linux-kernel, clang-built-linux,
	Nick Desaulniers, Sami Tolvanen

On Fri, Dec 18, 2020, Nathan Chancellor wrote:
> When using LLVM's integrated assembler (LLVM_IAS=1) while building
> x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> error occurs:
> 
>  $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
>  arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
>          asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
>                       ^
>  arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
>  #define __ex(x) __kvm_handle_fault_on_reboot(x)
>                  ^
>  ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
>          "666: \n\t"                                                     \
>                  ^
>  <inline asm>:2:2: note: instantiated into assembly here
>          vmsave
>          ^
>  1 error generated.
> 
> This happens because LLVM currently does not support calling vmsave
> without the fixed register operand (%rax for 64-bit and %eax for
> 32-bit). This will be fixed in LLVM 12, but the kernel currently supports
> LLVM 10.0.1 and newer, so this needs to be handled.
> 
> Add the proper register using the _ASM_AX macro, which matches the
> vmsave call in vmenter.S.

There are also two instances in tools/testing/selftests/kvm/lib/x86_64/svm.c
that likely need to be fixed.
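
Both call sites look roughly like the line below (paraphrased from
tools/testing/selftests/kvm/lib/x86_64/svm.c; exact context may differ)
and would take the same one-line fix:

 /* Paraphrased: the selftests issue the same bare mnemonic, which LLVM's
  * integrated assembler rejects for the same missing-operand reason. The
  * selftest library is 64-bit only, so spelling out %rax would suffice.
  */
 asm volatile ("vmsave\n\t" : : "a" (vmcb_gpa) : "memory");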
 
> Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
> Link: https://reviews.llvm.org/D93524
> Link: https://github.com/ClangBuiltLinux/linux/issues/1216
> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
> ---
>  arch/x86/kvm/svm/sev.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index e57847ff8bd2..958370758ed0 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
>  	 * of which one step is to perform a VMLOAD. Since hardware does not
>  	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
>  	 */
> -	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> +	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");

I vote to add a helper in svm.h to encode VMSAVE, even if there is only the one
user.  Between the rAX behavior (it _must_ be rAX) and taking the HPA of the
VMCB, the semantics of VMSAVE are just odd enough to cause a bit of head
scratching when reading the code for the first time.  E.g. something like:

void vmsave(struct page *vmcb)
{
	/*
	 * VMSAVE takes the HPA of a VMCB in rAX (hardcoded by VMSAVE itself).
	 * The _ASM_AX operand is required to specify the address size, which
	 * means VMSAVE cannot consume a 64-bit address outside of 64-bit mode.
	 */
	hpa_t vmcb_pa = __sme_page_pa(vmcb);

	BUG_ON(!IS_ENABLED(CONFIG_X86_64) && (vmcb_pa >> 32));

	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (vmcb_pa) : "memory");
}
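
The call site in sev_es_vcpu_load() would then reduce to a single
self-documenting line, e.g.:

	/* Hypothetical call site, assuming the vmsave() helper above;
	 * sd->save_area is the per-CPU host save area (a struct page *).
	 */
	vmsave(sd->save_area);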

>  
>  	/*
>  	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
> -- 
> 2.30.0.rc0
> 


* Re: [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load
  2020-12-21 17:48 ` Sean Christopherson
@ 2020-12-21 18:10   ` Sean Christopherson
  0 siblings, 0 replies; 4+ messages in thread
From: Sean Christopherson @ 2020-12-21 18:10 UTC
  To: Nathan Chancellor
  Cc: Paolo Bonzini, Tom Lendacky, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, kvm, linux-kernel, clang-built-linux,
	Nick Desaulniers, Sami Tolvanen, Michael Roth

+Michael, as this will conflict with an in-progress series to use VMSAVE in the
common SVM run path.

https://lkml.kernel.org/r/20201214174127.1398114-1-michael.roth@amd.com

On Mon, Dec 21, 2020, Sean Christopherson wrote:
> On Fri, Dec 18, 2020, Nathan Chancellor wrote:
> > When using LLVM's integrated assembler (LLVM_IAS=1) while building
> > x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> > error occurs:
> > 
> >  $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
> >  arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
> >          asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> >                       ^
> >  arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
> >  #define __ex(x) __kvm_handle_fault_on_reboot(x)
> >                  ^
> >  ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
> >          "666: \n\t"                                                     \
> >                  ^
> >  <inline asm>:2:2: note: instantiated into assembly here
> >          vmsave
> >          ^
> >  1 error generated.
> > 
> > This happens because LLVM currently does not support calling vmsave
> > without the fixed register operand (%rax for 64-bit and %eax for
> > 32-bit). This will be fixed in LLVM 12, but the kernel currently supports
> > LLVM 10.0.1 and newer, so this needs to be handled.
> > 
> > Add the proper register using the _ASM_AX macro, which matches the
> > vmsave call in vmenter.S.
> 
> There are also two instances in tools/testing/selftests/kvm/lib/x86_64/svm.c
> that likely need to be fixed.
>  
> > Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
> > Link: https://reviews.llvm.org/D93524
> > Link: https://github.com/ClangBuiltLinux/linux/issues/1216
> > Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
> > ---
> >  arch/x86/kvm/svm/sev.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index e57847ff8bd2..958370758ed0 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
> >  	 * of which one step is to perform a VMLOAD. Since hardware does not
> >  	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
> >  	 */
> > -	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> > +	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> 
> I vote to add a helper in svm.h to encode VMSAVE, even if there is only the one
> user.  Between the rAX behavior (it _must_ be rAX) and taking the HPA of the
> VMCB, the semantics of VMSAVE are just odd enough to cause a bit of head
> scratching when reading the code for the first time.  E.g. something like:
> 
> void vmsave(struct page *vmcb)
> {
> 	/*
> 	 * VMSAVE takes the HPA of a VMCB in rAX (hardcoded by VMSAVE itself).
> 	 * The _ASM_AX operand is required to specify the address size, which
> 	 * means VMSAVE cannot consume a 64-bit address outside of 64-bit mode.
> 	 */
> 	hpa_t vmcb_pa = __sme_page_pa(vmcb);
> 
> 	BUG_ON(!IS_ENABLED(CONFIG_X86_64) && (vmcb_pa >> 32));
> 
> 	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (vmcb_pa) : "memory");
> }
> 
> >  
> >  	/*
> >  	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
> > -- 
> > 2.30.0.rc0
> > 


* Re: [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load
  2020-12-19  6:37 [PATCH] KVM: SVM: Add register operand to vmsave call in sev_es_vcpu_load Nathan Chancellor
  2020-12-21 17:48 ` Sean Christopherson
@ 2020-12-21 18:12 ` Paolo Bonzini
  1 sibling, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2020-12-21 18:12 UTC
  To: Nathan Chancellor
  Cc: Tom Lendacky, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86, kvm, linux-kernel, clang-built-linux,
	Nick Desaulniers, Sami Tolvanen

On 19/12/20 07:37, Nathan Chancellor wrote:
> When using LLVM's integrated assembler (LLVM_IAS=1) while building
> x86_64_defconfig + CONFIG_KVM=y + CONFIG_KVM_AMD=y, the following build
> error occurs:
> 
>   $ make LLVM=1 LLVM_IAS=1 arch/x86/kvm/svm/sev.o
>   arch/x86/kvm/svm/sev.c:2004:15: error: too few operands for instruction
>           asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
>                        ^
>   arch/x86/kvm/svm/sev.c:28:17: note: expanded from macro '__ex'
>   #define __ex(x) __kvm_handle_fault_on_reboot(x)
>                   ^
>   ./arch/x86/include/asm/kvm_host.h:1646:10: note: expanded from macro '__kvm_handle_fault_on_reboot'
>           "666: \n\t"                                                     \
>                   ^
>   <inline asm>:2:2: note: instantiated into assembly here
>           vmsave
>           ^
>   1 error generated.
> 
> This happens because LLVM currently does not support calling vmsave
> without the fixed register operand (%rax for 64-bit and %eax for
> 32-bit). This will be fixed in LLVM 12, but the kernel currently supports
> LLVM 10.0.1 and newer, so this needs to be handled.
> 
> Add the proper register using the _ASM_AX macro, which matches the
> vmsave call in vmenter.S.
> 
> Fixes: 861377730aa9 ("KVM: SVM: Provide support for SEV-ES vCPU loading")
> Link: https://reviews.llvm.org/D93524
> Link: https://github.com/ClangBuiltLinux/linux/issues/1216
> Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
> ---
>   arch/x86/kvm/svm/sev.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index e57847ff8bd2..958370758ed0 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2001,7 +2001,7 @@ void sev_es_vcpu_load(struct vcpu_svm *svm, int cpu)
>   	 * of which one step is to perform a VMLOAD. Since hardware does not
>   	 * perform a VMSAVE on VMRUN, the host savearea must be updated.
>   	 */
> -	asm volatile(__ex("vmsave") : : "a" (__sme_page_pa(sd->save_area)) : "memory");
> +	asm volatile(__ex("vmsave %%"_ASM_AX) : : "a" (__sme_page_pa(sd->save_area)) : "memory");
>   
>   	/*
>   	 * Certain MSRs are restored on VMEXIT, only save ones that aren't
> 

Queued, thanks.

Paolo

