* [PATCH v3 0/2] SVM: guest state handling adjustments
@ 2018-05-04 15:07 Jan Beulich
  2018-05-04 15:10 ` [PATCH v3 1/2] SVM: re-work VMCB sync-ing Jan Beulich
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Jan Beulich @ 2018-05-04 15:07 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Andrew Cooper, Boris Ostrovsky

Only patch 1 is clearly meant for 4.11. The second patch, however, eliminates
a (theoretical) window the first patch still leaves, so it should at least be considered.
Furthermore, previous discussion suggests that it might even be desirable to fold
both patches into one (or swap their order).

1: re-work VMCB sync-ing
2: introduce a VM entry helper

Signed-off-by: Jan Beulich <jbeulich@suse.com>




* [PATCH v3 1/2] SVM: re-work VMCB sync-ing
  2018-05-04 15:07 [PATCH v3 0/2] SVM: guest state handling adjustments Jan Beulich
@ 2018-05-04 15:10 ` Jan Beulich
  2018-05-04 15:11 ` [PATCH v3 2/2] SVM: introduce a VM entry helper Jan Beulich
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Jan Beulich @ 2018-05-04 15:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Andrew Cooper, Boris Ostrovsky

While the main problem to be addressed here is the issue of what so far
was named "vmcb_in_sync" starting out with the wrong value (should have
been true instead of false, to prevent performing a VMSAVE without ever
having VMLOADed the vCPU's state), go a step further and make the
sync-ed state a tristate: CPU and memory may be in sync or an update
may be required in either direction. Rename the field and introduce an
enum. Callers of svm_sync_vmcb() now indicate the intended new state
(with a slight "anomaly" when requesting VMLOAD: we could store
vmcb_needs_vmsave in those cases as the callers request, but the VMCB
really is in sync at that point, and hence there's no need to VMSAVE in
case we don't make it out to guest context), and all syncing goes
through that function.

With that, there's no need to VMLOAD the state perhaps multiple times;
all that's needed is loading it once before VM entry.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Move conditionals around the svm_sync_vmcb() invocations from
    svm_do_resume() and svm_vmexit_handler() into the function.
v2: Also handle VMLOAD in svm_sync_vmcb(). Add comment to enum
    vmcb_sync_state.
---
I've been considering putting the VMLOAD invocation in
svm_asid_handle_vmrun() (instead of the two copies in svm_do_resume()
and svm_vmexit_handler()), but that seemed a little too abusive of the
function. See patch 2.
I'm also not really certain about svm_vmexit_do_vmload(): All I'm doing
here is a 1:1 change from previous behavior, but I'm unconvinced this
was/is really correct.
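
For illustration only (not part of the patch), the intended transitions can be
modelled by a stand-alone C program, with printf() standing in for the real
svm_vmload()/svm_vmsave(); the authoritative logic is the svm_sync_vmcb() hunk
below:

#include <stdio.h>

enum vmcb_sync_state { vmcb_in_sync, vmcb_needs_vmsave, vmcb_needs_vmload };

/* Starts out "needs VMLOAD", as construct_vmcb() now sets it. */
static enum vmcb_sync_state sync_state = vmcb_needs_vmload;

static void model_sync_vmcb(enum vmcb_sync_state new_state)
{
    if ( new_state == vmcb_needs_vmsave )
    {
        /* VM entry path: load pending in-memory edits into the CPU. */
        if ( sync_state == vmcb_needs_vmload )
        {
            printf("VMLOAD\n");           /* svm_vmload() in the real code */
            sync_state = vmcb_in_sync;    /* the "anomaly" described above */
        }
    }
    else
    {
        /* Caller wants the in-memory VMCB current, to read or modify it. */
        if ( sync_state == vmcb_needs_vmsave )
            printf("VMSAVE\n");           /* svm_vmsave() in the real code */

        /* Never drop a pending "needs VMLOAD" request. */
        if ( sync_state != vmcb_needs_vmload )
            sync_state = new_state;
    }
}

int main(void)
{
    model_sync_vmcb(vmcb_in_sync);      /* e.g. svm_get_segment_register() */
    model_sync_vmcb(vmcb_needs_vmload); /* e.g. svm_set_segment_register() */
    model_sync_vmcb(vmcb_needs_vmsave); /* VM entry: prints "VMLOAD" */

    return 0;
}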

--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -112,7 +112,6 @@ UNLIKELY_END(svm_trace)
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
         mov  VCPU_svm_vmcb(%rbx),%rcx
-        movb $0,VCPU_svm_vmcb_in_sync(%rbx)
         mov  VMCB_rax(%rcx),%rax
         mov  %rax,UREGS_rax(%rsp)
         mov  VMCB_rip(%rcx),%rax
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -680,16 +680,26 @@ static void svm_cpuid_policy_changed(str
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
 }
 
-static void svm_sync_vmcb(struct vcpu *v)
+static void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
 {
     struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
 
-    if ( arch_svm->vmcb_in_sync )
-        return;
-
-    arch_svm->vmcb_in_sync = 1;
+    if ( new_state == vmcb_needs_vmsave )
+    {
+        if ( arch_svm->vmcb_sync_state == vmcb_needs_vmload )
+        {
+            svm_vmload(arch_svm->vmcb);
+            arch_svm->vmcb_sync_state = vmcb_in_sync;
+        }
+    }
+    else
+    {
+        if ( arch_svm->vmcb_sync_state == vmcb_needs_vmsave )
+            svm_vmsave(arch_svm->vmcb);
 
-    svm_vmsave(arch_svm->vmcb);
+        if ( arch_svm->vmcb_sync_state != vmcb_needs_vmload )
+            arch_svm->vmcb_sync_state = new_state;
+    }
 }
 
 static unsigned int svm_get_cpl(struct vcpu *v)
@@ -707,7 +717,7 @@ static void svm_get_segment_register(str
     switch ( seg )
     {
     case x86_seg_fs ... x86_seg_gs:
-        svm_sync_vmcb(v);
+        svm_sync_vmcb(v, vmcb_in_sync);
 
         /* Fallthrough. */
     case x86_seg_es ... x86_seg_ds:
@@ -718,7 +728,7 @@ static void svm_get_segment_register(str
         break;
 
     case x86_seg_tr:
-        svm_sync_vmcb(v);
+        svm_sync_vmcb(v, vmcb_in_sync);
         *reg = vmcb->tr;
         break;
 
@@ -731,7 +741,7 @@ static void svm_get_segment_register(str
         break;
 
     case x86_seg_ldtr:
-        svm_sync_vmcb(v);
+        svm_sync_vmcb(v, vmcb_in_sync);
         *reg = vmcb->ldtr;
         break;
 
@@ -746,7 +756,6 @@ static void svm_set_segment_register(str
                                      struct segment_register *reg)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    bool sync = false;
 
     ASSERT((v == current) || !vcpu_runnable(v));
 
@@ -768,7 +777,8 @@ static void svm_set_segment_register(str
     case x86_seg_gs:
     case x86_seg_tr:
     case x86_seg_ldtr:
-        sync = (v == current);
+        if ( v == current )
+            svm_sync_vmcb(v, vmcb_needs_vmload);
         break;
 
     default:
@@ -777,9 +787,6 @@ static void svm_set_segment_register(str
         return;
     }
 
-    if ( sync )
-        svm_sync_vmcb(v);
-
     switch ( seg )
     {
     case x86_seg_ss:
@@ -813,9 +820,6 @@ static void svm_set_segment_register(str
         ASSERT_UNREACHABLE();
         break;
     }
-
-    if ( sync )
-        svm_vmload(vmcb);
 }
 
 static unsigned long svm_get_shadow_gs_base(struct vcpu *v)
@@ -1086,7 +1090,7 @@ static void svm_ctxt_switch_from(struct
     svm_lwp_save(v);
     svm_tsc_ratio_save(v);
 
-    svm_sync_vmcb(v);
+    svm_sync_vmcb(v, vmcb_needs_vmload);
     svm_vmload_pa(per_cpu(host_vmcb, cpu));
 
     /* Resume use of ISTs now that the host TR is reinstated. */
@@ -1114,7 +1118,6 @@ static void svm_ctxt_switch_to(struct vc
     svm_restore_dr(v);
 
     svm_vmsave_pa(per_cpu(host_vmcb, cpu));
-    svm_vmload(vmcb);
     vmcb->cleanbits.bytes = 0;
     svm_lwp_load(v);
     svm_tsc_ratio_load(v);
@@ -1168,6 +1171,8 @@ static void noreturn svm_do_resume(struc
 
     hvm_do_resume(v);
 
+    svm_sync_vmcb(v, vmcb_needs_vmsave);
+
     reset_stack_and_jump(svm_asm_do_resume);
 }
 
@@ -1895,7 +1900,7 @@ static int svm_msr_read_intercept(unsign
     case MSR_FS_BASE:
     case MSR_GS_BASE:
     case MSR_SHADOW_GS_BASE:
-        svm_sync_vmcb(v);
+        svm_sync_vmcb(v, vmcb_in_sync);
         break;
     }
 
@@ -2067,7 +2072,6 @@ static int svm_msr_write_intercept(unsig
     int ret, result = X86EMUL_OKAY;
     struct vcpu *v = current;
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    bool sync = false;
 
     switch ( msr )
     {
@@ -2081,13 +2085,10 @@ static int svm_msr_write_intercept(unsig
     case MSR_FS_BASE:
     case MSR_GS_BASE:
     case MSR_SHADOW_GS_BASE:
-        sync = true;
+        svm_sync_vmcb(v, vmcb_needs_vmload);
         break;
     }
 
-    if ( sync )
-        svm_sync_vmcb(v);
-
     switch ( msr )
     {
     case MSR_IA32_SYSENTER_ESP:
@@ -2261,9 +2262,6 @@ static int svm_msr_write_intercept(unsig
         break;
     }
 
-    if ( sync )
-        svm_vmload(vmcb);
-
     return result;
 
  gpf:
@@ -2413,7 +2411,7 @@ svm_vmexit_do_vmload(struct vmcb_struct
     put_page(page);
 
     /* State in L1 VMCB is stale now */
-    v->arch.hvm_svm.vmcb_in_sync = 0;
+    v->arch.hvm_svm.vmcb_sync_state = vmcb_needs_vmsave;
 
     __update_guest_eip(regs, inst_len);
 }
@@ -2623,6 +2621,7 @@ void svm_vmexit_handler(struct cpu_user_
     bool_t vcpu_guestmode = 0;
     struct vlapic *vlapic = vcpu_vlapic(v);
 
+    v->arch.hvm_svm.vmcb_sync_state = vmcb_needs_vmsave;
     hvm_invalidate_regs_fields(regs);
 
     if ( paging_mode_hap(v->domain) )
@@ -3109,6 +3108,8 @@ void svm_vmexit_handler(struct cpu_user_
     }
 
   out:
+    svm_sync_vmcb(v, vmcb_needs_vmsave);
+
     if ( vcpu_guestmode || vlapic_hw_disabled(vlapic) )
         return;
 
@@ -3117,6 +3118,7 @@ void svm_vmexit_handler(struct cpu_user_
     intr.fields.tpr =
         (vlapic_get_reg(vlapic, APIC_TASKPRI) & 0xFF) >> 4;
     vmcb_set_vintr(vmcb, intr);
+    ASSERT(v->arch.hvm_svm.vmcb_sync_state != vmcb_needs_vmload);
 }
 
 void svm_trace_vmentry(void)
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -84,6 +84,8 @@ static int construct_vmcb(struct vcpu *v
                              CR_INTERCEPT_CR8_READ |
                              CR_INTERCEPT_CR8_WRITE);
 
+    arch_svm->vmcb_sync_state = vmcb_needs_vmload;
+
     /* I/O and MSR permission bitmaps. */
     arch_svm->msrpm = alloc_xenheap_pages(get_order_from_bytes(MSRPM_SIZE), 0);
     if ( arch_svm->msrpm == NULL )
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -102,7 +102,6 @@ void __dummy__(void)
 
     OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
     OFFSET(VCPU_svm_vmcb, struct vcpu, arch.hvm_svm.vmcb);
-    OFFSET(VCPU_svm_vmcb_in_sync, struct vcpu, arch.hvm_svm.vmcb_in_sync);
     BLANK();
 
     OFFSET(VCPU_vmx_launched, struct vcpu, arch.hvm_vmx.launched);
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -495,12 +495,28 @@ struct vmcb_struct {
 struct svm_domain {
 };
 
+/*
+ * VMRUN doesn't switch fs/gs/tr/ldtr and SHADOWGS/SYSCALL/SYSENTER state.
+ * Therefore, guest state is in the hardware registers when servicing a
+ * VMExit.
+ *
+ * Immediately after a VMExit, the vmcb is stale, and needs to be brought
+ * into sync by VMSAVE.  If state in the vmcb is modified, a VMLOAD is
+ * needed before the following VMRUN.
+ */
+enum vmcb_sync_state {
+    vmcb_in_sync,
+    vmcb_needs_vmsave,    /* VMCB out of sync (VMSAVE needed)? */
+    vmcb_needs_vmload     /* VMCB dirty (VMLOAD needed)? */
+};
+
 struct arch_svm_struct {
     struct vmcb_struct *vmcb;
     u64    vmcb_pa;
     unsigned long *msrpm;
     int    launch_core;
-    bool_t vmcb_in_sync;    /* VMCB sync'ed with VMSAVE? */
+
+    uint8_t vmcb_sync_state; /* enum vmcb_sync_state */
 
     /* VMCB has a cached instruction from #PF/#NPF Decode Assist? */
     uint8_t cached_insn_len; /* Zero if no cached instruction. */





* [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-04 15:07 [PATCH v3 0/2] SVM: guest state handling adjustments Jan Beulich
  2018-05-04 15:10 ` [PATCH v3 1/2] SVM: re-work VMCB sync-ing Jan Beulich
@ 2018-05-04 15:11 ` Jan Beulich
  2018-05-07 14:11   ` Jan Beulich
  2018-05-04 17:52 ` [PATCH v3 0/2] SVM: guest state handling adjustments Andrew Cooper
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-05-04 15:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Juergen Gross, Andrew Cooper, Boris Ostrovsky

Neither the register values copying nor the trace entry generation needs
doing in assembly. The VMLOAD invocation can also be further deferred
(and centralized). Therefore replace the svm_asid_handle_vmrun()
invocation with one of the new helper.

Similarly move the VM exit side register value copying into
svm_vmexit_handler().

Now that we always make it out to guest context after VMLOAD,
svm_sync_vmcb() no longer overrides vmcb_needs_vmsave, making
svm_vmexit_handler() setting the field early unnecessary.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: svm_vmexit_handler() no longer explicitly sets vmcb_sync_state, and
    svm_sync_vmcb() no longer converts a needs-vmsave request into
    in-sync state. Also move the svm_trace_vmentry() invocation to C.
v2: New.
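
For illustration only (stub types, not Xen's): the register hand-off that this
patch moves into C boils down to the following, mirroring the
svm_vmenter_helper() and svm_vmexit_handler() hunks below:

#include <assert.h>
#include <stdint.h>

#define X86_EFLAGS_MBS (1u << 1)   /* "must be set" flag, forced on VM entry */

struct regs { uint64_t rax, rip, rsp, rflags; };  /* cpu_user_regs stand-in */
struct vmcb { uint64_t rax, rip, rsp, rflags; };  /* vmcb_struct stand-in   */

/* What svm_vmenter_helper() does for these four fields before VMRUN. */
static void enter_copy(const struct regs *regs, struct vmcb *vmcb)
{
    vmcb->rax    = regs->rax;
    vmcb->rip    = regs->rip;
    vmcb->rsp    = regs->rsp;
    vmcb->rflags = regs->rflags | X86_EFLAGS_MBS;
}

/* What svm_vmexit_handler() does for them right after #VMEXIT. */
static void exit_copy(struct regs *regs, const struct vmcb *vmcb)
{
    regs->rax    = vmcb->rax;
    regs->rip    = vmcb->rip;
    regs->rsp    = vmcb->rsp;
    regs->rflags = vmcb->rflags;
}

int main(void)
{
    struct regs r = { .rax = 1, .rip = 2, .rsp = 3, .rflags = 0x202 };
    struct vmcb v = { 0 };

    enter_copy(&r, &v);
    exit_copy(&r, &v);
    assert(r.rflags & X86_EFLAGS_MBS);

    return 0;
}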

--- a/xen/arch/x86/hvm/svm/entry.S
+++ b/xen/arch/x86/hvm/svm/entry.S
@@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
         jmp  .Lsvm_do_resume
 __UNLIKELY_END(nsvm_hap)
 
-        call svm_asid_handle_vmrun
-
-        cmpb $0,tb_init_done(%rip)
-UNLIKELY_START(nz, svm_trace)
-        call svm_trace_vmentry
-UNLIKELY_END(svm_trace)
-
-        mov  VCPU_svm_vmcb(%rbx),%rcx
-        mov  UREGS_rax(%rsp),%rax
-        mov  %rax,VMCB_rax(%rcx)
-        mov  UREGS_rip(%rsp),%rax
-        mov  %rax,VMCB_rip(%rcx)
-        mov  UREGS_rsp(%rsp),%rax
-        mov  %rax,VMCB_rsp(%rcx)
-        mov  UREGS_eflags(%rsp),%rax
-        or   $X86_EFLAGS_MBS,%rax
-        mov  %rax,VMCB_rflags(%rcx)
+        mov  %rsp, %rdi
+        call svm_vmenter_helper
 
         mov VCPU_arch_msr(%rbx), %rax
         mov VCPUMSR_spec_ctrl_raw(%rax), %eax
@@ -111,16 +96,6 @@ UNLIKELY_END(svm_trace)
         SPEC_CTRL_ENTRY_FROM_VMEXIT /* Req: b=curr %rsp=regs/cpuinfo, Clob: acd */
         /* WARNING! `ret`, `call *`, `jmp *` not safe before this point. */
 
-        mov  VCPU_svm_vmcb(%rbx),%rcx
-        mov  VMCB_rax(%rcx),%rax
-        mov  %rax,UREGS_rax(%rsp)
-        mov  VMCB_rip(%rcx),%rax
-        mov  %rax,UREGS_rip(%rsp)
-        mov  VMCB_rsp(%rcx),%rax
-        mov  %rax,UREGS_rsp(%rsp)
-        mov  VMCB_rflags(%rcx),%rax
-        mov  %rax,UREGS_eflags(%rsp)
-
         STGI
 GLOBAL(svm_stgi_label)
         mov  %rsp,%rdi
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -687,10 +687,9 @@ static void svm_sync_vmcb(struct vcpu *v
     if ( new_state == vmcb_needs_vmsave )
     {
         if ( arch_svm->vmcb_sync_state == vmcb_needs_vmload )
-        {
             svm_vmload(arch_svm->vmcb);
-            arch_svm->vmcb_sync_state = vmcb_in_sync;
-        }
+
+        arch_svm->vmcb_sync_state = new_state;
     }
     else
     {
@@ -1171,11 +1170,29 @@ static void noreturn svm_do_resume(struc
 
     hvm_do_resume(v);
 
-    svm_sync_vmcb(v, vmcb_needs_vmsave);
-
     reset_stack_and_jump(svm_asm_do_resume);
 }
 
+void svm_vmenter_helper(const struct cpu_user_regs *regs)
+{
+    struct vcpu *curr = current;
+    struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+
+    svm_asid_handle_vmrun();
+
+    if ( unlikely(tb_init_done) )
+        HVMTRACE_ND(VMENTRY,
+                    nestedhvm_vcpu_in_guestmode(curr) ? TRC_HVM_NESTEDFLAG : 0,
+                    1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
+
+    svm_sync_vmcb(curr, vmcb_needs_vmsave);
+
+    vmcb->rax = regs->rax;
+    vmcb->rip = regs->rip;
+    vmcb->rsp = regs->rsp;
+    vmcb->rflags = regs->rflags | X86_EFLAGS_MBS;
+}
+
 static void svm_guest_osvw_init(struct vcpu *vcpu)
 {
     if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD )
@@ -2621,7 +2638,11 @@ void svm_vmexit_handler(struct cpu_user_
     bool_t vcpu_guestmode = 0;
     struct vlapic *vlapic = vcpu_vlapic(v);
 
-    v->arch.hvm_svm.vmcb_sync_state = vmcb_needs_vmsave;
+    regs->rax = vmcb->rax;
+    regs->rip = vmcb->rip;
+    regs->rsp = vmcb->rsp;
+    regs->rflags = vmcb->rflags;
+
     hvm_invalidate_regs_fields(regs);
 
     if ( paging_mode_hap(v->domain) )
@@ -3108,8 +3129,6 @@ void svm_vmexit_handler(struct cpu_user_
     }
 
   out:
-    svm_sync_vmcb(v, vmcb_needs_vmsave);
-
     if ( vcpu_guestmode || vlapic_hw_disabled(vlapic) )
         return;
 
@@ -3118,17 +3137,8 @@ void svm_vmexit_handler(struct cpu_user_
     intr.fields.tpr =
         (vlapic_get_reg(vlapic, APIC_TASKPRI) & 0xFF) >> 4;
     vmcb_set_vintr(vmcb, intr);
-    ASSERT(v->arch.hvm_svm.vmcb_sync_state != vmcb_needs_vmload);
 }
 
-void svm_trace_vmentry(void)
-{
-    struct vcpu *curr = current;
-    HVMTRACE_ND(VMENTRY,
-                nestedhvm_vcpu_in_guestmode(curr) ? TRC_HVM_NESTEDFLAG : 0,
-                1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
-}
-  
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -119,12 +119,6 @@ void __dummy__(void)
     OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.is_32bit_pv);
     BLANK();
 
-    OFFSET(VMCB_rax, struct vmcb_struct, rax);
-    OFFSET(VMCB_rip, struct vmcb_struct, rip);
-    OFFSET(VMCB_rsp, struct vmcb_struct, rsp);
-    OFFSET(VMCB_rflags, struct vmcb_struct, rflags);
-    BLANK();
-
     OFFSET(VCPUINFO_upcall_pending, struct vcpu_info, evtchn_upcall_pending);
     OFFSET(VCPUINFO_upcall_mask, struct vcpu_info, evtchn_upcall_mask);
     BLANK();
--- a/xen/include/asm-x86/hvm/svm/asid.h
+++ b/xen/include/asm-x86/hvm/svm/asid.h
@@ -23,6 +23,7 @@
 #include <asm/processor.h>
 
 void svm_asid_init(const struct cpuinfo_x86 *c);
+void svm_asid_handle_vmrun(void);
 
 static inline void svm_asid_g_invlpg(struct vcpu *v, unsigned long g_vaddr)
 {





* Re: [PATCH v3 0/2] SVM: guest state handling adjustments
  2018-05-04 15:07 [PATCH v3 0/2] SVM: guest state handling adjustments Jan Beulich
  2018-05-04 15:10 ` [PATCH v3 1/2] SVM: re-work VMCB sync-ing Jan Beulich
  2018-05-04 15:11 ` [PATCH v3 2/2] SVM: introduce a VM entry helper Jan Beulich
@ 2018-05-04 17:52 ` Andrew Cooper
  2018-05-04 18:38 ` Boris Ostrovsky
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Andrew Cooper @ 2018-05-04 17:52 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Juergen Gross, Boris Ostrovsky

On 04/05/18 16:07, Jan Beulich wrote:
> Only patch 1 is clearly meant for 4.11. The second patch, however, eliminates
> a (theoretical) window the first patch still leaves, so should at least be considered.
> Furthermore previous discussion suggests that it might even be desirable to fold
> both patches into one (or swap their order).
>
> 1: re-work VMCB sync-ing
> 2: introduce a VM entry helper
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As this is fixing a real bug and we're getting quite late in 4.11 at
this point, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'm still not happy with the API, and especially that
"svm_sync_vmcb(curr, vmcb_needs_vmsave);" in patch two does not do the
intuitive thing.  That said, I'm going to need to rewrite this anyway in
4.12 to get the MSR infrastructure working, so this code isn't going to
stay long.

~Andrew


* Re: [PATCH v3 0/2] SVM: guest state handling adjustments
  2018-05-04 15:07 [PATCH v3 0/2] SVM: guest state handling adjustments Jan Beulich
                   ` (2 preceding siblings ...)
  2018-05-04 17:52 ` [PATCH v3 0/2] SVM: guest state handling adjustments Andrew Cooper
@ 2018-05-04 18:38 ` Boris Ostrovsky
       [not found] ` <5AEC77E802000078001C0BEB@suse.com>
       [not found] ` <5AEC782902000078001C0BEE@suse.com>
  5 siblings, 0 replies; 15+ messages in thread
From: Boris Ostrovsky @ 2018-05-04 18:38 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Juergen Gross, Andrew Cooper

On 05/04/2018 11:07 AM, Jan Beulich wrote:
> Only patch 1 is clearly meant for 4.11. The second patch, however, eliminates
> a (theoretical) window the first patch still leaves, so should at least be considered.
> Furthermore previous discussion suggests that it might even be desirable to fold
> both patches into one (or swap their order).
>
> 1: re-work VMCB sync-ing
> 2: introduce a VM entry helper
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v3 1/2] SVM: re-work VMCB sync-ing
       [not found] ` <5AEC77E802000078001C0BEB@suse.com>
@ 2018-05-07  6:30   ` Juergen Gross
  0 siblings, 0 replies; 15+ messages in thread
From: Juergen Gross @ 2018-05-07  6:30 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky

On 04/05/18 17:10, Jan Beulich wrote:
> While the main problem to be addressed here is the issue of what so far
> was named "vmcb_in_sync" starting out with the wrong value (should have
> been true instead of false, to prevent performing a VMSAVE without ever
> having VMLOADed the vCPU's state), go a step further and make the
> sync-ed state a tristate: CPU and memory may be in sync or an update
> may be required in either direction. Rename the field and introduce an
> enum. Callers of svm_sync_vmcb() now indicate the intended new state
> (with a slight "anomaly" when requesting VMLOAD: we could store
> vmcb_needs_vmsave in those cases as the callers request, but the VMCB
> really is in sync at that point, and hence there's no need to VMSAVE in
> case we don't make it out to guest context), and all syncing goes
> through that function.
> 
> With that, there's no need to VMLOAD the state perhaps multiple times;
> all that's needed is loading it once before VM entry.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Release-acked-by: Juergen Gross <jgross@suse.com>


Juergen


* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
       [not found] ` <5AEC782902000078001C0BEE@suse.com>
@ 2018-05-07  6:31   ` Juergen Gross
  0 siblings, 0 replies; 15+ messages in thread
From: Juergen Gross @ 2018-05-07  6:31 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky

On 04/05/18 17:11, Jan Beulich wrote:
> Neither the register values copying nor the trace entry generation need
> doing in assembly. The VMLOAD invocation can also be further deferred
> (and centralized). Therefore replace the svm_asid_handle_vmrun()
> invocation with one of the new helper.
> 
> Similarly move the VM exit side register value copying into
> svm_vmexit_handler().
> 
> Now that we always make it out to guest context after VMLOAD,
> svm_sync_vmcb() no longer overrides vmcb_needs_vmsave, making
> svm_vmexit_handler() setting the field early unnecessary.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Release-acked-by: Juergen Gross <jgross@suse.com>


Juergen


* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-04 15:11 ` [PATCH v3 2/2] SVM: introduce a VM entry helper Jan Beulich
@ 2018-05-07 14:11   ` Jan Beulich
  2018-05-07 14:19     ` Andrew Cooper
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-05-07 14:11 UTC (permalink / raw)
  To: Andrew Cooper, Boris Ostrovsky; +Cc: Juergen Gross, xen-devel

>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
> --- a/xen/arch/x86/hvm/svm/entry.S
> +++ b/xen/arch/x86/hvm/svm/entry.S
> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>          jmp  .Lsvm_do_resume
>  __UNLIKELY_END(nsvm_hap)
>  
> -        call svm_asid_handle_vmrun
> -
> -        cmpb $0,tb_init_done(%rip)
> -UNLIKELY_START(nz, svm_trace)
> -        call svm_trace_vmentry
> -UNLIKELY_END(svm_trace)
> -
> -        mov  VCPU_svm_vmcb(%rbx),%rcx
> -        mov  UREGS_rax(%rsp),%rax
> -        mov  %rax,VMCB_rax(%rcx)
> -        mov  UREGS_rip(%rsp),%rax
> -        mov  %rax,VMCB_rip(%rcx)
> -        mov  UREGS_rsp(%rsp),%rax
> -        mov  %rax,VMCB_rsp(%rcx)
> -        mov  UREGS_eflags(%rsp),%rax
> -        or   $X86_EFLAGS_MBS,%rax
> -        mov  %rax,VMCB_rflags(%rcx)
> +        mov  %rsp, %rdi
> +        call svm_vmenter_helper

While I had committed this earlier today, there's one concern I've just come
to think of: Now that we're calling into C land with CLGI in effect (for more
than just the trivial svm_trace_vmentry()) we are at risk of confusing
parties using local_irq_is_enabled(), first and foremost
common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
whether the call wouldn't better be framed by a CLI/STI pair.
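
Purely to illustrate the concern (a user-space sketch, assuming nothing beyond
the fact that such checks key off EFLAGS.IF, which CLGI alone never clears):

#include <stdbool.h>
#include <stdio.h>

/* Roughly what local_irq_is_enabled() boils down to. */
static bool eflags_if_set(void)
{
    unsigned long flags;

    asm volatile ( "pushfq; popq %0" : "=r" (flags) );

    return flags & (1UL << 9);    /* X86_EFLAGS_IF */
}

int main(void)
{
    /*
     * Inside a CLGI region (with no CLI issued) this would still report 1,
     * which is what could confuse check_lock()-style assertions.
     */
    printf("IF=%d\n", eflags_if_set());

    return 0;
}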

Jan




* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 14:11   ` Jan Beulich
@ 2018-05-07 14:19     ` Andrew Cooper
  2018-05-07 15:25       ` Jan Beulich
  0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2018-05-07 14:19 UTC (permalink / raw)
  To: Jan Beulich, Boris Ostrovsky; +Cc: Juergen Gross, xen-devel

On 07/05/18 15:11, Jan Beulich wrote:
>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>> --- a/xen/arch/x86/hvm/svm/entry.S
>> +++ b/xen/arch/x86/hvm/svm/entry.S
>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>          jmp  .Lsvm_do_resume
>>  __UNLIKELY_END(nsvm_hap)
>>  
>> -        call svm_asid_handle_vmrun
>> -
>> -        cmpb $0,tb_init_done(%rip)
>> -UNLIKELY_START(nz, svm_trace)
>> -        call svm_trace_vmentry
>> -UNLIKELY_END(svm_trace)
>> -
>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>> -        mov  UREGS_rax(%rsp),%rax
>> -        mov  %rax,VMCB_rax(%rcx)
>> -        mov  UREGS_rip(%rsp),%rax
>> -        mov  %rax,VMCB_rip(%rcx)
>> -        mov  UREGS_rsp(%rsp),%rax
>> -        mov  %rax,VMCB_rsp(%rcx)
>> -        mov  UREGS_eflags(%rsp),%rax
>> -        or   $X86_EFLAGS_MBS,%rax
>> -        mov  %rax,VMCB_rflags(%rcx)
>> +        mov  %rsp, %rdi
>> +        call svm_vmenter_helper
> While I had committed this earlier today, there's one concern I've just come
> to think of: Now that we're calling into C land with CLGI in effect (for more
> than just the trivial svm_trace_vmentry()) we are at risk of confusing
> parties using local_irq_is_enabled(), first and foremost
> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
> whether the call wouldn't better be framed by a CLI/STI pair.

I can't see why the SVM vmentry path uses CLGI/STGI in the first place.

The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
Furthermore, processing NMIs/MCEs at this point will be more efficient
than taking a vmentry then immediately exiting again.

As for running with interrupts disabled, that is already the case on the
VMX side, and I don't see why the SVM side needs to be different.

~Andrew


* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 14:19     ` Andrew Cooper
@ 2018-05-07 15:25       ` Jan Beulich
  2018-05-07 15:29         ` Andrew Cooper
  0 siblings, 1 reply; 15+ messages in thread
From: Jan Beulich @ 2018-05-07 15:25 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Juergen Gross, xen-devel, Boris Ostrovsky

>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
> On 07/05/18 15:11, Jan Beulich wrote:
>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>          jmp  .Lsvm_do_resume
>>>  __UNLIKELY_END(nsvm_hap)
>>>  
>>> -        call svm_asid_handle_vmrun
>>> -
>>> -        cmpb $0,tb_init_done(%rip)
>>> -UNLIKELY_START(nz, svm_trace)
>>> -        call svm_trace_vmentry
>>> -UNLIKELY_END(svm_trace)
>>> -
>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>> -        mov  UREGS_rax(%rsp),%rax
>>> -        mov  %rax,VMCB_rax(%rcx)
>>> -        mov  UREGS_rip(%rsp),%rax
>>> -        mov  %rax,VMCB_rip(%rcx)
>>> -        mov  UREGS_rsp(%rsp),%rax
>>> -        mov  %rax,VMCB_rsp(%rcx)
>>> -        mov  UREGS_eflags(%rsp),%rax
>>> -        or   $X86_EFLAGS_MBS,%rax
>>> -        mov  %rax,VMCB_rflags(%rcx)
>>> +        mov  %rsp, %rdi
>>> +        call svm_vmenter_helper
>> While I had committed this earlier today, there's one concern I've just come
>> to think of: Now that we're calling into C land with CLGI in effect (for 
>> more
>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>> parties using local_irq_is_enabled(), first and foremost
>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>> whether the call wouldn't better be framed by a CLI/STI pair.
> 
> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
> 
> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
> Furthermore, processing NMIs/MCEs at this point will be more efficient
> that taking a vmentry then immediately exiting again.

Perhaps you're right, i.e. we could replace all current CLGI/STGI by
CLI/STI, adding a single STGI right after VMRUN.

> As for running with interrupts disabled, that is already the case on the
> VMX side, and I don't see why the SVM side needs to be different.

Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
was just that this state of interrupts being disabled can't be observed by
users of the normal infrastructure (inspecting EFLAGS.IF).

Jan




* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 15:25       ` Jan Beulich
@ 2018-05-07 15:29         ` Andrew Cooper
  2018-05-07 15:46           ` Boris Ostrovsky
  0 siblings, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2018-05-07 15:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Juergen Gross, xen-devel, Boris Ostrovsky

On 07/05/18 16:25, Jan Beulich wrote:
>>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
>> On 07/05/18 15:11, Jan Beulich wrote:
>>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>>          jmp  .Lsvm_do_resume
>>>>  __UNLIKELY_END(nsvm_hap)
>>>>  
>>>> -        call svm_asid_handle_vmrun
>>>> -
>>>> -        cmpb $0,tb_init_done(%rip)
>>>> -UNLIKELY_START(nz, svm_trace)
>>>> -        call svm_trace_vmentry
>>>> -UNLIKELY_END(svm_trace)
>>>> -
>>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>>> -        mov  UREGS_rax(%rsp),%rax
>>>> -        mov  %rax,VMCB_rax(%rcx)
>>>> -        mov  UREGS_rip(%rsp),%rax
>>>> -        mov  %rax,VMCB_rip(%rcx)
>>>> -        mov  UREGS_rsp(%rsp),%rax
>>>> -        mov  %rax,VMCB_rsp(%rcx)
>>>> -        mov  UREGS_eflags(%rsp),%rax
>>>> -        or   $X86_EFLAGS_MBS,%rax
>>>> -        mov  %rax,VMCB_rflags(%rcx)
>>>> +        mov  %rsp, %rdi
>>>> +        call svm_vmenter_helper
>>> While I had committed this earlier today, there's one concern I've just come
>>> to think of: Now that we're calling into C land with CLGI in effect (for 
>>> more
>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>> parties using local_irq_is_enabled(), first and foremost
>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>> whether the call wouldn't better be framed by a CLI/STI pair.
>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>
>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>> that taking a vmentry then immediately exiting again.
> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
> CLI/STI, adding a single STGI right after VMRUN.

We want to retain the one STGI on the svm_stgi_label, but I think all
other CLGI/STGI should be downgraded to CLI/STI.

>
>> As for running with interrupts disabled, that is already the case on the
>> VMX side, and I don't see why the SVM side needs to be different.
> Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
> was just that this state of interrupts being disabled can't be observed by
> users of the normal infrastructure (inspecting EFLAGS.IF).

Ah ok.

~Andrew


* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 15:29         ` Andrew Cooper
@ 2018-05-07 15:46           ` Boris Ostrovsky
  2018-05-07 15:47             ` Jan Beulich
  2018-05-07 15:49             ` Andrew Cooper
  0 siblings, 2 replies; 15+ messages in thread
From: Boris Ostrovsky @ 2018-05-07 15:46 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich; +Cc: Juergen Gross, xen-devel

On 05/07/2018 11:29 AM, Andrew Cooper wrote:
> On 07/05/18 16:25, Jan Beulich wrote:
>>>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
>>> On 07/05/18 15:11, Jan Beulich wrote:
>>>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>>>          jmp  .Lsvm_do_resume
>>>>>  __UNLIKELY_END(nsvm_hap)
>>>>>  
>>>>> -        call svm_asid_handle_vmrun
>>>>> -
>>>>> -        cmpb $0,tb_init_done(%rip)
>>>>> -UNLIKELY_START(nz, svm_trace)
>>>>> -        call svm_trace_vmentry
>>>>> -UNLIKELY_END(svm_trace)
>>>>> -
>>>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>>>> -        mov  UREGS_rax(%rsp),%rax
>>>>> -        mov  %rax,VMCB_rax(%rcx)
>>>>> -        mov  UREGS_rip(%rsp),%rax
>>>>> -        mov  %rax,VMCB_rip(%rcx)
>>>>> -        mov  UREGS_rsp(%rsp),%rax
>>>>> -        mov  %rax,VMCB_rsp(%rcx)
>>>>> -        mov  UREGS_eflags(%rsp),%rax
>>>>> -        or   $X86_EFLAGS_MBS,%rax
>>>>> -        mov  %rax,VMCB_rflags(%rcx)
>>>>> +        mov  %rsp, %rdi
>>>>> +        call svm_vmenter_helper
>>>> While I had committed this earlier today, there's one concern I've just come
>>>> to think of: Now that we're calling into C land with CLGI in effect (for 
>>>> more
>>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>>> parties using local_irq_is_enabled(), first and foremost
>>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>>> whether the call wouldn't better be framed by a CLI/STI pair.
>>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>>
>>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>>> that taking a vmentry then immediately exiting again.
>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>> CLI/STI, adding a single STGI right after VMRUN.


The APM says "It is assumed that VMM software cleared GIF some time before
executing the VMRUN instruction, to ensure an atomic state switch."

Not sure if this is meant as suggestion or requirement.

-boris

> We want to retain the one STGI on the svm_stgi_label, but I think all
> other CLGI/STGI should be downgraded to CLI/STI.
>
>>> As for running with interrupts disabled, that is already the case on the
>>> VMX side, and I don't see why the SVM side needs to be different.
>> Sure, as does SVM - CLGI is a superset of CLI, after all. My observation
>> was just that this state of interrupts being disabled can't be observed by
>> users of the normal infrastructure (inspecting EFLAGS.IF).
> Ah ok.
>
> ~Andrew



* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 15:46           ` Boris Ostrovsky
@ 2018-05-07 15:47             ` Jan Beulich
  2018-05-07 15:49             ` Andrew Cooper
  1 sibling, 0 replies; 15+ messages in thread
From: Jan Beulich @ 2018-05-07 15:47 UTC (permalink / raw)
  To: Boris Ostrovsky; +Cc: Juergen Gross, Andrew Cooper, xen-devel

>>> On 07.05.18 at 17:46, <boris.ostrovsky@oracle.com> wrote:
> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>> On 07/05/18 16:25, Jan Beulich wrote:
>>>>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
>>>> On 07/05/18 15:11, Jan Beulich wrote:
>>>>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>>>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>>>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>>>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>>>>          jmp  .Lsvm_do_resume
>>>>>>  __UNLIKELY_END(nsvm_hap)
>>>>>>  
>>>>>> -        call svm_asid_handle_vmrun
>>>>>> -
>>>>>> -        cmpb $0,tb_init_done(%rip)
>>>>>> -UNLIKELY_START(nz, svm_trace)
>>>>>> -        call svm_trace_vmentry
>>>>>> -UNLIKELY_END(svm_trace)
>>>>>> -
>>>>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>>>>> -        mov  UREGS_rax(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rax(%rcx)
>>>>>> -        mov  UREGS_rip(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rip(%rcx)
>>>>>> -        mov  UREGS_rsp(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rsp(%rcx)
>>>>>> -        mov  UREGS_eflags(%rsp),%rax
>>>>>> -        or   $X86_EFLAGS_MBS,%rax
>>>>>> -        mov  %rax,VMCB_rflags(%rcx)
>>>>>> +        mov  %rsp, %rdi
>>>>>> +        call svm_vmenter_helper
>>>>> While I had committed this earlier today, there's one concern I've just come
>>>>> to think of: Now that we're calling into C land with CLGI in effect (for 
>>>>> more
>>>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>>>> parties using local_irq_is_enabled(), first and foremost
>>>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>>>> whether the call wouldn't better be framed by a CLI/STI pair.
>>>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>>>
>>>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>>>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>>>> that taking a vmentry then immediately exiting again.
>>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>>> CLI/STI, adding a single STGI right after VMRUN.
> 
> 
> The APM say "It is assumed that VMM software cleared GIF some time before
> executing the VMRUN instruction, to ensure an atomic state switch."

Well, that means we might additionally need CLGI right before VMRUN.

> Not sure if this is meant as suggestion or requirement.

How do we find out?

Jan




* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 15:46           ` Boris Ostrovsky
  2018-05-07 15:47             ` Jan Beulich
@ 2018-05-07 15:49             ` Andrew Cooper
  2018-05-07 16:16               ` Boris Ostrovsky
  1 sibling, 1 reply; 15+ messages in thread
From: Andrew Cooper @ 2018-05-07 15:49 UTC (permalink / raw)
  To: Boris Ostrovsky, Jan Beulich; +Cc: Juergen Gross, xen-devel

On 07/05/18 16:46, Boris Ostrovsky wrote:
> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>> On 07/05/18 16:25, Jan Beulich wrote:
>>>>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
>>>> On 07/05/18 15:11, Jan Beulich wrote:
>>>>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>>>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>>>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>>>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>>>>          jmp  .Lsvm_do_resume
>>>>>>  __UNLIKELY_END(nsvm_hap)
>>>>>>  
>>>>>> -        call svm_asid_handle_vmrun
>>>>>> -
>>>>>> -        cmpb $0,tb_init_done(%rip)
>>>>>> -UNLIKELY_START(nz, svm_trace)
>>>>>> -        call svm_trace_vmentry
>>>>>> -UNLIKELY_END(svm_trace)
>>>>>> -
>>>>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>>>>> -        mov  UREGS_rax(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rax(%rcx)
>>>>>> -        mov  UREGS_rip(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rip(%rcx)
>>>>>> -        mov  UREGS_rsp(%rsp),%rax
>>>>>> -        mov  %rax,VMCB_rsp(%rcx)
>>>>>> -        mov  UREGS_eflags(%rsp),%rax
>>>>>> -        or   $X86_EFLAGS_MBS,%rax
>>>>>> -        mov  %rax,VMCB_rflags(%rcx)
>>>>>> +        mov  %rsp, %rdi
>>>>>> +        call svm_vmenter_helper
>>>>> While I had committed this earlier today, there's one concern I've just come
>>>>> to think of: Now that we're calling into C land with CLGI in effect (for 
>>>>> more
>>>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>>>> parties using local_irq_is_enabled(), first and foremost
>>>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>>>> whether the call wouldn't better be framed by a CLI/STI pair.
>>>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>>>
>>>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>>>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>>>> that taking a vmentry then immediately exiting again.
>>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>>> CLI/STI, adding a single STGI right after VMRUN.
>
> The APM say "It is assumed that VMM software cleared GIF some time before
> executing the VMRUN instruction, to ensure an atomic state switch."
>
> Not sure if this is meant as suggestion or requirement.

Hmm - that can probably be tested with this proposed patch and a very
high frequency NMI perf counter.

Basically every other hypervisor does CLGI; VMSAVE (host state); VMLOAD
(guest state); VMRUN, and Xen's lack of doing this is why we have to
play with the IDT IST settings, as well as why we can't cope cleanly
with stack overflows.

~Andrew


* Re: [PATCH v3 2/2] SVM: introduce a VM entry helper
  2018-05-07 15:49             ` Andrew Cooper
@ 2018-05-07 16:16               ` Boris Ostrovsky
  0 siblings, 0 replies; 15+ messages in thread
From: Boris Ostrovsky @ 2018-05-07 16:16 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich
  Cc: Juergen Gross, xen-devel, Brian Woods, Suthikulpanit, Suravee

On 05/07/2018 11:49 AM, Andrew Cooper wrote:
> On 07/05/18 16:46, Boris Ostrovsky wrote:
>> On 05/07/2018 11:29 AM, Andrew Cooper wrote:
>>> On 07/05/18 16:25, Jan Beulich wrote:
>>>>>>> On 07.05.18 at 16:19, <andrew.cooper3@citrix.com> wrote:
>>>>> On 07/05/18 15:11, Jan Beulich wrote:
>>>>>>>>> On 04.05.18 at 17:11, <JBeulich@suse.com> wrote:
>>>>>>> --- a/xen/arch/x86/hvm/svm/entry.S
>>>>>>> +++ b/xen/arch/x86/hvm/svm/entry.S
>>>>>>> @@ -61,23 +61,8 @@ UNLIKELY_START(ne, nsvm_hap)
>>>>>>>          jmp  .Lsvm_do_resume
>>>>>>>  __UNLIKELY_END(nsvm_hap)
>>>>>>>  
>>>>>>> -        call svm_asid_handle_vmrun
>>>>>>> -
>>>>>>> -        cmpb $0,tb_init_done(%rip)
>>>>>>> -UNLIKELY_START(nz, svm_trace)
>>>>>>> -        call svm_trace_vmentry
>>>>>>> -UNLIKELY_END(svm_trace)
>>>>>>> -
>>>>>>> -        mov  VCPU_svm_vmcb(%rbx),%rcx
>>>>>>> -        mov  UREGS_rax(%rsp),%rax
>>>>>>> -        mov  %rax,VMCB_rax(%rcx)
>>>>>>> -        mov  UREGS_rip(%rsp),%rax
>>>>>>> -        mov  %rax,VMCB_rip(%rcx)
>>>>>>> -        mov  UREGS_rsp(%rsp),%rax
>>>>>>> -        mov  %rax,VMCB_rsp(%rcx)
>>>>>>> -        mov  UREGS_eflags(%rsp),%rax
>>>>>>> -        or   $X86_EFLAGS_MBS,%rax
>>>>>>> -        mov  %rax,VMCB_rflags(%rcx)
>>>>>>> +        mov  %rsp, %rdi
>>>>>>> +        call svm_vmenter_helper
>>>>>> While I had committed this earlier today, there's one concern I've just come
>>>>>> to think of: Now that we're calling into C land with CLGI in effect (for 
>>>>>> more
>>>>>> than just the trivial svm_trace_vmentry()) we are at risk of confusing
>>>>>> parties using local_irq_is_enabled(), first and foremost
>>>>>> common/spinlock.c:check_lock(). While it's some extra overhead, I wonder
>>>>>> whether the call wouldn't better be framed by a CLI/STI pair.
>>>>> I can't see why the SVM vmentry path uses CLGI/STGI in the first place.
>>>>>
>>>>> The VMX path uses plain cli/sti and our NMI/MCE handlers can cope. 
>>>>> Furthermore, processing NMIs/MCEs at this point will be more efficient
>>>>> that taking a vmentry then immediately exiting again.
>>>> Perhaps you're right, i.e. we could replace all current CLGI/STGI by
>>>> CLI/STI, adding a single STGI right after VMRUN.
>> The APM say "It is assumed that VMM software cleared GIF some time before
>> executing the VMRUN instruction, to ensure an atomic state switch."
>>
>> Not sure if this is meant as suggestion or requirement.
> Hmm - that can probably be tested with this proposed patch and a very
> high frequency NMI perf counter.


This may only prove that we do need it, if the test without CLGI fails.

If the test passes I don't think we can say anything one way or the other.

I am adding Suravee and Brian, perhaps they know the answer (or can
check internally).


>
> Basically every other hypervisor does CLGI; VMSAVE (host state); VMLOAD
> (guest state); VMRUN, and Xen's lack of doing this is why we have to
> play with the IDT IST settings, as well as why we can't cope cleanly
> with stack overflows.
>

KVM manipulates both GIF and RFLAGS.IF.

-boris
