* [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance
@ 2018-08-22 14:02 Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (12 more replies)
  0 siblings, 13 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3

Hi all,

This patch series addresses the idea of saving data for a single vcpu instance.
It starts by adding per-vcpu *_save_one() functions, then introduces a handler for
the new save_one functions and makes use of it in hvm_save() and hvm_save_one().
The final patches remove the now-redundant save functions and rework hvm_save_one()
so that it pauses only the relevant vcpu (vcpu_pause()) instead of the whole domain
(domain_pause()). A sketch of the resulting pattern follows the patch list below.

Cheers,

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state
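
The whole series applies one pattern per save type; here is a minimal sketch of
it (illustrative only; FOO, foo_*, hvm_foo and foo_field are made-up names, not
code taken verbatim from the patches):

    /* New: save the record of a single vcpu into the context stream. */
    static int foo_save_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_foo ctxt = {
            .field = v->arch.foo_field,    /* hypothetical per-vcpu state */
        };

        return hvm_save_entry(FOO, v->vcpu_id, h, &ctxt);
    }

    /* The old per-domain function becomes a plain loop over the vcpus
     * (and is dropped once hvm_save() performs this loop itself). */
    static int foo_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            err = foo_save_one(v, h);
            if ( err )
                break;
        }

        return err;
    }

    /* Both variants are registered, so hvm_save() can prefer the
     * per-vcpu one when it is available. */
    HVM_REGISTER_SAVE_RESTORE(FOO, foo_save, foo_save_one, foo_load,
                              1, HVMSR_PER_VCPU);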

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V11:
	- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_vmce_vcpu ctxt = {
+        .caps = v->arch.vmce.mcg_cap,
+        .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+        .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+        .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+    };
+
+    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        struct hvm_vmce_vcpu ctxt = {
-            .caps = v->arch.vmce.mcg_cap,
-            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-        };
-
-        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+        err = vmce_save_vcpu_ctxt_one(v, h);
         if ( err )
             break;
     }
-- 
2.7.4



* [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V13:
	- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_tsc_adjust ctxt = {
+        .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+    };
+
+    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct hvm_tsc_adjust ctxt;
     int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-        err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+        err = hvm_save_tsc_adjust_one(v, h);
         if ( err )
             break;
     }
-- 
2.7.4



* [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-28 15:33   ` Razvan Cojocaru
  2018-08-22 14:02 ` [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Move all free fields to the initializer
	- Add blank line to before the return
	- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +++++++++++++++++++++++++------------------------
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
-    struct hvm_hw_cpu ctxt;
     struct segment_register seg;
+    struct hvm_hw_cpu ctxt = {
+        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+        .msr_tsc_aux = hvm_msr_tsc_aux(v),
+        .rax = v->arch.user_regs.rax,
+        .rbx = v->arch.user_regs.rbx,
+        .rcx = v->arch.user_regs.rcx,
+        .rdx = v->arch.user_regs.rdx,
+        .rbp = v->arch.user_regs.rbp,
+        .rsi = v->arch.user_regs.rsi,
+        .rdi = v->arch.user_regs.rdi,
+        .rsp = v->arch.user_regs.rsp,
+        .rip = v->arch.user_regs.rip,
+        .rflags = v->arch.user_regs.rflags,
+        .r8  = v->arch.user_regs.r8,
+        .r9  = v->arch.user_regs.r9,
+        .r10 = v->arch.user_regs.r10,
+        .r11 = v->arch.user_regs.r11,
+        .r12 = v->arch.user_regs.r12,
+        .r13 = v->arch.user_regs.r13,
+        .r14 = v->arch.user_regs.r14,
+        .r15 = v->arch.user_regs.r15,
+        .dr0 = v->arch.debugreg[0],
+        .dr1 = v->arch.debugreg[1],
+        .dr2 = v->arch.debugreg[2],
+        .dr3 = v->arch.debugreg[3],
+        .dr6 = v->arch.debugreg[6],
+        .dr7 = v->arch.debugreg[7],
+    };
 
-    for_each_vcpu ( d, v )
+    /*
+     * We don't need to save state for a vcpu that is down; the restore
+     * code will leave it down if there is nothing saved.
+     */
+    if ( v->pause_flags & VPF_down )
+        return 0;
+
+    /* Architecture-specific vmcs/vmcb bits */
+    hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+    hvm_get_segment_register(v, x86_seg_idtr, &seg);
+    ctxt.idtr_limit = seg.limit;
+    ctxt.idtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+    ctxt.gdtr_limit = seg.limit;
+    ctxt.gdtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_cs, &seg);
+    ctxt.cs_sel = seg.sel;
+    ctxt.cs_limit = seg.limit;
+    ctxt.cs_base = seg.base;
+    ctxt.cs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ds, &seg);
+    ctxt.ds_sel = seg.sel;
+    ctxt.ds_limit = seg.limit;
+    ctxt.ds_base = seg.base;
+    ctxt.ds_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_es, &seg);
+    ctxt.es_sel = seg.sel;
+    ctxt.es_limit = seg.limit;
+    ctxt.es_base = seg.base;
+    ctxt.es_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ss, &seg);
+    ctxt.ss_sel = seg.sel;
+    ctxt.ss_limit = seg.limit;
+    ctxt.ss_base = seg.base;
+    ctxt.ss_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_fs, &seg);
+    ctxt.fs_sel = seg.sel;
+    ctxt.fs_limit = seg.limit;
+    ctxt.fs_base = seg.base;
+    ctxt.fs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_gs, &seg);
+    ctxt.gs_sel = seg.sel;
+    ctxt.gs_limit = seg.limit;
+    ctxt.gs_base = seg.base;
+    ctxt.gs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_tr, &seg);
+    ctxt.tr_sel = seg.sel;
+    ctxt.tr_limit = seg.limit;
+    ctxt.tr_base = seg.base;
+    ctxt.tr_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+    ctxt.ldtr_sel = seg.sel;
+    ctxt.ldtr_limit = seg.limit;
+    ctxt.ldtr_base = seg.base;
+    ctxt.ldtr_arbytes = seg.attr;
+
+    if ( v->fpu_initialised )
     {
-        /* We don't need to save state for a vcpu that is down; the restore 
-         * code will leave it down if there is nothing saved. */
-        if ( v->pause_flags & VPF_down )
-            continue;
+        memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+        ctxt.flags = XEN_X86_FPU_INITIALISED;
+    }
 
-        memset(&ctxt, 0, sizeof(ctxt));
-
-        /* Architecture-specific vmcs/vmcb bits */
-        hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-        ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-        hvm_get_segment_register(v, x86_seg_idtr, &seg);
-        ctxt.idtr_limit = seg.limit;
-        ctxt.idtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-        ctxt.gdtr_limit = seg.limit;
-        ctxt.gdtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_cs, &seg);
-        ctxt.cs_sel = seg.sel;
-        ctxt.cs_limit = seg.limit;
-        ctxt.cs_base = seg.base;
-        ctxt.cs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ds, &seg);
-        ctxt.ds_sel = seg.sel;
-        ctxt.ds_limit = seg.limit;
-        ctxt.ds_base = seg.base;
-        ctxt.ds_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_es, &seg);
-        ctxt.es_sel = seg.sel;
-        ctxt.es_limit = seg.limit;
-        ctxt.es_base = seg.base;
-        ctxt.es_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ss, &seg);
-        ctxt.ss_sel = seg.sel;
-        ctxt.ss_limit = seg.limit;
-        ctxt.ss_base = seg.base;
-        ctxt.ss_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_fs, &seg);
-        ctxt.fs_sel = seg.sel;
-        ctxt.fs_limit = seg.limit;
-        ctxt.fs_base = seg.base;
-        ctxt.fs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_gs, &seg);
-        ctxt.gs_sel = seg.sel;
-        ctxt.gs_limit = seg.limit;
-        ctxt.gs_base = seg.base;
-        ctxt.gs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_tr, &seg);
-        ctxt.tr_sel = seg.sel;
-        ctxt.tr_limit = seg.limit;
-        ctxt.tr_base = seg.base;
-        ctxt.tr_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ldtr, &seg);
-        ctxt.ldtr_sel = seg.sel;
-        ctxt.ldtr_limit = seg.limit;
-        ctxt.ldtr_base = seg.base;
-        ctxt.ldtr_arbytes = seg.attr;
-
-        if ( v->fpu_initialised )
-        {
-            memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
-            ctxt.flags = XEN_X86_FPU_INITIALISED;
-        }
+    return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
+static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        ctxt.rax = v->arch.user_regs.rax;
-        ctxt.rbx = v->arch.user_regs.rbx;
-        ctxt.rcx = v->arch.user_regs.rcx;
-        ctxt.rdx = v->arch.user_regs.rdx;
-        ctxt.rbp = v->arch.user_regs.rbp;
-        ctxt.rsi = v->arch.user_regs.rsi;
-        ctxt.rdi = v->arch.user_regs.rdi;
-        ctxt.rsp = v->arch.user_regs.rsp;
-        ctxt.rip = v->arch.user_regs.rip;
-        ctxt.rflags = v->arch.user_regs.rflags;
-        ctxt.r8  = v->arch.user_regs.r8;
-        ctxt.r9  = v->arch.user_regs.r9;
-        ctxt.r10 = v->arch.user_regs.r10;
-        ctxt.r11 = v->arch.user_regs.r11;
-        ctxt.r12 = v->arch.user_regs.r12;
-        ctxt.r13 = v->arch.user_regs.r13;
-        ctxt.r14 = v->arch.user_regs.r14;
-        ctxt.r15 = v->arch.user_regs.r15;
-        ctxt.dr0 = v->arch.debugreg[0];
-        ctxt.dr1 = v->arch.debugreg[1];
-        ctxt.dr2 = v->arch.debugreg[2];
-        ctxt.dr3 = v->arch.debugreg[3];
-        ctxt.dr6 = v->arch.debugreg[6];
-        ctxt.dr7 = v->arch.debugreg[7];
-
-        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1; 
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_ctxt_one(v, h);
+        if ( err )
+            break;
     }
-    return 0;
+
+    return err;
 }
 
 /* Return a string indicating the error, or NULL for valid. */
-- 
2.7.4



* [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (2 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Remove err init
	- Add blank line ahead of return
	- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 +++++++++++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
     struct hvm_hw_cpu_xsave *ctxt;
+    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+    int err;
 
-    if ( !cpu_has_xsave )
+    if ( !cpu_has_xsave || !xsave_enabled(v) )
         return 0;   /* do nothing */
 
-    for_each_vcpu ( d, v )
-    {
-        unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+    err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+    if ( err )
+        return err;
 
-        if ( !xsave_enabled(v) )
-            continue;
-        if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-            return 1;
-        ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-        h->cur += size;
+    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+    h->cur += size;
+    ctxt->xfeature_mask = xfeature_mask;
+    ctxt->xcr0 = v->arch.xcr0;
+    ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-        ctxt->xfeature_mask = xfeature_mask;
-        ctxt->xcr0 = v->arch.xcr0;
-        ctxt->xcr0_accum = v->arch.xcr0_accum;
-        expand_xsave_states(v, &ctxt->save_area,
-                            size - offsetof(typeof(*ctxt), save_area));
-    }
+    expand_xsave_states(v, &ctxt->save_area,
+                        size - offsetof(typeof(*ctxt), save_area));
 
     return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
+
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_xsave_states_one(v, h);
+        if ( err )
+            break;
+    }
+
+    return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.7.4



* [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (3 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Remove err init
	- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++++++++++++++++++++++++++----------------------
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+    struct hvm_msr *ctxt;
+    unsigned int i;
+    int err;
 
-    for_each_vcpu ( d, v )
+    err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                         HVM_CPU_MSR_SIZE(msr_count_max));
+    if ( err )
+        return err;
+    ctxt = (struct hvm_msr *)&h->data[h->cur];
+    ctxt->count = 0;
+
+    for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
     {
-        struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-        struct hvm_msr *ctxt;
-        unsigned int i;
+        uint64_t val;
+        int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
-                             HVM_CPU_MSR_SIZE(msr_count_max)) )
-            return 1;
-        ctxt = (struct hvm_msr *)&h->data[h->cur];
-        ctxt->count = 0;
+        /*
+         * It is the programmers responsibility to ensure that
+         * msrs_to_send[] contain generally-read/write MSRs.
+         * X86EMUL_EXCEPTION here implies a missing feature, and that the
+         * guest doesn't have access to the MSR.
+         */
+        if ( rc == X86EMUL_EXCEPTION )
+            continue;
 
-        for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+        if ( rc != X86EMUL_OKAY )
         {
-            uint64_t val;
-            int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+            ASSERT_UNREACHABLE();
+            return -ENXIO;
+        }
 
-            /*
-             * It is the programmers responsibility to ensure that
-             * msrs_to_send[] contain generally-read/write MSRs.
-             * X86EMUL_EXCEPTION here implies a missing feature, and that the
-             * guest doesn't have access to the MSR.
-             */
-            if ( rc == X86EMUL_EXCEPTION )
-                continue;
+        if ( !val )
+            continue; /* Skip empty MSRs. */
 
-            if ( rc != X86EMUL_OKAY )
-            {
-                ASSERT_UNREACHABLE();
-                return -ENXIO;
-            }
+        ctxt->msr[ctxt->count].index = msrs_to_send[i];
+        ctxt->msr[ctxt->count++].val = val;
+    }
 
-            if ( !val )
-                continue; /* Skip empty MSRs. */
+    if ( hvm_funcs.save_msr )
+        hvm_funcs.save_msr(v, ctxt);
 
-            ctxt->msr[ctxt->count].index = msrs_to_send[i];
-            ctxt->msr[ctxt->count++].val = val;
-        }
+    ASSERT(ctxt->count <= msr_count_max);
 
-        if ( hvm_funcs.save_msr )
-            hvm_funcs.save_msr(v, ctxt);
+    for ( i = 0; i < ctxt->count; ++i )
+        ctxt->msr[i]._rsvd = 0;
 
-        ASSERT(ctxt->count <= msr_count_max);
+    if ( ctxt->count )
+    {
+        /* Rewrite length to indicate how much space we actually used. */
+        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+    }
+    else
+        /* or rewind and remove the descriptor from the stream. */
+        h->cur -= sizeof(struct hvm_save_descriptor);
 
-        for ( i = 0; i < ctxt->count; ++i )
-            ctxt->msr[i]._rsvd = 0;
+    return 0;
+}
 
-        if ( ctxt->count )
-        {
-            /* Rewrite length to indicate how much space we actually used. */
-            desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-        }
-        else
-            /* or rewind and remove the descriptor from the stream. */
-            h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
+
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_msrs_one(v, h);
+        if ( err )
+            break;
     }
 
-    return 0;
+    return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (4 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v16:
	- Address style comments.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 80 ++++++++++++++++++++++++++-----------------------
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..298d7ee 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+    struct hvm_hw_mtrr hw_mtrr = {
+        .msr_mtrr_def_type = mtrr_state->def_type |
+                             MASK_INSR(mtrr_state->fixed_enabled,
+                                       MTRRdefType_FE) |
+                             MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+        .msr_mtrr_cap      = mtrr_state->mtrr_cap,
+    };
+    unsigned int i;
 
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
+    if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+         (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
     {
-        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-        struct hvm_hw_mtrr hw_mtrr = {
-            .msr_mtrr_def_type = mtrr_state->def_type |
-                                 MASK_INSR(mtrr_state->fixed_enabled,
-                                           MTRRdefType_FE) |
-                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-            .msr_mtrr_cap      = mtrr_state->mtrr_cap,
-        };
-        unsigned int i;
+        dprintk(XENLOG_G_ERR,
+                "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+                v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+        return -EINVAL;
+    }
 
-        if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
-             (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-        {
-            dprintk(XENLOG_G_ERR,
-                    "HVM save: %pv: too many (%lu) variable range MTRRs\n",
-                    v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-            return -EINVAL;
-        }
+    hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+    for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+    {
+        hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges[i].base;
+        hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges[i].mask;
+    }
 
-        hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+    BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+                 sizeof(mtrr_state->fixed_ranges));
 
-        for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-        {
-            /* save physbase */
-            hw_mtrr.msr_mtrr_var[i*2] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2];
-            /* save physmask */
-            hw_mtrr.msr_mtrr_var[i*2+1] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-        }
+    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+           sizeof(hw_mtrr.msr_mtrr_fixed));
 
-        for ( i = 0; i < NUM_FIXED_MSR; i++ )
-            hw_mtrr.msr_mtrr_fixed[i] =
-                ((uint64_t*)mtrr_state->fixed_ranges)[i];
+    return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-            return 1;
+    /* save mtrr&pat */
+    for_each_vcpu(d, v)
+    {
+        err = hvm_save_mtrr_msr_one(v, h);
+        if ( err )
+            break;
     }
-    return 0;
+
+    return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (5 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

---
Changes since V14:
	- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct hvm_viridian_vcpu_context ctxt = {
+        .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+        .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+    };
 
-    if ( !is_viridian_domain(d) )
+    if ( !is_viridian_domain(v->domain) )
         return 0;
 
-    for_each_vcpu( d, v ) {
-        struct hvm_viridian_vcpu_context ctxt = {
-            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-        };
+    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1;
+    for_each_vcpu ( d, v )
+    {
+        err = viridian_save_vcpu_ctxt_one(v, h);
+        if ( err )
+            break;
     }
 
-    return 0;
+    return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (6 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v15:
        - Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..429ffb5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,27 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    if ( !has_vlapic(v->domain) )
+        return 0;
+
+    return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct vlapic *s;
-    int rc = 0;
-
-    if ( !has_vlapic(d) )
-        return 0;
+    int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+        err = lapic_save_hidden_one(v, h);
+        if ( err )
             break;
     }
 
-    return rc;
+    return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (7 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs Alexandru Isaila
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v15:
	- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 429ffb5..2e73615 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1458,26 +1458,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
     return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    if ( !has_vlapic(v->domain) )
+        return 0;
+
+    if ( hvm_funcs.sync_pir_to_irr )
+        hvm_funcs.sync_pir_to_irr(v);
+
+    return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct vlapic *s;
-    int rc = 0;
-
-    if ( !has_vlapic(d) )
-        return 0;
+    int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        if ( hvm_funcs.sync_pir_to_irr )
-            hvm_funcs.sync_pir_to_irr(v);
-
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+        err = lapic_save_regs_one(v, h);
+        if ( err )
             break;
     }
 
-    return rc;
+    return err;
 }
 
 /*
-- 
2.7.4



* [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (8 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Change handler name from hvm_save_one_handler to
	  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c        | 2 +-
 xen/arch/x86/hvm/hvm.c         | 6 +++++-
 xen/arch/x86/hvm/i8254.c       | 2 +-
 xen/arch/x86/hvm/irq.c         | 6 +++---
 xen/arch/x86/hvm/mtrr.c        | 4 ++--
 xen/arch/x86/hvm/pmtimer.c     | 2 +-
 xen/arch/x86/hvm/rtc.c         | 2 +-
 xen/arch/x86/hvm/save.c        | 3 +++
 xen/arch/x86/hvm/vioapic.c     | 2 +-
 xen/arch/x86/hvm/viridian.c    | 3 ++-
 xen/arch/x86/hvm/vlapic.c      | 4 ++--
 xen/arch/x86/hvm/vpic.c        | 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +++++-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+                          vmce_save_vcpu_ctxt_one,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+                          hvm_save_tsc_adjust_one,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+                          hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
+                        hvm_save_cpu_xsave_states_one,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
+                            hvm_save_cpu_msrs_one,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 298d7ee..1c4e731 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -822,8 +822,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
-                          1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+                          hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 422b96c..1106b96 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,6 +85,7 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
+    hvm_save_vcpu_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
@@ -95,6 +96,7 @@ static struct {
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
+                                hvm_save_vcpu_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
+    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 3f52d38..268ccce 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
 static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1085,6 +1085,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+                          viridian_save_vcpu_ctxt_one,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 2e73615..7b6f400 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1584,9 +1584,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..f2283fc 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * restoring.  Both return non-zero on error. */
 typedef int (*hvm_save_handler) (struct domain *d, 
                                  hvm_domain_context_t *h);
+typedef int (*hvm_save_vcpu_handler)(struct  vcpu *v,
+                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
+                         hvm_save_vcpu_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
+                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



* [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (9 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:28   ` Roger Pau Monné
  2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
  2018-08-22 14:02 ` [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
  12 siblings, 1 reply; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch is aimed at using the new save_one functions in hvm_save().

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V15:
	- Moved declarations into their scopes
	- Remove redundant NULL check
	- Remove rc variable
	- Change fault return to -ENODATA.
---
 xen/arch/x86/hvm/save.c | 27 +++++++++++++++++++++++----
 1 file changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..1eb2b01 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     char *c;
     struct hvm_save_header hdr;
     struct hvm_save_end end;
-    hvm_save_handler handler;
     unsigned int i;
 
     if ( d->is_dying )
@@ -223,17 +222,37 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     /* Save all available kinds of state */
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
-        handler = hvm_sr_handlers[i].save;
-        if ( handler != NULL )
+        struct vcpu *v;
+        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+        hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+        if ( save_one_handler )
+        {
+            for_each_vcpu ( d, v )
+            {
+                printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+                       v, hvm_sr_handlers[i].name);
+
+                if ( save_one_handler(v, h) != 0 )
+                {
+                    printk(XENLOG_G_ERR
+                           "HVM %pv save: failed to save type %"PRIu16"\n",
+                           v, i);
+                    return -ENODATA;
+                }
+            }
+        }
+        else if ( handler )
         {
             printk(XENLOG_G_INFO "HVM%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
+
             if ( handler(d, h) != 0 )
             {
                 printk(XENLOG_G_ERR
                        "HVM%d save: failed to save type %"PRIu16"\n",
                        d->domain_id, i);
-                return -EFAULT;
+                return -ENODATA;
             }
         }
     }
-- 
2.7.4



* [PATCH v17 12/13] x86/hvm: Remove redundant save functions
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (10 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 15:04   ` Roger Pau Monné
  2018-08-28 15:27   ` Jan Beulich
  2018-08-22 14:02 ` [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
  12 siblings, 2 replies; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch removes the redundant per-domain save functions and renames the
*_save_one() variants to take their place. It also changes the domain
parameter to a vcpu in the save handler type.
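
The end state, sketched roughly (an illustration assuming the hvm_sr_handlers[]
bookkeeping introduced earlier in the series, not the literal hunks of this
patch):

    /* The save handler type now takes a vcpu instead of a domain. */
    typedef int (*hvm_save_handler)(struct vcpu *v, hvm_domain_context_t *h);

    /* In hvm_save(): HVMSR_PER_VCPU handlers run for every vcpu, while
     * HVMSR_PER_DOM handlers are invoked once with d->vcpu[0]. */
    if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
    {
        for_each_vcpu ( d, v )
            if ( handler(v, h) != 0 )
                return -ENODATA;
    }
    else if ( handler(d->vcpu[0], h) != 0 )
        return -ENODATA;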

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V16:
        - Drop the "instance = 0" from hvm_save_one
	- Changed if ( handler ) to if ( !handler )
	- Call handler(d->vcpu[0] for the HVMSR_PER_DOM case.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +----------
 xen/arch/x86/hvm/hpet.c        |  7 ++--
 xen/arch/x86/hvm/hvm.c         | 73 +++---------------------------------------
 xen/arch/x86/hvm/i8254.c       |  5 +--
 xen/arch/x86/hvm/irq.c         | 15 +++++----
 xen/arch/x86/hvm/mtrr.c        | 20 ++----------
 xen/arch/x86/hvm/pmtimer.c     |  5 +--
 xen/arch/x86/hvm/rtc.c         |  5 +--
 xen/arch/x86/hvm/save.c        | 26 ++++++++-------
 xen/arch/x86/hvm/vioapic.c     |  5 +--
 xen/arch/x86/hvm/viridian.c    | 23 +++----------
 xen/arch/x86/hvm/vlapic.c      | 38 +++-------------------
 xen/arch/x86/hvm/vpic.c        |  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 60 insertions(+), 193 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_vmce_vcpu ctxt = {
         .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = vmce_save_vcpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-                          vmce_save_vcpu_ctxt_one,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     HPETState *hp = domain_vhpet(d);
-    struct vcpu *v = pt_global_vcpu_target(d);
     int rc;
     uint64_t guest_time;
 
     if ( !has_vhpet(d) )
         return 0;
 
+    v = pt_global_vcpu_target(d);
     write_lock(&hp->lock);
     guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
                  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..831f86b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_tsc_adjust ctxt = {
         .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_tsc_adjust_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-                          hvm_save_tsc_adjust_one,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct segment_register seg;
     struct hvm_hw_cpu ctxt = {
@@ -895,21 +879,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /* Return a string indicating the error, or NULL for valid. */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg)
@@ -1181,7 +1150,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
                           hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
@@ -1189,7 +1158,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_hw_cpu_xsave *ctxt;
     unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
@@ -1214,21 +1183,6 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
     return 0;
 }
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_xsave_states_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
@@ -1366,7 +1320,7 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
     struct hvm_msr *ctxt;
@@ -1428,21 +1382,6 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
     return 0;
 }
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_msrs_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int i, vcpuid = hvm_load_instance(h);
@@ -1535,7 +1474,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
-                        hvm_save_cpu_xsave_states_one,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1548,7 +1486,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
-                            hvm_save_cpu_msrs_one,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
     spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     PITState *pit = domain_vpit(d);
     int rc;
 
@@ -437,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..b37275c 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
 }
 __initcall(dump_irq_info_key_init);
 
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
     unsigned int asserted, pdev, pintx;
     int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
     return rc;
 }
 
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save ISA IRQ lines */
     return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
 }
 
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save PCI-ISA link state */
@@ -764,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 1c4e731..de1bb80 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
 {
     const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     struct hvm_hw_mtrr hw_mtrr = {
@@ -756,22 +756,6 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
-    {
-       err = hvm_save_mtrr_msr_one(v, h);
-       if ( err )
-           break;
-    }
-
-    return err;
-}
-
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
     int vcpuid, i;
@@ -822,7 +806,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
                           hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
     return X86EMUL_OKAY;
 }
 
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
     uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
@@ -309,7 +310,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
 }
 
 /* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     RTCState *s = domain_vrtc(d);
     int rc;
 
@@ -783,7 +784,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1eb2b01..49741e0 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,7 +85,6 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
-    hvm_save_vcpu_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
@@ -96,7 +95,6 @@ static struct {
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
-                                hvm_save_vcpu_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
-    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
@@ -148,6 +145,9 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
          !hvm_sr_handlers[typecode].save )
         return -EINVAL;
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+        instance >= d->max_vcpus )
+        return -ENOENT;
     ctxt.size = hvm_sr_handlers[typecode].size;
     if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
         ctxt.size *= d->max_vcpus;
@@ -155,7 +155,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
     if ( !ctxt.data )
         return -ENOMEM;
 
-    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
         printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                d->domain_id, typecode, rv);
     else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
@@ -223,17 +223,19 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
         struct vcpu *v;
-        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
-        hvm_save_handler handler = hvm_sr_handlers[i].save;;
+        hvm_save_handler handler = hvm_sr_handlers[i].save;
 
-        if ( save_one_handler )
+        if ( !handler )
+            continue;
+
+        if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
         {
             for_each_vcpu ( d, v )
             {
                 printk(XENLOG_G_INFO "HVM %pv save: %s\n",
                        v, hvm_sr_handlers[i].name);
 
-                if ( save_one_handler(v, h) != 0 )
+                if ( handler(v, h) != 0 )
                 {
                     printk(XENLOG_G_ERR
                            "HVM %pv save: failed to save type %"PRIu16"\n",
@@ -242,15 +244,15 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                 }
             }
         }
-        else if ( handler )
+        else
         {
-            printk(XENLOG_G_INFO "HVM%d save: %s\n",
+            printk(XENLOG_G_INFO "HVM d%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
 
-            if ( handler(d, h) != 0 )
+            if ( handler(d->vcpu[0], h) != 0 )
             {
                 printk(XENLOG_G_ERR
-                       "HVM%d save: failed to save type %"PRIu16"\n",
+                       "HVM d%d save: failed to save type %"PRIu16"\n",
                        d->domain_id, i);
                 return -ENODATA;
             }
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..86d02cf 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
     return vioapic->redirtbl[pin].fields.trig_mode;
 }
 
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_vioapic *s;
 
     if ( !has_vioapic(d) )
@@ -601,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 268ccce..cc37ab4 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
     return HVM_HCALL_completed;
 }
 
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
         .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
         .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1023,10 +1024,10 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {
         .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
@@ -1039,21 +1040,6 @@ static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = viridian_save_vcpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     int vcpuid;
@@ -1085,7 +1071,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
-                          viridian_save_vcpu_ctxt_one,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 7b6f400..a20bde8 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,7 +1435,7 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
-static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
 {
     if ( !has_vlapic(v->domain) )
         return 0;
@@ -1443,22 +1443,7 @@ static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
 }
 
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = lapic_save_hidden_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
-static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
 {
     if ( !has_vlapic(v->domain) )
         return 0;
@@ -1469,21 +1454,6 @@ static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
 }
 
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = lapic_save_regs_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /*
  * Following lapic_load_hidden()/lapic_load_regs() we may need to
  * correct ID and LDR when they come from an old, broken hypervisor.
@@ -1584,9 +1554,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
     return X86EMUL_OKAY;
 }
 
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_vpic *s;
     int i;
 
@@ -411,7 +412,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f2283fc..b867027 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,10 +95,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * The save handler may save multiple instances of a type into the buffer;
  * the load handler will be called once for each instance found when
  * restoring.  Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d, 
+typedef int (*hvm_save_handler) (struct vcpu *v,
                                  hvm_domain_context_t *h);
-typedef int (*hvm_save_vcpu_handler)(struct  vcpu *v,
-                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
-                         hvm_save_vcpu_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)  \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
-                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



* [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (11 preceding siblings ...)
  2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-08-22 14:02 ` Alexandru Isaila
  2018-08-22 14:41   ` Roger Pau Monné
  12 siblings, 1 reply; 31+ messages in thread
From: Alexandru Isaila @ 2018-08-22 14:02 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch changes hvm_save_one() to save a single typecode from a
single vcpu. Now that the save functions get their data from one vcpu,
we can pause just that vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V15:
	- Moved pause/unpause calls into hvm_save_one()
	- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 12 ++++++++++--
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..cb53980 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,10 @@ long arch_do_domctl(
              !is_hvm_domain(d) )
             break;
 
-        domain_pause(d);
         ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
                            domctl->u.hvmcontext_partial.instance,
                            domctl->u.hvmcontext_partial.buffer,
                            &domctl->u.hvmcontext_partial.bufsz);
-        domain_unpause(d);
 
         if ( !ret )
             copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 49741e0..2d35f17 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -149,12 +149,15 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
         instance >= d->max_vcpus )
         return -ENOENT;
     ctxt.size = hvm_sr_handlers[typecode].size;
-    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
-        ctxt.size *= d->max_vcpus;
     ctxt.data = xmalloc_bytes(ctxt.size);
     if ( !ctxt.data )
         return -ENOMEM;
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+        vcpu_pause(d->vcpu[instance]);
+    else
+        domain_pause(d);
+
     if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
         printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                d->domain_id, typecode, rv);
@@ -186,6 +189,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
         }
     }
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+        vcpu_unpause(d->vcpu[instance]);
+    else
+        domain_unpause(d);
+
     xfree(ctxt.data);
     return rv;
 }
-- 
2.7.4



* Re: [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler
  2018-08-22 14:02 ` [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
@ 2018-08-22 14:28   ` Roger Pau Monné
  2018-08-27  8:39     ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2018-08-22 14:28 UTC (permalink / raw)
  To: Alexandru Isaila
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Wed, Aug 22, 2018 at 05:02:41PM +0300, Alexandru Isaila wrote:
> This patch is aimed on using the new save_one fuctions in the hvm_save
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V15:
> 	- Moved declarations into their scopes
> 	- Remove redundant NULL check
> 	- Remove rc variable
> 	- Change fault return to -ENODATA.
> ---
>  xen/arch/x86/hvm/save.c | 27 +++++++++++++++++++++++----
>  1 file changed, 23 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> index 1106b96..1eb2b01 100644
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>      char *c;
>      struct hvm_save_header hdr;
>      struct hvm_save_end end;
> -    hvm_save_handler handler;
>      unsigned int i;
>  
>      if ( d->is_dying )
> @@ -223,17 +222,37 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>      /* Save all available kinds of state */
>      for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
>      {
> -        handler = hvm_sr_handlers[i].save;
> -        if ( handler != NULL )
> +        struct vcpu *v;

This could be moved...

> +        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
> +        hvm_save_handler handler = hvm_sr_handlers[i].save;;

Double ';'.

Also, I would expect that a device either has a global save handler
(handler) or a per-vcpu save handler (save_one_handler), but not both
at the same time?

In which case, I would add something like:

ASSERT(save_one_handler == NULL || handler == NULL);

In order to make sure both are not set at the same time.

> +
> +        if ( save_one_handler )
> +        {

...here if you are reducing the scope.

> +            for_each_vcpu ( d, v )
> +            {
> +                printk(XENLOG_G_INFO "HVM %pv save: %s\n",
> +                       v, hvm_sr_handlers[i].name);
> +
> +                if ( save_one_handler(v, h) != 0 )
> +                {
> +                    printk(XENLOG_G_ERR
> +                           "HVM %pv save: failed to save type %"PRIu16"\n",

I wonder why PRIu16 is used here in order to print i which is unsigned
int. Isn't it the same as using '%u'?

> +                           v, i);
> +                    return -ENODATA;
> +                }
> +            }
> +        }
> +        else if ( handler )
>          {
>              printk(XENLOG_G_INFO "HVM%d save: %s\n",
>                     d->domain_id, hvm_sr_handlers[i].name);
> +

Stray newline?

Thanks, Roger.


* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-22 14:02 ` [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-08-22 14:41   ` Roger Pau Monné
  2018-08-22 15:15     ` Isaila Alexandru
  0 siblings, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2018-08-22 14:41 UTC (permalink / raw)
  To: Alexandru Isaila
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Wed, Aug 22, 2018 at 05:02:43PM +0300, Alexandru Isaila wrote:
> This patch changes hvm_save_one() to save a single typecode from a
> single vcpu. Now that the save functions get their data from one vcpu,
> we can pause just that vcpu instead of the whole domain.

With this infrastructure added, which allows saving a single instance
of a specific device, I wonder if you would like to add a user of it to
the code in the tree.

If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves the full
domain context just to get the CPU and the MTRR state of VCPU#0. Do
you think you could switch this code to use the newly introduced
machinery to save a single instance of a specific type?
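
Roughly what I have in mind is the following (completely untested)
sketch. xc_domain_hvm_getcontext_partial() already exists in libxc; the
helper name and the exact includes are just my assumption of how it
would be wired up:

#include <xenctrl.h>
#include <xen/hvm/save.h>

/*
 * Untested sketch: fetch only the CPU and MTRR records of vcpu 0 via
 * the partial-context domctl instead of buffering the whole domain
 * context.
 */
static int get_bsp_cpu_mtrr(xc_interface *xch, uint32_t domid,
                            struct hvm_hw_cpu *cpu,
                            struct hvm_hw_mtrr *mtrr)
{
    int rc;

    rc = xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                           0, cpu, sizeof(*cpu));
    if ( rc != 0 )
        return rc;

    return xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(MTRR),
                                            0, mtrr, sizeof(*mtrr));
}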

> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V15:
> 	- Moved pause/unpause calls into hvm_save_one()
> 	- Re-add the loop in hvm_save_one().
> ---
>  xen/arch/x86/domctl.c   |  2 --
>  xen/arch/x86/hvm/save.c | 12 ++++++++++--
>  2 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 8fbbf3a..cb53980 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -591,12 +591,10 @@ long arch_do_domctl(
>               !is_hvm_domain(d) )
>              break;
>  
> -        domain_pause(d);
>          ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
>                             domctl->u.hvmcontext_partial.instance,
>                             domctl->u.hvmcontext_partial.buffer,
>                             &domctl->u.hvmcontext_partial.bufsz);
> -        domain_unpause(d);
>  
>          if ( !ret )
>              copyback = true;
> diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> index 49741e0..2d35f17 100644
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -149,12 +149,15 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>          instance >= d->max_vcpus )
>          return -ENOENT;
>      ctxt.size = hvm_sr_handlers[typecode].size;
> -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> -        ctxt.size *= d->max_vcpus;

This chunk seems to belong to a different patch?

The change just mentions pausing a vpcu instead of the whole domain,
but the size of the save context doesn't depend on whether the whole
domain is paused vs a single vcpu is paused.

Thanks, Roger.


* Re: [PATCH v17 12/13] x86/hvm: Remove redundant save functions
  2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-08-22 15:04   ` Roger Pau Monné
  2018-08-22 15:22     ` Isaila Alexandru
  2018-08-28 15:27   ` Jan Beulich
  1 sibling, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2018-08-22 15:04 UTC (permalink / raw)
  To: Alexandru Isaila
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Wed, Aug 22, 2018 at 05:02:42PM +0300, Alexandru Isaila wrote:
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 4a70251..831f86b 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
>      destroy_vpci_mmcfg(d);
>  }
>  
> -static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
> +static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
>  {
>      struct hvm_tsc_adjust ctxt = {
>          .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
> @@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
>      return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
>  }
>  
> -static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
> -{
> -    struct vcpu *v;
> -    int err = 0;
> -
> -    for_each_vcpu ( d, v )
> -    {
> -        err = hvm_save_tsc_adjust_one(v, h);
> -        if ( err )
> -            break;
> -    }
> -
> -    return err;
> -}
> -
>  static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>  {
>      unsigned int vcpuid = hvm_load_instance(h);
> @@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>  }
>  
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> -                          hvm_save_tsc_adjust_one,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>  
> -static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> +static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
>  {
>      struct segment_register seg;
>      struct hvm_hw_cpu ctxt = {
> @@ -895,21 +879,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>      return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
>  }
>  
> -static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> -{
> -    struct vcpu *v;
> -    int err = 0;
> -
> -    for_each_vcpu ( d, v )
> -    {
> -        err = hvm_save_cpu_ctxt_one(v, h);
> -        if ( err )
> -            break;
> -    }
> -
> -    return err;
> -}
> -
>  /* Return a string indicating the error, or NULL for valid. */
>  const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
>                             signed int cr0_pg)
> @@ -1181,7 +1150,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>      return 0;
>  }
>  
> -HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
> +HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
>                            hvm_load_cpu_ctxt,
>                            1, HVMSR_PER_VCPU);

Looks like you can fit more in the first line AFAICT.

[...]
> diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
> index ec77b23..e0d2255 100644
> --- a/xen/arch/x86/hvm/i8254.c
> +++ b/xen/arch/x86/hvm/i8254.c
> @@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
>      spin_unlock(&pit->lock);
>  }
>  
> -static int pit_save(struct domain *d, hvm_domain_context_t *h)
> +static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;
>      PITState *pit = domain_vpit(d);
>      int rc;
>  
> @@ -437,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
>      return 0;
>  }
>  
> -HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
>  
>  void pit_reset(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> index 770eab7..b37275c 100644
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
>  }
>  __initcall(dump_irq_info_key_init);
>  
> -static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
> +static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;
>      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
>      unsigned int asserted, pdev, pintx;
>      int rc;
> @@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
>      return rc;
>  }
>  
> -static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
> +static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

const

>      struct hvm_irq *hvm_irq = hvm_domain_irq(d);
>  
>      /* Save ISA IRQ lines */
>      return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
>  }
>  
> -static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
> +static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

const

[...]
> diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> index 1eb2b01..49741e0 100644
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -85,7 +85,6 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
>  /* List of handlers for various HVM save and restore types */
>  static struct {
>      hvm_save_handler save;
> -    hvm_save_vcpu_handler save_one;
>      hvm_load_handler load;
>      const char *name;
>      size_t size;
> @@ -96,7 +95,6 @@ static struct {
>  void __init hvm_register_savevm(uint16_t typecode,
>                                  const char *name,
>                                  hvm_save_handler save_state,
> -                                hvm_save_vcpu_handler save_one,
>                                  hvm_load_handler load_state,
>                                  size_t size, int kind)
>  {
> @@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
>      ASSERT(hvm_sr_handlers[typecode].save == NULL);
>      ASSERT(hvm_sr_handlers[typecode].load == NULL);
>      hvm_sr_handlers[typecode].save = save_state;
> -    hvm_sr_handlers[typecode].save_one = save_one;
>      hvm_sr_handlers[typecode].load = load_state;
>      hvm_sr_handlers[typecode].name = name;
>      hvm_sr_handlers[typecode].size = size;
> @@ -148,6 +145,9 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>           !hvm_sr_handlers[typecode].save )
>          return -EINVAL;
>  
> +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> +        instance >= d->max_vcpus )
> +        return -ENOENT;
>      ctxt.size = hvm_sr_handlers[typecode].size;
>      if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
>          ctxt.size *= d->max_vcpus;
> @@ -155,7 +155,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>      if ( !ctxt.data )
>          return -ENOMEM;
>  
> -    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> +    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
>          printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
>                 d->domain_id, typecode, rv);
>      else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
> @@ -223,17 +223,19 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>      for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
>      {
>          struct vcpu *v;
> -        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
> -        hvm_save_handler handler = hvm_sr_handlers[i].save;;
> +        hvm_save_handler handler = hvm_sr_handlers[i].save;
>  
> -        if ( save_one_handler )
> +        if ( !handler )
> +            continue;
> +
> +        if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
>          {
>              for_each_vcpu ( d, v )
>              {
>                  printk(XENLOG_G_INFO "HVM %pv save: %s\n",
>                         v, hvm_sr_handlers[i].name);
>  
> -                if ( save_one_handler(v, h) != 0 )
> +                if ( handler(v, h) != 0 )
>                  {
>                      printk(XENLOG_G_ERR
>                             "HVM %pv save: failed to save type %"PRIu16"\n",
> @@ -242,15 +244,15 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>                  }
>              }
>          }
> -        else if ( handler )
> +        else
>          {
> -            printk(XENLOG_G_INFO "HVM%d save: %s\n",
> +            printk(XENLOG_G_INFO "HVM d%d save: %s\n",
>                     d->domain_id, hvm_sr_handlers[i].name);
>  
> -            if ( handler(d, h) != 0 )
> +            if ( handler(d->vcpu[0], h) != 0 )
>              {
>                  printk(XENLOG_G_ERR
> -                       "HVM%d save: failed to save type %"PRIu16"\n",
> +                       "HVM d%d save: failed to save type %"PRIu16"\n",

This looks like an unrelated change.

>                         d->domain_id, i);
>                  return -ENODATA;
>              }
> diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> index 66f54e4..86d02cf 100644
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
>      return vioapic->redirtbl[pin].fields.trig_mode;
>  }
>  
> -static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
> +static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

const

>      struct hvm_vioapic *s;
>  
>      if ( !has_vioapic(d) )
> @@ -601,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>      return hvm_load_entry(IOAPIC, h, &s->domU);
>  }
>  
> -HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
>  
>  void vioapic_reset(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 268ccce..cc37ab4 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -990,8 +990,9 @@ out:
>      return HVM_HCALL_completed;
>  }
>  
> -static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
> +static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

const

[...]
> diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
> index ca9b4cb..bad5066 100644
> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
>      return X86EMUL_OKAY;
>  }
>  
> -static int vpic_save(struct domain *d, hvm_domain_context_t *h)
> +static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

const

Thanks, Roger.


* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-22 14:41   ` Roger Pau Monné
@ 2018-08-22 15:15     ` Isaila Alexandru
  2018-08-29 14:02       ` Isaila Alexandru
  0 siblings, 1 reply; 31+ messages in thread
From: Isaila Alexandru @ 2018-08-22 15:15 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> On Wed, Aug 22, 2018 at 05:02:43PM +0300, Alexandru Isaila wrote:
> > 
> > This patch changes hvm_save_one() to save a single typecode from a
> > single vcpu. Now that the save functions get their data from one
> > vcpu, we can pause just that vcpu instead of the whole domain.
> With this infrastructure added, which allows saving a single instance
> of a specific device, I wonder if you would like to add a user of it
> to the code in the tree.
> 
> If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves the full
> domain context just to get the CPU and the MTRR state of VCPU#0. Do
> you think you could switch this code to use the newly introduced
> machinery to save a single instance of a specific type?

Sure, I will add a tool patch at the end of the series
> 
> > 
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > 
> > ---
> > Changes since V15:
> > 	- Moved pause/unpause calls into hvm_save_one()
> > 	- Re-add the loop in hvm_save_one().
> > ---
> >  xen/arch/x86/domctl.c   |  2 --
> >  xen/arch/x86/hvm/save.c | 12 ++++++++++--
> >  2 files changed, 10 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > index 8fbbf3a..cb53980 100644
> > --- a/xen/arch/x86/domctl.c
> > +++ b/xen/arch/x86/domctl.c
> > @@ -591,12 +591,10 @@ long arch_do_domctl(
> >               !is_hvm_domain(d) )
> >              break;
> >  
> > -        domain_pause(d);
> >          ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
> >                             domctl->u.hvmcontext_partial.instance,
> >                             domctl->u.hvmcontext_partial.buffer,
> >                             &domctl->u.hvmcontext_partial.bufsz);
> > -        domain_unpause(d);
> >  
> >          if ( !ret )
> >              copyback = true;
> > diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> > index 49741e0..2d35f17 100644
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -149,12 +149,15 @@ int hvm_save_one(struct domain *d, unsigned
> > int typecode, unsigned int instance,
> >          instance >= d->max_vcpus )
> >          return -ENOENT;
> >      ctxt.size = hvm_sr_handlers[typecode].size;
> > -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> > -        ctxt.size *= d->max_vcpus;
> This chunk seems to belong to a different patch?
> 
> The change just mentions pausing a vpcu instead of the whole domain,
> but the size of the save context doesn't depend on whether the whole
> domain is paused vs a single vcpu is paused.

Right, git rebase did not report any error so it tricked me. I will
have that removed from this patch. 

Thanks, 
Alex


* Re: [PATCH v17 12/13] x86/hvm: Remove redundant save functions
  2018-08-22 15:04   ` Roger Pau Monné
@ 2018-08-22 15:22     ` Isaila Alexandru
  2018-08-22 15:26       ` Roger Pau Monné
  0 siblings, 1 reply; 31+ messages in thread
From: Isaila Alexandru @ 2018-08-22 15:22 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

> >  
> > -            if ( handler(d, h) != 0 )
> > +            if ( handler(d->vcpu[0], h) != 0 )
> >              {
> >                  printk(XENLOG_G_ERR
> > -                       "HVM%d save: failed to save type
> > %"PRIu16"\n",
> > +                       "HVM d%d save: failed to save type
> > %"PRIu16"\n",
> This looks like an unrelated change.
I wanted to change this so it could be in the same order as the
per_vcpu print that was modified at Jan's request.
Now the print is something like:

"
(XEN) HVM d2v0 save: CPU
(XEN) HVM d2v1 save: CPU
(XEN) HVM d2 save: PIC
(XEN) HVM d2 save: IOAPIC
(XEN) HVM d2v0 save: LAPIC
"
as opposed to having "(XEN) HVM2 save: PIC".

If the change is not needed I can drop it.

Thanks, 
Alex


* Re: [PATCH v17 12/13] x86/hvm: Remove redundant save functions
  2018-08-22 15:22     ` Isaila Alexandru
@ 2018-08-22 15:26       ` Roger Pau Monné
  0 siblings, 0 replies; 31+ messages in thread
From: Roger Pau Monné @ 2018-08-22 15:26 UTC (permalink / raw)
  To: Isaila Alexandru
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Wed, Aug 22, 2018 at 06:22:15PM +0300, Isaila Alexandru wrote:
> > >  
> > > -            if ( handler(d, h) != 0 )
> > > +            if ( handler(d->vcpu[0], h) != 0 )
> > >              {
> > >                  printk(XENLOG_G_ERR
> > > -                       "HVM%d save: failed to save type
> > > %"PRIu16"\n",
> > > +                       "HVM d%d save: failed to save type
> > > %"PRIu16"\n",
> > This looks like an unrelated change.
> I wanted to change this so it could be in the same order as the
> per_vcpu print that was modified at Jan's request.
> Now the print is something like:
> 
> "
> (XEN) HVM d2v0 save: CPU
> (XEN) HVM d2v1 save: CPU
> (XEN) HVM d2 save: PIC
> (XEN) HVM d2 save: IOAPIC
> (XEN) HVM d2v0 save: LAPIC
> "
> as opposed to having "(XEN) HVM2 save: PIC".
> 
> If the change is not needed I can drop it.

Just add a small note to the commit message saying that, while there,
you also fix the format of a printed message to match the other
save-related messages.

Thanks, Roger.


* Re: [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler
  2018-08-22 14:28   ` Roger Pau Monné
@ 2018-08-27  8:39     ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2018-08-27  8:39 UTC (permalink / raw)
  To: aisaila, Roger Pau Monne
  Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 22.08.18 at 16:28, <roger.pau@citrix.com> wrote:
> On Wed, Aug 22, 2018 at 05:02:41PM +0300, Alexandru Isaila wrote:
>> @@ -223,17 +222,37 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>>      /* Save all available kinds of state */
>>      for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
>>      {
>> -        handler = hvm_sr_handlers[i].save;
>> -        if ( handler != NULL )
>> +        struct vcpu *v;
>> +        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
>> +        hvm_save_handler handler = hvm_sr_handlers[i].save;;
> 
> Double ';'.
> 
> Also, I would expect that a device either has a global save handler
> (handler) or a per-vcpu save handler (save_one_handler), but not both
> at the same time?
> 
> In which case, I would add something like:
> 
> ASSERT(save_one_handler == NULL || handler == NULL);
> 
> In order to make sure both are not set at the same time.

I'm not sure this is worthwhile - the next patch would remove it
again right away.

Jan




* Re: [PATCH v17 12/13] x86/hvm: Remove redundant save functions
  2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
  2018-08-22 15:04   ` Roger Pau Monné
@ 2018-08-28 15:27   ` Jan Beulich
  1 sibling, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2018-08-28 15:27 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 22.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
 @@ -148,6 +145,9 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>           !hvm_sr_handlers[typecode].save )
>          return -EINVAL;
>  
> +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> +        instance >= d->max_vcpus )

Indentation.

You're checking instance only for per-vcpu records, but ...

> @@ -155,7 +155,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>      if ( !ctxt.data )
>          return -ENOMEM;
>  
> -    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> +    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )

... you _again_ use it as array index unconditionally. Please
don't submit new versions without taking care of review
comments given for earlier ones.
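
Just to illustrate what I mean (an untested sketch, and using vcpu 0
for the per-domain case is merely my assumption of the intent), I'd
expect the vcpu passed to the handler to be derived along these lines:

    struct vcpu *v;

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
    {
        /* Only per-vcpu records may use the instance as a vcpu index. */
        if ( instance >= d->max_vcpus )
            return -ENOENT;
        v = d->vcpu[instance];
    }
    else
        v = d->vcpu[0];

    /* ... buffer allocation unchanged ... */

    if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )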

Jan




* Re: [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-08-22 14:02 ` [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-08-28 15:33   ` Razvan Cojocaru
  2018-08-28 15:43     ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Razvan Cojocaru @ 2018-08-28 15:33 UTC (permalink / raw)
  To: xen-devel, andrew.cooper3, jbeulich
  Cc: Alexandru Isaila, ian.jackson, paul.durrant, wei.liu2

On 8/22/18 5:02 PM, Alexandru Isaila wrote:
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> ---
> Changes since V14:
> 	- Move all free fields to the initializer
> 	- Add blank line to before the return
> 	- Move v->pause_flags check to the save_one function.
> ---
>  xen/arch/x86/hvm/hvm.c | 219 +++++++++++++++++++++++++------------------------
>  1 file changed, 113 insertions(+), 106 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index d90da9a..333c342 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>  
> -static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>  {
> -    struct vcpu *v;
> -    struct hvm_hw_cpu ctxt;
>      struct segment_register seg;
> +    struct hvm_hw_cpu ctxt = {
> +        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
> +        .msr_tsc_aux = hvm_msr_tsc_aux(v),
> +        .rax = v->arch.user_regs.rax,
> +        .rbx = v->arch.user_regs.rbx,
> +        .rcx = v->arch.user_regs.rcx,
> +        .rdx = v->arch.user_regs.rdx,
> +        .rbp = v->arch.user_regs.rbp,
> +        .rsi = v->arch.user_regs.rsi,
> +        .rdi = v->arch.user_regs.rdi,
> +        .rsp = v->arch.user_regs.rsp,
> +        .rip = v->arch.user_regs.rip,
> +        .rflags = v->arch.user_regs.rflags,
> +        .r8  = v->arch.user_regs.r8,
> +        .r9  = v->arch.user_regs.r9,
> +        .r10 = v->arch.user_regs.r10,
> +        .r11 = v->arch.user_regs.r11,
> +        .r12 = v->arch.user_regs.r12,
> +        .r13 = v->arch.user_regs.r13,
> +        .r14 = v->arch.user_regs.r14,
> +        .r15 = v->arch.user_regs.r15,
> +        .dr0 = v->arch.debugreg[0],
> +        .dr1 = v->arch.debugreg[1],
> +        .dr2 = v->arch.debugreg[2],
> +        .dr3 = v->arch.debugreg[3],
> +        .dr6 = v->arch.debugreg[6],
> +        .dr7 = v->arch.debugreg[7],
> +    };
>  
> -    for_each_vcpu ( d, v )
> +    /*
> +     * We don't need to save state for a vcpu that is down; the restore
> +     * code will leave it down if there is nothing saved.
> +     */
> +    if ( v->pause_flags & VPF_down )
> +        return 0;

Actually, we'd like to remove this if() if possible - even if the VCPU
is down, we should still be able to query it and receive whatever state
it is in, according to the Intel SDM, "9.1.1 Processor State After
Reset". Any objections to this?

Also, reading the comment and looking at the code, it appears that the
end of hvm_load_cpu_ctxt() actually wakes the VCPU up:

1152     v->arch.debugreg[7] = ctxt.dr7;
1153
1154     v->arch.vgc_flags = VGCF_online;
1155
1156     /* Auxiliary processors should be woken immediately. */
1157     v->is_initialised = 1;
1158     clear_bit(_VPF_down, &v->pause_flags);
1159     vcpu_wake(v);
1160
1161     return 0;
1162 }

which appears to be non-architectural behaviour if we've correctly
interpreted section 8.4 of Volume 3 of the SDM, which basically says that
VCPUs should be woken by IPIs (though this is outside the scope of this
patch).


Thanks,
Razvan


* Re: [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-08-28 15:33   ` Razvan Cojocaru
@ 2018-08-28 15:43     ` Jan Beulich
  0 siblings, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2018-08-28 15:43 UTC (permalink / raw)
  To: Razvan Cojocaru
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant, aisaila

>>> On 28.08.18 at 17:33, <rcojocaru@bitdefender.com> wrote:
> On 8/22/18 5:02 PM, Alexandru Isaila wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>>  
>> -static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>>  {
>> -    struct vcpu *v;
>> -    struct hvm_hw_cpu ctxt;
>>      struct segment_register seg;
>> +    struct hvm_hw_cpu ctxt = {
>> +        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
>> +        .msr_tsc_aux = hvm_msr_tsc_aux(v),
>> +        .rax = v->arch.user_regs.rax,
>> +        .rbx = v->arch.user_regs.rbx,
>> +        .rcx = v->arch.user_regs.rcx,
>> +        .rdx = v->arch.user_regs.rdx,
>> +        .rbp = v->arch.user_regs.rbp,
>> +        .rsi = v->arch.user_regs.rsi,
>> +        .rdi = v->arch.user_regs.rdi,
>> +        .rsp = v->arch.user_regs.rsp,
>> +        .rip = v->arch.user_regs.rip,
>> +        .rflags = v->arch.user_regs.rflags,
>> +        .r8  = v->arch.user_regs.r8,
>> +        .r9  = v->arch.user_regs.r9,
>> +        .r10 = v->arch.user_regs.r10,
>> +        .r11 = v->arch.user_regs.r11,
>> +        .r12 = v->arch.user_regs.r12,
>> +        .r13 = v->arch.user_regs.r13,
>> +        .r14 = v->arch.user_regs.r14,
>> +        .r15 = v->arch.user_regs.r15,
>> +        .dr0 = v->arch.debugreg[0],
>> +        .dr1 = v->arch.debugreg[1],
>> +        .dr2 = v->arch.debugreg[2],
>> +        .dr3 = v->arch.debugreg[3],
>> +        .dr6 = v->arch.debugreg[6],
>> +        .dr7 = v->arch.debugreg[7],
>> +    };
>>  
>> -    for_each_vcpu ( d, v )
>> +    /*
>> +     * We don't need to save state for a vcpu that is down; the restore
>> +     * code will leave it down if there is nothing saved.
>> +     */
>> +    if ( v->pause_flags & VPF_down )
>> +        return 0;
> 
> Actually, we'd like to remove this if() if possible - even if the VCPU
> is down, we should still be able to query it and receive whatever state
> it is in, according to the Intel SDM, "9.1.1 Processor State After
> Reset". Any objections to this?

Certainly not in this series; it's therefore not really ideal to
discuss this on this thread instead of a separate one.

I'm also not convinced the state you'd receive would
actually be "after reset" state. We may need to synthesize
this if we really wanted to allow this.
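
Purely as an illustration (not something this series needs to do),
synthesizing such state could look roughly like the below, using the
architectural reset values from SDM 9.1.1 and the existing
struct hvm_hw_cpu field names:

static void synth_reset_cpu_ctxt(struct hvm_hw_cpu *ctxt)
{
    memset(ctxt, 0, sizeof(*ctxt));

    ctxt->rip      = 0x0000fff0;   /* reset vector offset */
    ctxt->rflags   = 0x00000002;   /* only reserved bit 1 set */
    ctxt->cr0      = 0x60000010;   /* CD | NW | ET */
    ctxt->dr6      = 0xffff0ff0;
    ctxt->dr7      = 0x00000400;
    ctxt->cs_sel   = 0xf000;
    ctxt->cs_base  = 0xffff0000;
    ctxt->cs_limit = 0xffff;
}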

> Also, reading the comment and looking at the code, it appears that the
> end of hvm_load_cpu_ctxt() actually wakes the VCPU up:
> 
> 1152     v->arch.debugreg[7] = ctxt.dr7;
> 1153
> 1154     v->arch.vgc_flags = VGCF_online;
> 1155
> 1156     /* Auxiliary processors should be woken immediately. */
> 1157     v->is_initialised = 1;
> 1158     clear_bit(_VPF_down, &v->pause_flags);
> 1159     vcpu_wake(v);
> 1160
> 1161     return 0;
> 1162 }
> 
> which appears to be non-architectural behaviour if we've correctly
> interpreted section 8.4 of Volume 3 of the SDM, which basically says that
> VCPUs should be woken by IPIs (though this is outside the scope of this
> patch).

I'm not sure hardware behavior can be used as a reference here:
Hardware doesn't allow loading of state the way we do here. Also
I don't see how/when else you would propose the unpausing would
happen here, in particular in the course of a (live) migration.

Jan




* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-22 15:15     ` Isaila Alexandru
@ 2018-08-29 14:02       ` Isaila Alexandru
  2018-08-29 14:13         ` Jan Beulich
  0 siblings, 1 reply; 31+ messages in thread
From: Isaila Alexandru @ 2018-08-29 14:02 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: wei.liu2, andrew.cooper3, ian.jackson, xen-devel, paul.durrant, jbeulich

On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
> On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> > 
> > On Wed, Aug 22, 2018 at 05:02:43PM +0300, Alexandru Isaila wrote:
> > > 
> > > 
> > > This patch is focused on changing hvm_save_one() to save one
> > > typecode from one vcpu; now that the save functions get data from a
> > > single vcpu we can pause the specific vcpu instead of the whole domain.
> > With this infrastructure added, allowing a single instance of a
> > specific device to be saved, I wonder if you would like to add a user
> > to the code in the tree.
> > 
> > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves the full
> > domain context just to get the CPU and the MTRR state of VCPU#0. Do
> > you think you could switch this code to use the newly introduced
> > machinery to save a single instance of a specific type?
> Sure, I will add a tool patch at the end of the series

Hi Roger, 

Is this urgent to be in this series? If not I will add a new patch
after it is all in. 

Thanks, 
Alex
> > 
> > 
> > > 
> > > 
> > > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > > 
> > > ---
> > > Changes since V15:
> > > 	- Moved pause/unpause calls into hvm_save_one()
> > > 	- Re-add the loop in hvm_save_one().
> > > ---
> > >  xen/arch/x86/domctl.c   |  2 --
> > >  xen/arch/x86/hvm/save.c | 12 ++++++++++--
> > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> > > index 8fbbf3a..cb53980 100644
> > > --- a/xen/arch/x86/domctl.c
> > > +++ b/xen/arch/x86/domctl.c
> > > @@ -591,12 +591,10 @@ long arch_do_domctl(
> > >               !is_hvm_domain(d) )
> > >              break;
> > >  
> > > -        domain_pause(d);
> > >          ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
> > >                             domctl->u.hvmcontext_partial.instance,
> > >                             domctl->u.hvmcontext_partial.buffer,
> > >                             &domctl->u.hvmcontext_partial.bufsz);
> > > -        domain_unpause(d);
> > >  
> > >          if ( !ret )
> > >              copyback = true;
> > > diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> > > index 49741e0..2d35f17 100644
> > > --- a/xen/arch/x86/hvm/save.c
> > > +++ b/xen/arch/x86/hvm/save.c
> > > @@ -149,12 +149,15 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
> > >          instance >= d->max_vcpus )
> > >          return -ENOENT;
> > >      ctxt.size = hvm_sr_handlers[typecode].size;
> > > -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> > > -        ctxt.size *= d->max_vcpus;
> > This chunk seems to belong to a different patch?
> > 
> > The change just mentions pausing a vpcu instead of the whole
> > domain,
> > but the size of the save context doesn't depend on whether the
> > whole
> > domain is paused vs a single vcpu is paused.
> Right, git rebase did not report any error so it tricked me. I will
> have that removed from this patch. 
> 
> Thanks, 
> Alex
> 

* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-29 14:02       ` Isaila Alexandru
@ 2018-08-29 14:13         ` Jan Beulich
  2018-08-31 13:56           ` Isaila Alexandru
  0 siblings, 1 reply; 31+ messages in thread
From: Jan Beulich @ 2018-08-29 14:13 UTC (permalink / raw)
  To: aisaila
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Roger Pau Monne

>>> On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
> On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
>> On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
>> > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves the
>> > full
>> > domain context just to get the CPU and the MTRR state of VCPU#0. Do
>> > you think you could switch this code to use the newly introduced
>> > machinery to save a single instance of a specific type?
>> Sure, I will add a tool patch at the end of the series
> 
> Is this urgent to be in this series? If not I will add a new patch
> after it is all in. 

Considering the problems that there have been with this series,
anything to help build confidence in things still working for all
cases would help here, so I'm pretty glad Roger thought of this,
and while I wouldn't make it as strong as "the series can't go
in without this", I'd still much prefer if you too the time.

Jan



* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-29 14:13         ` Jan Beulich
@ 2018-08-31 13:56           ` Isaila Alexandru
  2018-08-31 15:23             ` Jan Beulich
  2018-09-03 14:36             ` Roger Pau Monné
  0 siblings, 2 replies; 31+ messages in thread
From: Isaila Alexandru @ 2018-08-31 13:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Roger Pau Monne

On Mi, 2018-08-29 at 08:13 -0600, Jan Beulich wrote:
> > 
> > > 
> > > > 
> > > > On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
> > On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
> > > 
> > > On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> > > > 
> > > > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves
> > > > the
> > > > full
> > > > domain context just to get the CPU and the MTRR state of
> > > > VCPU#0. Do
> > > > you think you could switch this code to use the newly
> > > > introduced
> > > > machinery to save a single instance of a specific type?
> > > Sure, I will add a tool patch at the end of the series
> > Is this urgent to be in this series? If not I will add a new patch
> > after it is all in. 
> Considering the problems that there have been with this series,
> anything to help build confidence in things still working for all
> cases would help here, so I'm pretty glad Roger thought of this,
> and while I wouldn't make it as strong as "the series can't go
> in without this", I'd still much prefer if you took the time.

I don't think it is possible to use getcontext_partial() in vcpu_hvm()
because of the need to have a header for xc_domain_hvm_setcontext(), and
the only way to get it is via xc_domain_hvm_getcontext(). There is also
a comment there that states the same thing:

    /*
     * Get the full HVM context in order to have the header, it is not
     * possible to get the header with getcontext_partial, and crafting one
     * from userspace is also not an option since cpuid is trapped and
     * modified by Xen.
     */
I hope I understood the request correctly to start with and if not
please clarify. 

Regards,
Alex  


* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-31 13:56           ` Isaila Alexandru
@ 2018-08-31 15:23             ` Jan Beulich
  2018-09-03 14:36             ` Roger Pau Monné
  1 sibling, 0 replies; 31+ messages in thread
From: Jan Beulich @ 2018-08-31 15:23 UTC (permalink / raw)
  To: aisaila
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Roger Pau Monne

>>> On 31.08.18 at 15:56, <aisaila@bitdefender.com> wrote:
> On Mi, 2018-08-29 at 08:13 -0600, Jan Beulich wrote:
>> > > > On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
>> > On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
>> > > On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
>> > > > 
>> > > > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves
>> > > > the
>> > > > full
>> > > > domain context just to get the CPU and the MTRR state of
>> > > > VCPU#0. Do
>> > > > you think you could switch this code to use the newly
>> > > > introduced
>> > > > machinery to save a single instance of a specific type?
>> > > Sure, I will add a tool patch at the end of the series
>> > Is this urgent to be in this series? If not I will add a new patch
>> > after it is all in. 
>> Considering the problems that there have been with this series,
>> anything to help build confidence in things still working for all
>> cases would help here, so I'm pretty glad Roger thought of this,
>> and while I wouldn't make it as strong as "the series can't go
>> in without this", I'd still much prefer if you took the time.
> 
> I don't think it is possible to use getcontext_partial() in vcpu_hvm()
> because of the need to have a header for xc_domain_hvm_setcontext() and
> the only way to get it is by xc_domain_hvm_getcontext(). There is also
> a comment there that states the same thing
> "/*
>      * Get the full HVM context in order to have the header, it is not
>      * possible to get the header with getcontext_partial, and crafting
> one
>      * from userspace is also not an option since cpuid is trapped and
>      * modified by Xen.
>      */
> "
> I hope I understood the request correctly to start with and if not
> please clarify. 

Hmm, I'm in no way a tools expert, but I agree that this function doesn't
look like it's a good candidate for conversion. I didn't look at it when
Roger pointed you there, and with my previous reply I was only trying
to support the general aspect of Roger's request, not the specific
example. Which means - if there's no better conversion candidate, then
so be it and the series is going to be fine without the extra patch (but
of course we then expect you even more to make sure the best you
can that there are no regressions).

Jan



* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-08-31 13:56           ` Isaila Alexandru
  2018-08-31 15:23             ` Jan Beulich
@ 2018-09-03 14:36             ` Roger Pau Monné
  2018-09-03 14:42               ` Isaila Alexandru
  1 sibling, 1 reply; 31+ messages in thread
From: Roger Pau Monné @ 2018-09-03 14:36 UTC (permalink / raw)
  To: Isaila Alexandru
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Jan Beulich

On Fri, Aug 31, 2018 at 04:56:21PM +0300, Isaila Alexandru wrote:
> On Mi, 2018-08-29 at 08:13 -0600, Jan Beulich wrote:
> > > 
> > > > 
> > > > > 
> > > > > On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
> > > On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
> > > > 
> > > > On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> > > > > 
> > > > > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it saves
> > > > > the
> > > > > full
> > > > > domain context just to get the CPU and the MTRR state of
> > > > > VCPU#0. Do
> > > > > you think you could switch this code to use the newly
> > > > > introduced
> > > > > machinery to save a single instance of a specific type?
> > > > Sure, I will add a tool patch at the end of the series
> > > Is this urgent to be in this series? If not I will add a new patch
> > > after it is all in. 
> > Considering the problems that there have been with this series,
> > anything to help build confidence in things still working for all
> > cases would help here, so I'm pretty glad Roger thought of this,
> > and while I wouldn't make it as strong as "the series can't go
> > in without this", I'd still much prefer if you took the time.
> 
> I don't think it is possible to use getcontext_partial() in vcpu_hvm()
> because of the need to have a header for xc_domain_hvm_setcontext() and
> the only way to get it is by xc_domain_hvm_getcontext(). There is also
> a comment there that states the same thing
> "/*
>      * Get the full HVM context in order to have the header, it is not
>      * possible to get the header with getcontext_partial, and crafting one
>      * from userspace is also not an option since cpuid is trapped and
>      * modified by Xen.
>      */
> "
> I hope I understood the request correctly to start with and if not
> please clarify. 

But I expect you also get such a header when fetching the state of a
single device, or else how do you use this new hypercall in
conjunction with xc_domain_hvm_setcontext?

Thanks, Roger.


* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-09-03 14:36             ` Roger Pau Monné
@ 2018-09-03 14:42               ` Isaila Alexandru
  2018-09-03 14:56                 ` Roger Pau Monné
  0 siblings, 1 reply; 31+ messages in thread
From: Isaila Alexandru @ 2018-09-03 14:42 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Jan Beulich

On Lu, 2018-09-03 at 16:36 +0200, Roger Pau Monné wrote:
> On Fri, Aug 31, 2018 at 04:56:21PM +0300, Isaila Alexandru wrote:
> > 
> > On Mi, 2018-08-29 at 08:13 -0600, Jan Beulich wrote:
> > > 
> > > > 
> > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > 
> > > > > > On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
> > > > On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
> > > > > 
> > > > > 
> > > > > On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> > > > > > 
> > > > > > 
> > > > > > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it
> > > > > > saves
> > > > > > the
> > > > > > full
> > > > > > domain context just to get the CPU and the MTRR state of
> > > > > > VCPU#0. Do
> > > > > > you think you could switch this code to use the newly
> > > > > > introduced
> > > > > > machinery to save a single instance of a specific type?
> > > > > Sure, I will add a tool patch at the end of the series
> > > > Is this urgent to be in this series? If not I will add a new
> > > > patch
> > > > after it is all in. 
> > > Considering the problems that there have been with this series,
> > > anything to help build confidence in things still working for all
> > > cases would help here, so I'm pretty glad Roger thought of this,
> > > and while I wouldn't make it as strong as "the series can't go
> > > in without this", I'd still much prefer if you took the time.
> > I don't think it is possible to use getcontext_partial()
> > in vcpu_hvm()
> > because of the need to have a header for xc_domain_hvm_setcontext()
> > and
> > the only way to get it is by xc_domain_hvm_getcontext(). There is
> > also
> > a comment there that states the same thing
> > "/*
> >      * Get the full HVM context in order to have the header, it is
> > not
> >      * possible to get the header with getcontext_partial, and
> > crafting
> > one
> >      * from userspace is also not an option since cpuid is trapped
> > and
> >      * modified by Xen.
> >      */
> > "
> > I hope I understood the request correctly to start with and if not
> > please clarify. 
> But I expect you also get such header when fetching the state of a
> single device, or else how do you use this new hypercall in
> conjunction with xc_domain_hvm_setcontext?
> 
The new *save_one functions are based on the
old xc_domain_hvm_getcontext_partial(), which did not send the header. I
have had no requests to change this behavior up to this point.
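
For reference, a rough sketch of how the partial call gets used (xch
being an open xc interface handle and domid the target domain): only the
requested record's payload comes back, with no header and no descriptors,
so it cannot be fed straight into xc_domain_hvm_setcontext().

#include <xenctrl.h>
#include <xen/arch-x86/hvm/save.h>  /* struct hvm_hw_cpu, HVM_SAVE_CODE() */

static int get_vcpu0_cpu_ctxt(xc_interface *xch, uint32_t domid,
                              struct hvm_hw_cpu *cpu)
{
    /* Fetch just the CPU record of vcpu 0; returns 0 on success. */
    return xc_domain_hvm_getcontext_partial(xch, domid, HVM_SAVE_CODE(CPU),
                                            0 /* vcpu */, cpu, sizeof(*cpu));
}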

Thanks, 
Alex


* Re: [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-09-03 14:42               ` Isaila Alexandru
@ 2018-09-03 14:56                 ` Roger Pau Monné
  0 siblings, 0 replies; 31+ messages in thread
From: Roger Pau Monné @ 2018-09-03 14:56 UTC (permalink / raw)
  To: Isaila Alexandru
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, xen-devel, Paul Durrant,
	Jan Beulich

On Mon, Sep 03, 2018 at 05:42:43PM +0300, Isaila Alexandru wrote:
> On Lu, 2018-09-03 at 16:36 +0200, Roger Pau Monné wrote:
> > On Fri, Aug 31, 2018 at 04:56:21PM +0300, Isaila Alexandru wrote:
> > > 
> > > On Mi, 2018-08-29 at 08:13 -0600, Jan Beulich wrote:
> > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > On 29.08.18 at 16:02, <aisaila@bitdefender.com> wrote:
> > > > > On Mi, 2018-08-22 at 18:15 +0300, Isaila Alexandru wrote:
> > > > > > 
> > > > > > 
> > > > > > On Mi, 2018-08-22 at 16:41 +0200, Roger Pau Monné wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > If you look at vcpu_hvm in tools/libxc/xc_dom_x86.c it
> > > > > > > saves
> > > > > > > the
> > > > > > > full
> > > > > > > domain context just to get the CPU and the MTRR state of
> > > > > > > VCPU#0. Do
> > > > > > > you think you could switch this code to use the newly
> > > > > > > introduced
> > > > > > > machinery to save a single instance of a specific type?
> > > > > > Sure, I will add a tool patch at the end of the series
> > > > > Is this urgent to be in this series? If not I will add a new
> > > > > patch
> > > > > after it is all in. 
> > > > Considering the problems that there have been with this series,
> > > > anything to help build confidence in things still working for all
> > > > cases would help here, so I'm pretty glad Roger thought of this,
> > > > and while I wouldn't make it as strong as "the series can't go
> > > > in without this", I'd still much prefer if you took the time.
> > > I don't think it is possible to use getcontext_partial()
> > > in vcpu_hvm()
> > > because of the need to have a header for xc_domain_hvm_setcontext()
> > > and
> > > the only way to get it is by xc_domain_hvm_getcontext(). There is
> > > also
> > > a comment there that states the same thing
> > > "/*
> > >      * Get the full HVM context in order to have the header, it is
> > > not
> > >      * possible to get the header with getcontext_partial, and
> > > crafting
> > > one
> > >      * from userspace is also not an option since cpuid is trapped
> > > and
> > >      * modified by Xen.
> > >      */
> > > "
> > > I hope I understood the request correctly to start with and if not
> > > please clarify. 
> > But I expect you also get such header when fetching the state of a
> > single device, or else how do you use this new hypercall in
> > conjunction with xc_domain_hvm_setcontext?
> > 
> The new *save_one functions are based on the
> old xc_domain_hvm_getcontext_partial() that did not send the header. I
> had no requests to change this behavior by this point.

Hm, I have to admit I think this interface is broken, but I'm not
going to insist on this anyway.

IMO it would be good for you to fix this so that these new functions
can be used in vcpu_hvm; then you will have a user in-tree, so the
chances of someone accidentally breaking the interface would be very
low. Right now there's no user of the interface in-tree, so breakages
in this interface would go unnoticed by the test system.

Roger.


end of thread

Thread overview: 31+ messages
2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-08-28 15:33   ` Razvan Cojocaru
2018-08-28 15:43     ` Jan Beulich
2018-08-22 14:02 ` [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
2018-08-22 14:28   ` Roger Pau Monné
2018-08-27  8:39     ` Jan Beulich
2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-08-22 15:04   ` Roger Pau Monné
2018-08-22 15:22     ` Isaila Alexandru
2018-08-22 15:26       ` Roger Pau Monné
2018-08-28 15:27   ` Jan Beulich
2018-08-22 14:02 ` [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-08-22 14:41   ` Roger Pau Monné
2018-08-22 15:15     ` Isaila Alexandru
2018-08-29 14:02       ` Isaila Alexandru
2018-08-29 14:13         ` Jan Beulich
2018-08-31 13:56           ` Isaila Alexandru
2018-08-31 15:23             ` Jan Beulich
2018-09-03 14:36             ` Roger Pau Monné
2018-09-03 14:42               ` Isaila Alexandru
2018-09-03 14:56                 ` Roger Pau Monné
