* [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3

Hi all,

This patch series addresses the idea of saving data from a single vcpu instance.
It starts by adding per-vcpu *_save_one() functions, then introduces a handler for
the new save_one funcs and makes use of it in hvm_save() and hvm_save_one(). The
final patches remove the now-redundant per-domain save functions and change
hvm_save_one() to use vcpu_pause() instead of domain_pause(). A minimal sketch of
the save_one pattern is included after the shortlog below.

Cheers,

Alexandru Isaila (13):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Introduce lapic_save_hidden_one
x86/hvm: Introduce lapic_save_regs_one func
x86/hvm: Add handler for save_one funcs
x86/domctl: Use hvm_save_vcpu_handler
x86/hvm: Remove redundant save functions
x86/domctl: Don't pause the whole domain if only getting vcpu state
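
The per-vcpu split in patches 1-9 follows the same pattern throughout. Here is a
minimal sketch of it (hvm_example_ctxt, the EXAMPLE save code and
arch.example_state are hypothetical stand-ins; struct vcpu, hvm_domain_context_t,
for_each_vcpu() and hvm_save_entry() are the existing Xen interfaces the real
patches use):

    /* New: save the state of one vcpu into the domain context stream. */
    static int example_save_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_example_ctxt ctxt = {
            .state = v->arch.example_state, /* hypothetical per-vcpu field */
        };

        return hvm_save_entry(EXAMPLE, v->vcpu_id, h, &ctxt);
    }

    /* The old whole-domain handler shrinks to a loop over the vcpus. */
    static int example_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            err = example_save_one(v, h);
            if ( err )
                break;
        }

        return err;
    }

Patch 10 registers the save_one variant alongside the whole-domain handler,
patch 11 makes hvm_save() prefer it, and patch 12 drops the now-redundant
per-domain loops.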


* [PATCH v18 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V11:
	- Removed the memset and added init with {}.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..31e553c 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_vmce_vcpu ctxt = {
+        .caps = v->arch.vmce.mcg_cap,
+        .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
+        .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
+        .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
+    };
+
+    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        struct hvm_vmce_vcpu ctxt = {
-            .caps = v->arch.vmce.mcg_cap,
-            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-        };
-
-        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+        err = vmce_save_vcpu_ctxt_one(v, h);
         if ( err )
             break;
     }
-- 
2.7.4



* [PATCH v18 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V13:
	- Moved tsc_adjust to the initializer.
---
 xen/arch/x86/hvm/hvm.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..d90da9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,16 +740,23 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_tsc_adjust ctxt = {
+        .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
+    };
+
+    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct hvm_tsc_adjust ctxt;
     int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-        err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+        err = hvm_save_tsc_adjust_one(v, h);
         if ( err )
             break;
     }
-- 
2.7.4



* [PATCH v18 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Move all free fields to the initializer
	- Add blank line to before the return
	- Move v->pause_flags check to the save_one function.
---
 xen/arch/x86/hvm/hvm.c | 219 +++++++++++++++++++++++++------------------------
 1 file changed, 113 insertions(+), 106 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index d90da9a..333c342 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
-    struct hvm_hw_cpu ctxt;
     struct segment_register seg;
+    struct hvm_hw_cpu ctxt = {
+        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
+        .msr_tsc_aux = hvm_msr_tsc_aux(v),
+        .rax = v->arch.user_regs.rax,
+        .rbx = v->arch.user_regs.rbx,
+        .rcx = v->arch.user_regs.rcx,
+        .rdx = v->arch.user_regs.rdx,
+        .rbp = v->arch.user_regs.rbp,
+        .rsi = v->arch.user_regs.rsi,
+        .rdi = v->arch.user_regs.rdi,
+        .rsp = v->arch.user_regs.rsp,
+        .rip = v->arch.user_regs.rip,
+        .rflags = v->arch.user_regs.rflags,
+        .r8  = v->arch.user_regs.r8,
+        .r9  = v->arch.user_regs.r9,
+        .r10 = v->arch.user_regs.r10,
+        .r11 = v->arch.user_regs.r11,
+        .r12 = v->arch.user_regs.r12,
+        .r13 = v->arch.user_regs.r13,
+        .r14 = v->arch.user_regs.r14,
+        .r15 = v->arch.user_regs.r15,
+        .dr0 = v->arch.debugreg[0],
+        .dr1 = v->arch.debugreg[1],
+        .dr2 = v->arch.debugreg[2],
+        .dr3 = v->arch.debugreg[3],
+        .dr6 = v->arch.debugreg[6],
+        .dr7 = v->arch.debugreg[7],
+    };
 
-    for_each_vcpu ( d, v )
+    /*
+     * We don't need to save state for a vcpu that is down; the restore
+     * code will leave it down if there is nothing saved.
+     */
+    if ( v->pause_flags & VPF_down )
+        return 0;
+
+    /* Architecture-specific vmcs/vmcb bits */
+    hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+    hvm_get_segment_register(v, x86_seg_idtr, &seg);
+    ctxt.idtr_limit = seg.limit;
+    ctxt.idtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+    ctxt.gdtr_limit = seg.limit;
+    ctxt.gdtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_cs, &seg);
+    ctxt.cs_sel = seg.sel;
+    ctxt.cs_limit = seg.limit;
+    ctxt.cs_base = seg.base;
+    ctxt.cs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ds, &seg);
+    ctxt.ds_sel = seg.sel;
+    ctxt.ds_limit = seg.limit;
+    ctxt.ds_base = seg.base;
+    ctxt.ds_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_es, &seg);
+    ctxt.es_sel = seg.sel;
+    ctxt.es_limit = seg.limit;
+    ctxt.es_base = seg.base;
+    ctxt.es_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ss, &seg);
+    ctxt.ss_sel = seg.sel;
+    ctxt.ss_limit = seg.limit;
+    ctxt.ss_base = seg.base;
+    ctxt.ss_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_fs, &seg);
+    ctxt.fs_sel = seg.sel;
+    ctxt.fs_limit = seg.limit;
+    ctxt.fs_base = seg.base;
+    ctxt.fs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_gs, &seg);
+    ctxt.gs_sel = seg.sel;
+    ctxt.gs_limit = seg.limit;
+    ctxt.gs_base = seg.base;
+    ctxt.gs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_tr, &seg);
+    ctxt.tr_sel = seg.sel;
+    ctxt.tr_limit = seg.limit;
+    ctxt.tr_base = seg.base;
+    ctxt.tr_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+    ctxt.ldtr_sel = seg.sel;
+    ctxt.ldtr_limit = seg.limit;
+    ctxt.ldtr_base = seg.base;
+    ctxt.ldtr_arbytes = seg.attr;
+
+    if ( v->fpu_initialised )
     {
-        /* We don't need to save state for a vcpu that is down; the restore 
-         * code will leave it down if there is nothing saved. */
-        if ( v->pause_flags & VPF_down )
-            continue;
+        memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+        ctxt.flags = XEN_X86_FPU_INITIALISED;
+    }
 
-        memset(&ctxt, 0, sizeof(ctxt));
-
-        /* Architecture-specific vmcs/vmcb bits */
-        hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-        ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-        hvm_get_segment_register(v, x86_seg_idtr, &seg);
-        ctxt.idtr_limit = seg.limit;
-        ctxt.idtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-        ctxt.gdtr_limit = seg.limit;
-        ctxt.gdtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_cs, &seg);
-        ctxt.cs_sel = seg.sel;
-        ctxt.cs_limit = seg.limit;
-        ctxt.cs_base = seg.base;
-        ctxt.cs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ds, &seg);
-        ctxt.ds_sel = seg.sel;
-        ctxt.ds_limit = seg.limit;
-        ctxt.ds_base = seg.base;
-        ctxt.ds_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_es, &seg);
-        ctxt.es_sel = seg.sel;
-        ctxt.es_limit = seg.limit;
-        ctxt.es_base = seg.base;
-        ctxt.es_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ss, &seg);
-        ctxt.ss_sel = seg.sel;
-        ctxt.ss_limit = seg.limit;
-        ctxt.ss_base = seg.base;
-        ctxt.ss_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_fs, &seg);
-        ctxt.fs_sel = seg.sel;
-        ctxt.fs_limit = seg.limit;
-        ctxt.fs_base = seg.base;
-        ctxt.fs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_gs, &seg);
-        ctxt.gs_sel = seg.sel;
-        ctxt.gs_limit = seg.limit;
-        ctxt.gs_base = seg.base;
-        ctxt.gs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_tr, &seg);
-        ctxt.tr_sel = seg.sel;
-        ctxt.tr_limit = seg.limit;
-        ctxt.tr_base = seg.base;
-        ctxt.tr_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ldtr, &seg);
-        ctxt.ldtr_sel = seg.sel;
-        ctxt.ldtr_limit = seg.limit;
-        ctxt.ldtr_base = seg.base;
-        ctxt.ldtr_arbytes = seg.attr;
-
-        if ( v->fpu_initialised )
-        {
-            memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
-            ctxt.flags = XEN_X86_FPU_INITIALISED;
-        }
+    return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
+static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        ctxt.rax = v->arch.user_regs.rax;
-        ctxt.rbx = v->arch.user_regs.rbx;
-        ctxt.rcx = v->arch.user_regs.rcx;
-        ctxt.rdx = v->arch.user_regs.rdx;
-        ctxt.rbp = v->arch.user_regs.rbp;
-        ctxt.rsi = v->arch.user_regs.rsi;
-        ctxt.rdi = v->arch.user_regs.rdi;
-        ctxt.rsp = v->arch.user_regs.rsp;
-        ctxt.rip = v->arch.user_regs.rip;
-        ctxt.rflags = v->arch.user_regs.rflags;
-        ctxt.r8  = v->arch.user_regs.r8;
-        ctxt.r9  = v->arch.user_regs.r9;
-        ctxt.r10 = v->arch.user_regs.r10;
-        ctxt.r11 = v->arch.user_regs.r11;
-        ctxt.r12 = v->arch.user_regs.r12;
-        ctxt.r13 = v->arch.user_regs.r13;
-        ctxt.r14 = v->arch.user_regs.r14;
-        ctxt.r15 = v->arch.user_regs.r15;
-        ctxt.dr0 = v->arch.debugreg[0];
-        ctxt.dr1 = v->arch.debugreg[1];
-        ctxt.dr2 = v->arch.debugreg[2];
-        ctxt.dr3 = v->arch.debugreg[3];
-        ctxt.dr6 = v->arch.debugreg[6];
-        ctxt.dr7 = v->arch.debugreg[7];
-
-        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1; 
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_ctxt_one(v, h);
+        if ( err )
+            break;
     }
-    return 0;
+
+    return err;
 }
 
 /* Return a string indicating the error, or NULL for valid. */
-- 
2.7.4



* [PATCH v18 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Remove err init
	- Add blank line ahead of return
	- Move xsave_enabled() check to the save_one func.
---
 xen/arch/x86/hvm/hvm.c | 47 +++++++++++++++++++++++++++++------------------
 1 file changed, 29 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 333c342..5b0820e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1187,35 +1187,46 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
     struct hvm_hw_cpu_xsave *ctxt;
+    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+    int err;
 
-    if ( !cpu_has_xsave )
+    if ( !cpu_has_xsave || !xsave_enabled(v) )
         return 0;   /* do nothing */
 
-    for_each_vcpu ( d, v )
-    {
-        unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
+    err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
+    if ( err )
+        return err;
 
-        if ( !xsave_enabled(v) )
-            continue;
-        if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-            return 1;
-        ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-        h->cur += size;
+    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+    h->cur += size;
+    ctxt->xfeature_mask = xfeature_mask;
+    ctxt->xcr0 = v->arch.xcr0;
+    ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-        ctxt->xfeature_mask = xfeature_mask;
-        ctxt->xcr0 = v->arch.xcr0;
-        ctxt->xcr0_accum = v->arch.xcr0_accum;
-        expand_xsave_states(v, &ctxt->save_area,
-                            size - offsetof(typeof(*ctxt), save_area));
-    }
+    expand_xsave_states(v, &ctxt->save_area,
+                        size - offsetof(typeof(*ctxt), save_area));
 
     return 0;
 }
 
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
+
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_xsave_states_one(v, h);
+        if ( err )
+            break;
+    }
+
+    return err;
+}
+
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
-- 
2.7.4



* [PATCH v18 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Remove err init
	- Add blank line ahead of return.
---
 xen/arch/x86/hvm/hvm.c | 106 +++++++++++++++++++++++++++----------------------
 1 file changed, 59 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5b0820e..7df8744 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1364,69 +1364,81 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+    struct hvm_msr *ctxt;
+    unsigned int i;
+    int err;
 
-    for_each_vcpu ( d, v )
+    err = _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                         HVM_CPU_MSR_SIZE(msr_count_max));
+    if ( err )
+        return err;
+    ctxt = (struct hvm_msr *)&h->data[h->cur];
+    ctxt->count = 0;
+
+    for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
     {
-        struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-        struct hvm_msr *ctxt;
-        unsigned int i;
+        uint64_t val;
+        int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
-                             HVM_CPU_MSR_SIZE(msr_count_max)) )
-            return 1;
-        ctxt = (struct hvm_msr *)&h->data[h->cur];
-        ctxt->count = 0;
+        /*
+         * It is the programmers responsibility to ensure that
+         * msrs_to_send[] contain generally-read/write MSRs.
+         * X86EMUL_EXCEPTION here implies a missing feature, and that the
+         * guest doesn't have access to the MSR.
+         */
+        if ( rc == X86EMUL_EXCEPTION )
+            continue;
 
-        for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+        if ( rc != X86EMUL_OKAY )
         {
-            uint64_t val;
-            int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+            ASSERT_UNREACHABLE();
+            return -ENXIO;
+        }
 
-            /*
-             * It is the programmers responsibility to ensure that
-             * msrs_to_send[] contain generally-read/write MSRs.
-             * X86EMUL_EXCEPTION here implies a missing feature, and that the
-             * guest doesn't have access to the MSR.
-             */
-            if ( rc == X86EMUL_EXCEPTION )
-                continue;
+        if ( !val )
+            continue; /* Skip empty MSRs. */
 
-            if ( rc != X86EMUL_OKAY )
-            {
-                ASSERT_UNREACHABLE();
-                return -ENXIO;
-            }
+        ctxt->msr[ctxt->count].index = msrs_to_send[i];
+        ctxt->msr[ctxt->count++].val = val;
+    }
 
-            if ( !val )
-                continue; /* Skip empty MSRs. */
+    if ( hvm_funcs.save_msr )
+        hvm_funcs.save_msr(v, ctxt);
 
-            ctxt->msr[ctxt->count].index = msrs_to_send[i];
-            ctxt->msr[ctxt->count++].val = val;
-        }
+    ASSERT(ctxt->count <= msr_count_max);
 
-        if ( hvm_funcs.save_msr )
-            hvm_funcs.save_msr(v, ctxt);
+    for ( i = 0; i < ctxt->count; ++i )
+        ctxt->msr[i]._rsvd = 0;
 
-        ASSERT(ctxt->count <= msr_count_max);
+    if ( ctxt->count )
+    {
+        /* Rewrite length to indicate how much space we actually used. */
+        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+    }
+    else
+        /* or rewind and remove the descriptor from the stream. */
+        h->cur -= sizeof(struct hvm_save_descriptor);
 
-        for ( i = 0; i < ctxt->count; ++i )
-            ctxt->msr[i]._rsvd = 0;
+    return 0;
+}
 
-        if ( ctxt->count )
-        {
-            /* Rewrite length to indicate how much space we actually used. */
-            desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-        }
-        else
-            /* or rewind and remove the descriptor from the stream. */
-            h->cur -= sizeof(struct hvm_save_descriptor);
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
+
+    for_each_vcpu ( d, v )
+    {
+        err = hvm_save_cpu_msrs_one(v, h);
+        if ( err )
+            break;
     }
 
-    return 0;
+    return err;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v18 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v16:
	- Address style comments.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 80 ++++++++++++++++++++++++++-----------------------
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..298d7ee 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,52 +718,58 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+    struct hvm_hw_mtrr hw_mtrr = {
+        .msr_mtrr_def_type = mtrr_state->def_type |
+                             MASK_INSR(mtrr_state->fixed_enabled,
+                                       MTRRdefType_FE) |
+                             MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
+        .msr_mtrr_cap      = mtrr_state->mtrr_cap,
+    };
+    unsigned int i;
 
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
+    if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+         (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
     {
-        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-        struct hvm_hw_mtrr hw_mtrr = {
-            .msr_mtrr_def_type = mtrr_state->def_type |
-                                 MASK_INSR(mtrr_state->fixed_enabled,
-                                           MTRRdefType_FE) |
-                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-            .msr_mtrr_cap      = mtrr_state->mtrr_cap,
-        };
-        unsigned int i;
+        dprintk(XENLOG_G_ERR,
+                "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+                v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+        return -EINVAL;
+    }
 
-        if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
-             (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-        {
-            dprintk(XENLOG_G_ERR,
-                    "HVM save: %pv: too many (%lu) variable range MTRRs\n",
-                    v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-            return -EINVAL;
-        }
+    hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+
+    for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+    {
+        hw_mtrr.msr_mtrr_var[i * 2] = mtrr_state->var_ranges->base;
+        hw_mtrr.msr_mtrr_var[i * 2 + 1] = mtrr_state->var_ranges->mask;
+    }
 
-        hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+    BUILD_BUG_ON(sizeof(hw_mtrr.msr_mtrr_fixed) !=
+                 sizeof(mtrr_state->fixed_ranges));
 
-        for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-        {
-            /* save physbase */
-            hw_mtrr.msr_mtrr_var[i*2] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2];
-            /* save physmask */
-            hw_mtrr.msr_mtrr_var[i*2+1] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-        }
+    memcpy(hw_mtrr.msr_mtrr_fixed, mtrr_state->fixed_ranges,
+           sizeof(hw_mtrr.msr_mtrr_fixed));
 
-        for ( i = 0; i < NUM_FIXED_MSR; i++ )
-            hw_mtrr.msr_mtrr_fixed[i] =
-                ((uint64_t*)mtrr_state->fixed_ranges)[i];
+    return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-            return 1;
+    /* save mtrr&pat */
+    for_each_vcpu(d, v)
+    {
+       err = hvm_save_mtrr_msr_one(v, h);
+       if ( err )
+           break;
     }
-    return 0;
+
+    return err;
 }
 
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v18 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

---
Changes since V14:
	- Moved all the operations in the initializer.
---
 xen/arch/x86/hvm/viridian.c | 30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..3f52d38 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,24 +1026,32 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct hvm_viridian_vcpu_context ctxt = {
+        .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+        .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+    };
 
-    if ( !is_viridian_domain(d) )
+    if ( !is_viridian_domain(v->domain) )
         return 0;
 
-    for_each_vcpu( d, v ) {
-        struct hvm_viridian_vcpu_context ctxt = {
-            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-        };
+    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    int err = 0;
 
-        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1;
+    for_each_vcpu ( d, v )
+    {
+        err = viridian_save_vcpu_ctxt_one(v, h);
+        if ( err )
+            break;
     }
 
-    return 0;
+    return err;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v18 08/13] x86/hvm: Introduce lapic_save_hidden_one
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v15:
        - Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..429ffb5 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,23 +1435,27 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
+static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    if ( !has_vlapic(v->domain) )
+        return 0;
+
+    return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
+}
+
 static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct vlapic *s;
-    int rc = 0;
-
-    if ( !has_vlapic(d) )
-        return 0;
+    int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
+        err = lapic_save_hidden_one(v, h);
+        if ( err )
             break;
     }
 
-    return rc;
+    return err;
 }
 
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4



* [PATCH v18 09/13] x86/hvm: Introduce lapic_save_regs_one func
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since v15:
	- Drop struct vlapic *s.
---
 xen/arch/x86/hvm/vlapic.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 429ffb5..2e73615 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1458,26 +1458,30 @@ static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
     return err;
 }
 
+static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    if ( !has_vlapic(v->domain) )
+        return 0;
+
+    if ( hvm_funcs.sync_pir_to_irr )
+        hvm_funcs.sync_pir_to_irr(v);
+
+    return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
+}
+
 static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct vlapic *s;
-    int rc = 0;
-
-    if ( !has_vlapic(d) )
-        return 0;
+    int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        if ( hvm_funcs.sync_pir_to_irr )
-            hvm_funcs.sync_pir_to_irr(v);
-
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
+        err = lapic_save_regs_one(v, h);
+        if ( err )
             break;
     }
 
-    return rc;
+    return err;
 }
 
 /*
-- 
2.7.4



* [PATCH v18 10/13] x86/hvm: Add handler for save_one funcs
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changes since V14:
	- Change handler name from hvm_save_one_handler to
	  hvm_save_vcpu_handler.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c        | 2 +-
 xen/arch/x86/hvm/hvm.c         | 6 +++++-
 xen/arch/x86/hvm/i8254.c       | 2 +-
 xen/arch/x86/hvm/irq.c         | 6 +++---
 xen/arch/x86/hvm/mtrr.c        | 4 ++--
 xen/arch/x86/hvm/pmtimer.c     | 2 +-
 xen/arch/x86/hvm/rtc.c         | 2 +-
 xen/arch/x86/hvm/save.c        | 3 +++
 xen/arch/x86/hvm/vioapic.c     | 2 +-
 xen/arch/x86/hvm/viridian.c    | 3 ++-
 xen/arch/x86/hvm/vlapic.c      | 4 ++--
 xen/arch/x86/hvm/vpic.c        | 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +++++-
 14 files changed, 29 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 31e553c..35044d7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+                          vmce_save_vcpu_ctxt_one,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7df8744..4a70251 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,6 +785,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+                          hvm_save_tsc_adjust_one,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1180,7 +1181,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+                          hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1533,6 +1535,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
+                        hvm_save_cpu_xsave_states_one,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1545,6 +1548,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
+                            hvm_save_cpu_msrs_one,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 298d7ee..1c4e731 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -822,8 +822,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
-                          1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+                          hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 422b96c..1106b96 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,6 +85,7 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
+    hvm_save_vcpu_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
@@ -95,6 +96,7 @@ static struct {
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
+                                hvm_save_vcpu_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
+    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 3f52d38..268ccce 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
 static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1085,6 +1085,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+                          viridian_save_vcpu_ctxt_one,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 2e73615..7b6f400 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1584,9 +1584,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..f2283fc 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * restoring.  Both return non-zero on error. */
 typedef int (*hvm_save_handler) (struct domain *d, 
                                  hvm_domain_context_t *h);
+typedef int (*hvm_save_vcpu_handler)(struct  vcpu *v,
+                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
+                         hvm_save_vcpu_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
+                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



* [PATCH v18 11/13] x86/domctl: Use hvm_save_vcpu_handler
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch makes hvm_save() use the new save_one functions.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V17:
	- Remove double ;
	- Move struct vcpu *v to reduce scope
	- Remove stray lines.
---
 xen/arch/x86/hvm/save.c | 26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1106b96..3d08aab 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -195,7 +195,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     char *c;
     struct hvm_save_header hdr;
     struct hvm_save_end end;
-    hvm_save_handler handler;
     unsigned int i;
 
     if ( d->is_dying )
@@ -223,8 +222,27 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     /* Save all available kinds of state */
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
-        handler = hvm_sr_handlers[i].save;
-        if ( handler != NULL )
+        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
+        hvm_save_handler handler = hvm_sr_handlers[i].save;
+
+        if ( save_one_handler )
+        {
+            struct vcpu *v;
+
+            for_each_vcpu ( d, v )
+            {
+                printk(XENLOG_G_INFO "HVM %pv save: %s\n",
+                       v, hvm_sr_handlers[i].name);
+                if ( save_one_handler(v, h) != 0 )
+                {
+                    printk(XENLOG_G_ERR
+                           "HVM %pv save: failed to save type %"PRIu16"\n",
+                           v, i);
+                    return -ENODATA;
+                }
+            }
+        }
+        else if ( handler )
         {
             printk(XENLOG_G_INFO "HVM%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
@@ -233,7 +251,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                 printk(XENLOG_G_ERR
                        "HVM%d save: failed to save type %"PRIu16"\n",
                        d->domain_id, i);
-                return -EFAULT;
+                return -ENODATA;
             }
         }
     }
-- 
2.7.4



* [PATCH v18 12/13] x86/hvm: Remove redundant save functions
From: Alexandru Isaila @ 2018-09-03 13:14 UTC
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch removes the redundant per-domain save functions and renames the
save_one* functions to their save* counterparts. It then changes the domain
parameter to a vcpu in the save funcs and adapts the print messages to match
the format of the other save-related messages.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V17:
	- Refit HVM_REGISTER_SAVE_RESTORE(CPU)
	- Add const to the added struct domain *d
	- Changed the instance bound check from hvm_save_one()
  	- Update ctxt.size for save_one instance.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 18 +---------
 xen/arch/x86/hvm/hpet.c        |  7 ++--
 xen/arch/x86/hvm/hvm.c         | 76 ++++--------------------------------------
 xen/arch/x86/hvm/i8254.c       |  5 +--
 xen/arch/x86/hvm/irq.c         | 15 +++++----
 xen/arch/x86/hvm/mtrr.c        | 20 ++---------
 xen/arch/x86/hvm/pmtimer.c     |  5 +--
 xen/arch/x86/hvm/rtc.c         |  5 +--
 xen/arch/x86/hvm/save.c        | 25 +++++++-------
 xen/arch/x86/hvm/vioapic.c     |  5 +--
 xen/arch/x86/hvm/viridian.c    | 23 +++----------
 xen/arch/x86/hvm/vlapic.c      | 38 +++------------------
 xen/arch/x86/hvm/vpic.c        |  5 +--
 xen/include/asm-x86/hvm/save.h |  8 ++---
 14 files changed, 59 insertions(+), 196 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 35044d7..763d56b 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,7 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_vmce_vcpu ctxt = {
         .caps = v->arch.vmce.mcg_cap,
@@ -361,21 +361,6 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = vmce_save_vcpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +381,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-                          vmce_save_vcpu_ctxt_one,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..4afa2ab 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     HPETState *hp = domain_vhpet(d);
-    struct vcpu *v = pt_global_vcpu_target(d);
     int rc;
     uint64_t guest_time;
 
     if ( !has_vhpet(d) )
         return 0;
 
+    v = pt_global_vcpu_target(d);
     write_lock(&hp->lock);
     guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
                  STIME_PER_HPET_TICK;
@@ -640,7 +641,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a70251..35192ee 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,7 +740,7 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_tsc_adjust ctxt = {
         .tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust,
@@ -749,21 +749,6 @@ static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_tsc_adjust_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -785,10 +770,9 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-                          hvm_save_tsc_adjust_one,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct segment_register seg;
     struct hvm_hw_cpu ctxt = {
@@ -895,21 +879,6 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
 }
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /* Return a string indicating the error, or NULL for valid. */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
                            signed int cr0_pg)
@@ -1181,15 +1150,14 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
-                          hvm_load_cpu_ctxt,
-                          1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt, 1,
+                          HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_hw_cpu_xsave *ctxt;
     unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
@@ -1214,21 +1182,6 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
     return 0;
 }
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_xsave_states_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
@@ -1366,7 +1319,7 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
     struct hvm_msr *ctxt;
@@ -1428,21 +1381,6 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
     return 0;
 }
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_cpu_msrs_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int i, vcpuid = hvm_load_instance(h);
@@ -1535,7 +1473,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
-                        hvm_save_cpu_xsave_states_one,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1548,7 +1485,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
-                            hvm_save_cpu_msrs_one,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
     spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     PITState *pit = domain_vpit(d);
     int rc;
 
@@ -437,7 +438,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..36498d8 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
 }
 __initcall(dump_irq_info_key_init);
 
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
     unsigned int asserted, pdev, pintx;
     int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
     return rc;
 }
 
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
 {
+    const struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save ISA IRQ lines */
     return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
 }
 
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
 {
+    const struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save PCI-ISA link state */
@@ -764,9 +767,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 1c4e731..de1bb80 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
 {
     const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     struct hvm_hw_mtrr hw_mtrr = {
@@ -756,22 +756,6 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
-    {
-       err = hvm_save_mtrr_msr_one(v, h);
-       if ( err )
-           break;
-    }
-
-    return err;
-}
-
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
     int vcpuid, i;
@@ -822,7 +806,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
                           hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
     return X86EMUL_OKAY;
 }
 
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
     uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
@@ -309,7 +310,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
 }
 
 /* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     RTCState *s = domain_vrtc(d);
     int rc;
 
@@ -783,7 +784,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 3d08aab..d66eb62 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,7 +85,6 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
-    hvm_save_vcpu_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
@@ -96,7 +95,6 @@ static struct {
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
-                                hvm_save_vcpu_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
-    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
@@ -148,14 +145,14 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
          !hvm_sr_handlers[typecode].save )
         return -EINVAL;
 
+    if ( instance >= d->max_vcpus )
+        return -ENOENT;
     ctxt.size = hvm_sr_handlers[typecode].size;
-    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
-        ctxt.size *= d->max_vcpus;
     ctxt.data = xmalloc_bytes(ctxt.size);
     if ( !ctxt.data )
         return -ENOMEM;
 
-    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
         printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                d->domain_id, typecode, rv);
     else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
@@ -222,10 +219,12 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     /* Save all available kinds of state */
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
-        hvm_save_vcpu_handler save_one_handler = hvm_sr_handlers[i].save_one;
         hvm_save_handler handler = hvm_sr_handlers[i].save;
 
-        if ( save_one_handler )
+        if ( !handler )
+            continue;
+
+        if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
         {
             struct vcpu *v;
 
@@ -233,7 +232,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
             {
                 printk(XENLOG_G_INFO "HVM %pv save: %s\n",
                        v, hvm_sr_handlers[i].name);
-                if ( save_one_handler(v, h) != 0 )
+                if ( handler(v, h) != 0 )
                 {
                     printk(XENLOG_G_ERR
                            "HVM %pv save: failed to save type %"PRIu16"\n",
@@ -242,14 +241,14 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                 }
             }
         }
-        else if ( handler )
+        else
         {
-            printk(XENLOG_G_INFO "HVM%d save: %s\n",
+            printk(XENLOG_G_INFO "HVM d%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
-            if ( handler(d, h) != 0 )
+            if ( handler(d->vcpu[0], h) != 0 )
             {
                 printk(XENLOG_G_ERR
-                       "HVM%d save: failed to save type %"PRIu16"\n",
+                       "HVM d%d save: failed to save type %"PRIu16"\n",
                        d->domain_id, i);
                 return -ENODATA;
             }
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..426b181 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
     return vioapic->redirtbl[pin].fields.trig_mode;
 }
 
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    const struct domain *d = v->domain;
     struct hvm_vioapic *s;
 
     if ( !has_vioapic(d) )
@@ -601,7 +602,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 268ccce..17cba46 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
     return HVM_HCALL_completed;
 }
 
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
+    const struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
         .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
         .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1023,10 +1024,10 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt = {
         .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
@@ -1039,21 +1040,6 @@ static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
 }
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = viridian_save_vcpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     int vcpuid;
@@ -1085,7 +1071,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
-                          viridian_save_vcpu_ctxt_one,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 7b6f400..a20bde8 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,7 +1435,7 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
-static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
 {
     if ( !has_vlapic(v->domain) )
         return 0;
@@ -1443,22 +1443,7 @@ static int lapic_save_hidden_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(LAPIC, v->vcpu_id, h, &vcpu_vlapic(v)->hw);
 }
 
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = lapic_save_hidden_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
-static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
 {
     if ( !has_vlapic(v->domain) )
         return 0;
@@ -1469,21 +1454,6 @@ static int lapic_save_regs_one(struct vcpu *v, hvm_domain_context_t *h)
     return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, vcpu_vlapic(v)->regs);
 }
 
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = lapic_save_regs_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 /*
  * Following lapic_load_hidden()/lapic_load_regs() we may need to
  * correct ID and LDR when they come from an old, broken hypervisor.
@@ -1584,9 +1554,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_save_hidden_one, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_save_regs_one, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
     return X86EMUL_OKAY;
 }
 
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_vpic *s;
     int i;
 
@@ -411,7 +412,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f2283fc..b867027 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,10 +95,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * The save handler may save multiple instances of a type into the buffer;
  * the load handler will be called once for each instance found when
  * restoring.  Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d, 
+typedef int (*hvm_save_handler) (struct vcpu *v,
                                  hvm_domain_context_t *h);
-typedef int (*hvm_save_vcpu_handler)(struct  vcpu *v,
-                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -107,7 +105,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
-                         hvm_save_vcpu_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -117,13 +114,12 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)  \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
-                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



* [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-09-03 13:14 [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (11 preceding siblings ...)
  2018-09-03 13:14 ` [PATCH v18 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-09-03 13:14 ` Alexandru Isaila
  2018-09-07  9:47   ` Jan Beulich
  2018-09-07  9:48 ` [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Jan Beulich
  13 siblings, 1 reply; 22+ messages in thread
From: Alexandru Isaila @ 2018-09-03 13:14 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch changes hvm_save_one() to save one typecode from one vcpu;
now that the save functions get data from a single vcpu, we can pause
the specific vcpu instead of the whole domain.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V15:
	- Move pause/unpause calls into hvm_save_one()
	- Re-add the loop in hvm_save_one().
---
 xen/arch/x86/domctl.c   |  2 --
 xen/arch/x86/hvm/save.c | 10 ++++++++++
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 8fbbf3a..cb53980 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -591,12 +591,10 @@ long arch_do_domctl(
              !is_hvm_domain(d) )
             break;
 
-        domain_pause(d);
         ret = hvm_save_one(d, domctl->u.hvmcontext_partial.type,
                            domctl->u.hvmcontext_partial.instance,
                            domctl->u.hvmcontext_partial.buffer,
                            &domctl->u.hvmcontext_partial.bufsz);
-        domain_unpause(d);
 
         if ( !ret )
             copyback = true;
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d66eb62..fcda226 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -152,6 +152,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
     if ( !ctxt.data )
         return -ENOMEM;
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+        vcpu_pause(d->vcpu[instance]);
+    else
+        domain_pause(d);
+
     if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )
         printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                d->domain_id, typecode, rv);
@@ -183,6 +188,11 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
         }
     }
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+        vcpu_unpause(d->vcpu[instance]);
+    else
+        domain_unpause(d);
+
     xfree(ctxt.data);
     return rv;
 }
-- 
2.7.4



* Re: [PATCH v18 11/13] x86/domctl: Use hvm_save_vcpu_handler
  2018-09-03 13:14 ` [PATCH v18 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
@ 2018-09-07  9:20   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2018-09-07  9:20 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> This patch is aimed at using the new save_one functions in hvm_save()
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v18 12/13] x86/hvm: Remove redundant save functions
  2018-09-03 13:14 ` [PATCH v18 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-09-07  9:43   ` Jan Beulich
  2018-09-07 12:02     ` Isaila Alexandru
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2018-09-07  9:43 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> This patch removes the redundant save functions and renames the
> save_one* functions to save. It then changes the domain parameter to
> vcpu in the save functions and adapts the printk messages to match the
> format of the other save-related messages.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V17:
> 	- Refit HVM_REGISTER_SAVE_RESTORE(CPU)
> 	- Add const to the added struct domain *d

How does this match up with ...

> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -516,16 +516,17 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
>  };
>  
>  
> -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
> +static int hpet_save(struct vcpu *v, hvm_domain_context_t *h)
>  {
> +    struct domain *d = v->domain;

... this and at least the vPIT and vRTC cases as well? It's not clear to me
why const couldn't be added here; it pretty clearly can't be added in the
vPIT case, but then in the vRTC it looks to be possible again. Especially if
you give a broad/unspecific statement as above, please make sure you've
also done the conversion uniformly.

> @@ -148,14 +145,14 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>           !hvm_sr_handlers[typecode].save )
>          return -EINVAL;
>  
> +    if ( instance >= d->max_vcpus )
> +        return -ENOENT;

Excuse me, but how many more times do I need to point out that this
is wrong? Once again for a single-vCPU guest saving the second PIC
instance will become impossible with this. Just to re-iterate: The check
above has to be limited to just HVMSR_PER_VCPU kind records.
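I.e. something along these lines (an illustrative sketch only; it is
essentially what v16 already had):

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance >= d->max_vcpus )
        return -ENOENT;

That way PER_DOM records, like the second PIC instance, remain
saveable regardless of the vCPU count.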

>      ctxt.size = hvm_sr_handlers[typecode].size;
> -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> -        ctxt.size *= d->max_vcpus;
>      ctxt.data = xmalloc_bytes(ctxt.size);
>      if ( !ctxt.data )
>          return -ENOMEM;
>  
> -    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> +    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt)) != 0 )

And you need to use d->vcpu[0] for all others here.

Jan




* Re: [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-09-03 13:14 ` [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-09-07  9:47   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2018-09-07  9:47 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> This patch changes hvm_save_one() to save one typecode from one vcpu;
> now that the save functions get data from a single vcpu, we can pause
> the specific vcpu instead of the whole domain.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance
  2018-09-03 13:14 [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (12 preceding siblings ...)
  2018-09-03 13:14 ` [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-09-07  9:48 ` Jan Beulich
  2018-09-07 11:15   ` Isaila Alexandru
  13 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2018-09-07  9:48 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> This patch series addresses the idea of saving data from a single vcpu instance.
> It starts by adding *save_one functions, then it introduces a handler for the
> new save_one* funcs and makes use of it in the hvm_save and hvm_save_one funcs.
> The final patches are used for cleanup and change the hvm_save_one() func while
> changing domain_pause to vcpu_pause.

Considering the ongoing problems with the series, please going
forward indicate here what exact testing you've done to make
sure you introduce no regressions.

Jan




* Re: [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance
  2018-09-07  9:48 ` [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Jan Beulich
@ 2018-09-07 11:15   ` Isaila Alexandru
  2018-09-07 11:48     ` Jan Beulich
  0 siblings, 1 reply; 22+ messages in thread
From: Isaila Alexandru @ 2018-09-07 11:15 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

On Fri, 2018-09-07 at 03:48 -0600, Jan Beulich wrote:
> > > > On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> > 
> > This patch series addresses the idea of saving data from a single
> > vcpu instance.
> > It starts by adding *save_one functions, then it introduces a
> > handler for the
> > new save_one* funcs and makes use of it in the hvm_save and
> > hvm_save_one funcs.
> > The final patches are used for cleanup and change the
> > hvm_save_one() func while
> > changing domain_pause to vcpu_pause.
> 
> Considering the ongoing problems with the series, please going
> forward indicate here what exact testing you've done to make
> sure you introduce no regressions.
> 

I've tested with tools/misc/xen-hvmctx for xc_domain_hvm_getcontext
and with tools/xentrace/xenctx for xc_domain_hvm_getcontext_partial.

Before that, I looked at the part of guest startup where it calls
xc_domain_hvm_getcontext to save all the info.

If there are more tests that I could run, please let me know.

Thanks, 
Alex


* Re: [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance
  2018-09-07 11:15   ` Isaila Alexandru
@ 2018-09-07 11:48     ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2018-09-07 11:48 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.09.18 at 13:15, <aisaila@bitdefender.com> wrote:
> On Fri, 2018-09-07 at 03:48 -0600, Jan Beulich wrote:
>> > > > On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
>> > 
>> > This patch series addresses the idea of saving data from a single
>> > vcpu instance.
>> > It starts by adding *save_one functions, then it introduces a
>> > handler for the
>> > new save_one* funcs and makes use of it in the hvm_save and
>> > hvm_save_one funcs.
>> > The final patches are used for cleanup and change the
>> > hvm_save_one() func while
>> > changing domain_pause to vcpu_pause.
>> 
>> Considering the ongoing problems with the series, please going
>> forward indicate here what exact testing you've done to make
>> sure you introduce no regressions.
>> 
> 
> I've tested with tools/misc/xen-hvmctx for xc_domain_hvm_getcontext
> and with tools/xentrace/xenctx for xc_domain_hvm_getcontext_partial.
> 
> Before that, I looked at the part of guest startup where it calls
> xc_domain_hvm_getcontext to save all the info.
> 
> If there are more tests that I could run, please let me know.

Migration / save / restore, as pointed out a number of times
before. Plus you of course realize you're free to put together
extra testing code, just to make sure you've left no corner cases
untested.

Jan




* Re: [PATCH v18 12/13] x86/hvm: Remove redundant save functions
  2018-09-07  9:43   ` Jan Beulich
@ 2018-09-07 12:02     ` Isaila Alexandru
  2018-09-07 14:52       ` Jan Beulich
  0 siblings, 1 reply; 22+ messages in thread
From: Isaila Alexandru @ 2018-09-07 12:02 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

On Fri, 2018-09-07 at 03:43 -0600, Jan Beulich wrote:
> > > > On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
> > 
> > This patch removes the redundant save functions and renames the
> > save_one* functions to save. It then changes the domain parameter to
> > vcpu in the save functions and adapts the printk messages to match
> > the format of the other save-related messages.
> > 
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > 
> > ---
> > Changes since V17:
> > 	- Refit HVM_REGISTER_SAVE_RESTORE(CPU)
> > 	- Add const to the added struct domain *d
> 
> Excuse me, but how many more times do I need to point out that this
> is wrong? Once again for a single-vCPU guest saving the second PIC
> instance will become impossible with this. Just to re-iterate: The
> check
> above has to be limited to just HVMSR_PER_VCPU kind records.

Sorry for the misunderstanding of this matter, but in v16 I had:

 +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
 +        instance >= d->max_vcpus )
 +        return -ENOENT;
 +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_DOM )
 +        instance = 0;

and this made the save call for PER_DOM records use instance = 0.
Later, in v17 I dropped the PER_DOM if, and in v18 I dropped the
HVMSR_PER_VCPU part of the && check.

Now the question is how exactly should I go with this?
> 
> >      ctxt.size = hvm_sr_handlers[typecode].size;
> > -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> > -        ctxt.size *= d->max_vcpus;
> >      ctxt.data = xmalloc_bytes(ctxt.size);
> >      if ( !ctxt.data )
> >          return -ENOMEM;
> >  
> > -    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> > +    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
> > &ctxt)) != 0 )
> 
> And you need to use d->vcpu[0] for all others here.
> 
I can add a switch ( kind ) here: for the PER_VCPU case call
save(d->vcpu[instance]), and for the PER_DOM case call save(d->vcpu[0]).
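A rough sketch of what I mean (reusing the locals already in
hvm_save_one(), nothing final):

    switch ( hvm_sr_handlers[typecode].kind )
    {
    case HVMSR_PER_VCPU:
        rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
        break;

    case HVMSR_PER_DOM:
        rv = hvm_sr_handlers[typecode].save(d->vcpu[0], &ctxt);
        break;
    }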

Is this a good way to go?

Thanks, 
Alex


* Re: [PATCH v18 12/13] x86/hvm: Remove redundant save functions
  2018-09-07 12:02     ` Isaila Alexandru
@ 2018-09-07 14:52       ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2018-09-07 14:52 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.09.18 at 14:02, <aisaila@bitdefender.com> wrote:
> On Fri, 2018-09-07 at 03:43 -0600, Jan Beulich wrote:
>> > > > On 03.09.18 at 15:14, <aisaila@bitdefender.com> wrote:
>> > 
>> > This patch removes the redundant save functions and renames the
>> > save_one* functions to save. It then changes the domain parameter to
>> > vcpu in the save functions and adapts the printk messages to match
>> > the format of the other save-related messages.
>> > 
>> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>> > 
>> > ---
>> > Changes since V17:
>> > 	- Refit HVM_REGISTER_SAVE_RESTORE(CPU)
>> > 	- Add const to the added struct domain *d
>> 
>> Excuse me, but how many more times do I need to point out that this
>> is wrong? Once again for a single-vCPU guest saving the second PIC
>> instance will become impossible with this. Just to re-iterate: The
>> check
>> above has to be limited to just HVMSR_PER_VCPU kind records.
> 
> Sorry for the misunderstanding of this matter, but in v16 I had:
> 
>  +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
>  +        instance >= d->max_vcpus )
>  +        return -ENOENT;
>  +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_DOM )
>  +        instance = 0;
> 
> and this made the save call for PER_DOM records use instance = 0.
> Later, in v17 I dropped the PER_DOM if, and in v18 I dropped the
> HVMSR_PER_VCPU part of the && check.
> 
> Now the question is how exactly should I go with this?

PER_VCPU instances need to be bounded. PER_DOM ones need
to be left alone (and vCPU 0 used for access).

>> >      ctxt.size = hvm_sr_handlers[typecode].size;
>> > -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
>> > -        ctxt.size *= d->max_vcpus;
>> >      ctxt.data = xmalloc_bytes(ctxt.size);
>> >      if ( !ctxt.data )
>> >          return -ENOMEM;
>> >  
>> > -    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
>> > +    if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
>> > &ctxt)) != 0 )
>> 
>> And you need to use d->vcpu[0] for all others here.
>> 
> I can add a switch ( kind ) here: for the PER_VCPU case call
> save(d->vcpu[instance]), and for the PER_DOM case call save(d->vcpu[0]).
> 
> Is this a good way to go?

Perhaps, but I think you'd be better off latching the subject vCPU
into a local variable.
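E.g. (again only a sketch, using the names already present in
hvm_save_one()):

    struct vcpu *v = hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU
                     ? d->vcpu[instance] : d->vcpu[0];

    if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
        /* ... error handling as before ... */;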

Jan





Thread overview: 22+ messages
2018-09-03 13:14 [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 08/13] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 09/13] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 10/13] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-09-03 13:14 ` [PATCH v18 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
2018-09-07  9:20   ` Jan Beulich
2018-09-03 13:14 ` [PATCH v18 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-09-07  9:43   ` Jan Beulich
2018-09-07 12:02     ` Isaila Alexandru
2018-09-07 14:52       ` Jan Beulich
2018-09-03 13:14 ` [PATCH v18 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-09-07  9:47   ` Jan Beulich
2018-09-07  9:48 ` [PATCH v18 00/13] x86/domctl: Save info for one vcpu instance Jan Beulich
2018-09-07 11:15   ` Isaila Alexandru
2018-09-07 11:48     ` Jan Beulich
