* [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance
@ 2018-07-04 13:32 Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (10 more replies)
  0 siblings, 11 replies; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3

Hi all,

This patch series addresses the idea of saving data from a single vcpu instance.
It starts by adding *_save_one() functions, then introduces a handler for the
new save_one funcs and makes use of it in the hvm_save and hvm_save_one funcs.
The final two patches are clean-up: the first removes the old save funcs and
renames the save_one funcs to save, and the last removes the save_one handler,
which is no longer used.
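
As a rough sketch of the intermediate result (foo_save/foo_save_one are
placeholder names, not the exact code from the patches), each per-domain save
function is reduced to a loop over the new per-vcpu helper:

    static int foo_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
            if ( foo_save_one(v, h) != 0 )
                return 1;

        return 0;
    }

The save_one helpers are also registered through
HVM_REGISTER_SAVE_RESTORE()/hvm_register_savevm(), which is what lets
hvm_save_one() pause and save a single vcpu instead of pausing the whole
domain.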

Cheers,

Alexandru Isaila (11):

x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
x86/hvm: Introduce hvm_save_tsc_adjust_one() func
x86/hvm: Introduce hvm_save_cpu_ctxt_one func
x86/hvm: Introduce hvm_save_cpu_xsave_states_one
x86/hvm: Introduce hvm_save_cpu_msrs_one func
x86/hvm: Introduce hvm_save_mtrr_msr_one func
x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
x86/hvm: Add handler for save_one funcs
x86/domctl: Don't pause the whole domain if only
x86/hvm: Remove redundant save functions
x86/hvm: Remove save_one handler


* [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:41   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V9:
	- Change return of the save_one func to return hvm_save_entry.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..8a1fbfc 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
+static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_vmce_vcpu ctxt;
+
+    ctxt.caps = v->arch.vmce.mcg_cap;
+    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
+    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
+    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
+
+    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        struct hvm_vmce_vcpu ctxt = {
-            .caps = v->arch.vmce.mcg_cap,
-            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-        };
-
-        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+        err = vmce_save_vcpu_ctxt_one(v, h);
         if ( err )
             break;
     }
-- 
2.7.4



* [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:43   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V9:
     - Change return of the save_one func to return hvm_save_entry.
---
 xen/arch/x86/hvm/hvm.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 93092d2..c47d162 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,20 +740,26 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
+static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct hvm_tsc_adjust ctxt;
+
+    ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+
+    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct hvm_tsc_adjust ctxt;
     int err = 0;
 
     for_each_vcpu ( d, v )
     {
-        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-        err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+        err = hvm_save_tsc_adjust_one(v, h);
         if ( err )
             break;
     }
-
     return err;
 }
 
-- 
2.7.4



* [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:44   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V8:
	- Change return of the save_one func to return hvm_save_entry
	- Move continue out of the save_one func
	- Remove #define CONTINUE.
---
 xen/arch/x86/hvm/hvm.c | 211 ++++++++++++++++++++++++++-----------------------
 1 file changed, 110 insertions(+), 101 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c47d162..c343ba8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -786,117 +786,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    struct segment_register seg;
+    struct hvm_hw_cpu ctxt;
+
+    memset(&ctxt, 0, sizeof(ctxt));
+
+    /* Architecture-specific vmcs/vmcb bits */
+    hvm_funcs.save_cpu_ctxt(v, &ctxt);
+
+    ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+
+    ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
+
+    hvm_get_segment_register(v, x86_seg_idtr, &seg);
+    ctxt.idtr_limit = seg.limit;
+    ctxt.idtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+    ctxt.gdtr_limit = seg.limit;
+    ctxt.gdtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_cs, &seg);
+    ctxt.cs_sel = seg.sel;
+    ctxt.cs_limit = seg.limit;
+    ctxt.cs_base = seg.base;
+    ctxt.cs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ds, &seg);
+    ctxt.ds_sel = seg.sel;
+    ctxt.ds_limit = seg.limit;
+    ctxt.ds_base = seg.base;
+    ctxt.ds_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_es, &seg);
+    ctxt.es_sel = seg.sel;
+    ctxt.es_limit = seg.limit;
+    ctxt.es_base = seg.base;
+    ctxt.es_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ss, &seg);
+    ctxt.ss_sel = seg.sel;
+    ctxt.ss_limit = seg.limit;
+    ctxt.ss_base = seg.base;
+    ctxt.ss_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_fs, &seg);
+    ctxt.fs_sel = seg.sel;
+    ctxt.fs_limit = seg.limit;
+    ctxt.fs_base = seg.base;
+    ctxt.fs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_gs, &seg);
+    ctxt.gs_sel = seg.sel;
+    ctxt.gs_limit = seg.limit;
+    ctxt.gs_base = seg.base;
+    ctxt.gs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_tr, &seg);
+    ctxt.tr_sel = seg.sel;
+    ctxt.tr_limit = seg.limit;
+    ctxt.tr_base = seg.base;
+    ctxt.tr_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+    ctxt.ldtr_sel = seg.sel;
+    ctxt.ldtr_limit = seg.limit;
+    ctxt.ldtr_base = seg.base;
+    ctxt.ldtr_arbytes = seg.attr;
+
+    if ( v->fpu_initialised )
+    {
+        memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+        ctxt.flags = XEN_X86_FPU_INITIALISED;
+    }
+
+    ctxt.rax = v->arch.user_regs.rax;
+    ctxt.rbx = v->arch.user_regs.rbx;
+    ctxt.rcx = v->arch.user_regs.rcx;
+    ctxt.rdx = v->arch.user_regs.rdx;
+    ctxt.rbp = v->arch.user_regs.rbp;
+    ctxt.rsi = v->arch.user_regs.rsi;
+    ctxt.rdi = v->arch.user_regs.rdi;
+    ctxt.rsp = v->arch.user_regs.rsp;
+    ctxt.rip = v->arch.user_regs.rip;
+    ctxt.rflags = v->arch.user_regs.rflags;
+    ctxt.r8  = v->arch.user_regs.r8;
+    ctxt.r9  = v->arch.user_regs.r9;
+    ctxt.r10 = v->arch.user_regs.r10;
+    ctxt.r11 = v->arch.user_regs.r11;
+    ctxt.r12 = v->arch.user_regs.r12;
+    ctxt.r13 = v->arch.user_regs.r13;
+    ctxt.r14 = v->arch.user_regs.r14;
+    ctxt.r15 = v->arch.user_regs.r15;
+    ctxt.dr0 = v->arch.debugreg[0];
+    ctxt.dr1 = v->arch.debugreg[1];
+    ctxt.dr2 = v->arch.debugreg[2];
+    ctxt.dr3 = v->arch.debugreg[3];
+    ctxt.dr6 = v->arch.debugreg[6];
+    ctxt.dr7 = v->arch.debugreg[7];
+
+    return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
-    struct hvm_hw_cpu ctxt;
-    struct segment_register seg;
 
     for_each_vcpu ( d, v )
     {
-        /* We don't need to save state for a vcpu that is down; the restore 
-         * code will leave it down if there is nothing saved. */
+        /*
+         * We don't need to save state for a vcpu that is down; the restore
+         * code will leave it down if there is nothing saved.
+         */
         if ( v->pause_flags & VPF_down )
             continue;
 
-        memset(&ctxt, 0, sizeof(ctxt));
-
-        /* Architecture-specific vmcs/vmcb bits */
-        hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-        ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-        hvm_get_segment_register(v, x86_seg_idtr, &seg);
-        ctxt.idtr_limit = seg.limit;
-        ctxt.idtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-        ctxt.gdtr_limit = seg.limit;
-        ctxt.gdtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_cs, &seg);
-        ctxt.cs_sel = seg.sel;
-        ctxt.cs_limit = seg.limit;
-        ctxt.cs_base = seg.base;
-        ctxt.cs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ds, &seg);
-        ctxt.ds_sel = seg.sel;
-        ctxt.ds_limit = seg.limit;
-        ctxt.ds_base = seg.base;
-        ctxt.ds_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_es, &seg);
-        ctxt.es_sel = seg.sel;
-        ctxt.es_limit = seg.limit;
-        ctxt.es_base = seg.base;
-        ctxt.es_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ss, &seg);
-        ctxt.ss_sel = seg.sel;
-        ctxt.ss_limit = seg.limit;
-        ctxt.ss_base = seg.base;
-        ctxt.ss_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_fs, &seg);
-        ctxt.fs_sel = seg.sel;
-        ctxt.fs_limit = seg.limit;
-        ctxt.fs_base = seg.base;
-        ctxt.fs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_gs, &seg);
-        ctxt.gs_sel = seg.sel;
-        ctxt.gs_limit = seg.limit;
-        ctxt.gs_base = seg.base;
-        ctxt.gs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_tr, &seg);
-        ctxt.tr_sel = seg.sel;
-        ctxt.tr_limit = seg.limit;
-        ctxt.tr_base = seg.base;
-        ctxt.tr_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ldtr, &seg);
-        ctxt.ldtr_sel = seg.sel;
-        ctxt.ldtr_limit = seg.limit;
-        ctxt.ldtr_base = seg.base;
-        ctxt.ldtr_arbytes = seg.attr;
-
-        if ( v->fpu_initialised )
-        {
-            memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
-            ctxt.flags = XEN_X86_FPU_INITIALISED;
-        }
 
-        ctxt.rax = v->arch.user_regs.rax;
-        ctxt.rbx = v->arch.user_regs.rbx;
-        ctxt.rcx = v->arch.user_regs.rcx;
-        ctxt.rdx = v->arch.user_regs.rdx;
-        ctxt.rbp = v->arch.user_regs.rbp;
-        ctxt.rsi = v->arch.user_regs.rsi;
-        ctxt.rdi = v->arch.user_regs.rdi;
-        ctxt.rsp = v->arch.user_regs.rsp;
-        ctxt.rip = v->arch.user_regs.rip;
-        ctxt.rflags = v->arch.user_regs.rflags;
-        ctxt.r8  = v->arch.user_regs.r8;
-        ctxt.r9  = v->arch.user_regs.r9;
-        ctxt.r10 = v->arch.user_regs.r10;
-        ctxt.r11 = v->arch.user_regs.r11;
-        ctxt.r12 = v->arch.user_regs.r12;
-        ctxt.r13 = v->arch.user_regs.r13;
-        ctxt.r14 = v->arch.user_regs.r14;
-        ctxt.r15 = v->arch.user_regs.r15;
-        ctxt.dr0 = v->arch.debugreg[0];
-        ctxt.dr1 = v->arch.debugreg[1];
-        ctxt.dr2 = v->arch.debugreg[2];
-        ctxt.dr3 = v->arch.debugreg[3];
-        ctxt.dr6 = v->arch.debugreg[6];
-        ctxt.dr7 = v->arch.debugreg[7];
-
-        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1; 
+        if ( hvm_save_cpu_ctxt_one(v, h) != 0 )
+            return 1;
     }
     return 0;
 }
-- 
2.7.4



* [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (2 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:47   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.
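
Because the xsave record is variable-sized, the save path cannot use
hvm_save_entry() directly; it reserves the entry with _hvm_init_entry() and
then fills the payload in place. The key part of the pattern, as a sketch of
the hunk below:

    if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
        return 1;                  /* no room left in the context buffer */
    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
    h->cur += size;                /* payload is then written in place */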

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V9:
	- Move continue out of save_one func.
---
 xen/arch/x86/hvm/hvm.c | 34 +++++++++++++++++++++-------------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c343ba8..495abe5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1188,30 +1188,38 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
     struct hvm_hw_cpu_xsave *ctxt;
+    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
 
     if ( !cpu_has_xsave )
         return 0;   /* do nothing */
 
+    if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
+        return 1;
+    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+    h->cur += size;
+    ctxt->xfeature_mask = xfeature_mask;
+    ctxt->xcr0 = v->arch.xcr0;
+    ctxt->xcr0_accum = v->arch.xcr0_accum;
+
+    expand_xsave_states(v, &ctxt->save_area,
+                        size - offsetof(typeof(*ctxt), save_area));
+    return 0;
+}
+
+static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+
     for_each_vcpu ( d, v )
     {
-        unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
-
         if ( !xsave_enabled(v) )
             continue;
-        if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
+
+        if ( hvm_save_cpu_xsave_states_one(v, h) != 0 )
             return 1;
-        ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-        h->cur += size;
-
-        ctxt->xfeature_mask = xfeature_mask;
-        ctxt->xcr0 = v->arch.xcr0;
-        ctxt->xcr0_accum = v->arch.xcr0_accum;
-        expand_xsave_states(v, &ctxt->save_area,
-                            size - offsetof(typeof(*ctxt), save_area));
     }
 
     return 0;
-- 
2.7.4



* [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (3 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:53   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.
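
The MSR record is sized for msr_count_max, but only readable, non-zero MSRs
are stored, so the descriptor length is rewritten afterwards (or the
descriptor is dropped entirely if nothing was saved), as in the hunk below:

    if ( ctxt->count )
    {
        /* Rewrite length to indicate how much space we actually used. */
        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
    }
    else
        /* or rewind and remove the descriptor from the stream. */
        h->cur -= sizeof(struct hvm_save_descriptor);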

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V7:
	- Moved the init of ctxt->count to hvm_save_cpu_msrs_one()
---
 xen/arch/x86/hvm/hvm.c | 101 +++++++++++++++++++++++++++----------------------
 1 file changed, 55 insertions(+), 46 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 495abe5..9ff9954 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1362,66 +1362,75 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    unsigned int i;
+    struct hvm_msr *ctxt;
+    struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
 
-    for_each_vcpu ( d, v )
+    if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                         HVM_CPU_MSR_SIZE(msr_count_max)) )
+        return 1;
+    ctxt = (struct hvm_msr *)&h->data[h->cur];
+
+    ctxt->count = 0;
+    for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
     {
-        struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-        struct hvm_msr *ctxt;
-        unsigned int i;
+        uint64_t val;
+        int rc = guest_rdmsr(v, msrs_to_send[i], &val);
 
-        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
-                             HVM_CPU_MSR_SIZE(msr_count_max)) )
-            return 1;
-        ctxt = (struct hvm_msr *)&h->data[h->cur];
-        ctxt->count = 0;
+        /*
+         * It is the programmers responsibility to ensure that
+         * msrs_to_send[] contain generally-read/write MSRs.
+         * X86EMUL_EXCEPTION here implies a missing feature, and that the
+         * guest doesn't have access to the MSR.
+         */
+        if ( rc == X86EMUL_EXCEPTION )
+            continue;
 
-        for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+        if ( rc != X86EMUL_OKAY )
         {
-            uint64_t val;
-            int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+            ASSERT_UNREACHABLE();
+            return -ENXIO;
+        }
 
-            /*
-             * It is the programmers responsibility to ensure that
-             * msrs_to_send[] contain generally-read/write MSRs.
-             * X86EMUL_EXCEPTION here implies a missing feature, and that the
-             * guest doesn't have access to the MSR.
-             */
-            if ( rc == X86EMUL_EXCEPTION )
-                continue;
+        if ( !val )
+            continue; /* Skip empty MSRs. */
 
-            if ( rc != X86EMUL_OKAY )
-            {
-                ASSERT_UNREACHABLE();
-                return -ENXIO;
-            }
+        ctxt->msr[ctxt->count].index = msrs_to_send[i];
+        ctxt->msr[ctxt->count++].val = val;
+    }
 
-            if ( !val )
-                continue; /* Skip empty MSRs. */
+    if ( hvm_funcs.save_msr )
+        hvm_funcs.save_msr(v, ctxt);
 
-            ctxt->msr[ctxt->count].index = msrs_to_send[i];
-            ctxt->msr[ctxt->count++].val = val;
-        }
+    ASSERT(ctxt->count <= msr_count_max);
 
-        if ( hvm_funcs.save_msr )
-            hvm_funcs.save_msr(v, ctxt);
+    for ( i = 0; i < ctxt->count; ++i )
+        ctxt->msr[i]._rsvd = 0;
 
-        ASSERT(ctxt->count <= msr_count_max);
+    if ( ctxt->count )
+    {
+        /* Rewrite length to indicate how much space we actually used. */
+        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+    }
+    else
+        /* or rewind and remove the descriptor from the stream. */
+        h->cur -= sizeof(struct hvm_save_descriptor);
 
-        for ( i = 0; i < ctxt->count; ++i )
-            ctxt->msr[i]._rsvd = 0;
+    return 0;
+}
 
-        if ( ctxt->count )
-        {
-            /* Rewrite length to indicate how much space we actually used. */
-            desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-        }
-        else
-            /* or rewind and remove the descriptor from the stream. */
-            h->cur -= sizeof(struct hvm_save_descriptor);
+
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+    {
+        if ( hvm_save_cpu_msrs_one(v, h) != 0 )
+            return 1;
     }
 
     return 0;
-- 
2.7.4



* [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (4 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:56   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V9:
	- Change return of the save_one func to return hvm_save_entry.

Note: This patch is based on Roger Pau Monne's series[1]
---
 xen/arch/x86/hvm/mtrr.c | 75 +++++++++++++++++++++++++------------------------
 1 file changed, 39 insertions(+), 36 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 48facbb..9bbff59 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,6 +718,44 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
+static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+{
+    const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+    struct hvm_hw_mtrr hw_mtrr;
+    unsigned int i;
+    memset(&hw_mtrr, 0, sizeof(hw_mtrr));
+    hw_mtrr.msr_mtrr_def_type = mtrr_state->def_type |
+                                MASK_INSR(mtrr_state->fixed_enabled,
+                                          MTRRdefType_FE) |
+                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E);
+    hw_mtrr.msr_mtrr_cap = mtrr_state->mtrr_cap;
+
+    if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
+         (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
+    {
+        dprintk(XENLOG_G_ERR,
+                "HVM save: %pv: too many (%lu) variable range MTRRs\n",
+                v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
+        return -EINVAL;
+    }
+    hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+    for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
+    {
+        /* save physbase */
+        hw_mtrr.msr_mtrr_var[i*2] =
+            ((uint64_t*)mtrr_state->var_ranges)[i*2];
+        /* save physmask */
+        hw_mtrr.msr_mtrr_var[i*2+1] =
+            ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
+    }
+
+    for ( i = 0; i < NUM_FIXED_MSR; i++ )
+        hw_mtrr.msr_mtrr_fixed[i] =
+            ((uint64_t*)mtrr_state->fixed_ranges)[i];
+
+    return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+}
+
 static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -725,42 +763,7 @@ static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     /* save mtrr&pat */
     for_each_vcpu(d, v)
     {
-        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
-        struct hvm_hw_mtrr hw_mtrr = {
-            .msr_mtrr_def_type = mtrr_state->def_type |
-                                 MASK_INSR(mtrr_state->fixed_enabled,
-                                           MTRRdefType_FE) |
-                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
-            .msr_mtrr_cap      = mtrr_state->mtrr_cap,
-        };
-        unsigned int i;
-
-        if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
-             (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
-        {
-            dprintk(XENLOG_G_ERR,
-                    "HVM save: %pv: too many (%lu) variable range MTRRs\n",
-                    v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
-            return -EINVAL;
-        }
-
-        hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
-
-        for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
-        {
-            /* save physbase */
-            hw_mtrr.msr_mtrr_var[i*2] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2];
-            /* save physmask */
-            hw_mtrr.msr_mtrr_var[i*2+1] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-        }
-
-        for ( i = 0; i < NUM_FIXED_MSR; i++ )
-            hw_mtrr.msr_mtrr_fixed[i] =
-                ((uint64_t*)mtrr_state->fixed_ranges)[i];
-
-        if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
+        if ( hvm_save_mtrr_msr_one(v, h) != 0 )
             return 1;
     }
     return 0;
-- 
2.7.4



* [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (5 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:25   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/viridian.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..1e87cd6 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,20 +1026,26 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct hvm_viridian_vcpu_context ctxt;
 
-    if ( !is_viridian_domain(d) )
+    if ( !is_viridian_domain(v->domain) )
         return 0;
 
-    for_each_vcpu( d, v ) {
-        struct hvm_viridian_vcpu_context ctxt = {
-            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-        };
+    memset(&ctxt, 0, sizeof(ctxt));
+    ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+    ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
 
-        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
+    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
+}
+
+static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+
+    for_each_vcpu( d, v ) {
+        if ( viridian_save_vcpu_ctxt_one(v, h) != 0 )
             return 1;
     }
 
-- 
2.7.4



* [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (6 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-05  8:33   ` Paul Durrant
  2018-07-04 13:32 ` [PATCH v10 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V8:
	- Add comment for the handler return values.
---
 xen/arch/x86/cpu/mcheck/vmce.c | 1 +
 xen/arch/x86/hvm/hpet.c        | 2 +-
 xen/arch/x86/hvm/hvm.c         | 6 +++++-
 xen/arch/x86/hvm/i8254.c       | 2 +-
 xen/arch/x86/hvm/irq.c         | 6 +++---
 xen/arch/x86/hvm/mtrr.c        | 4 ++--
 xen/arch/x86/hvm/pmtimer.c     | 2 +-
 xen/arch/x86/hvm/rtc.c         | 2 +-
 xen/arch/x86/hvm/save.c        | 5 ++++-
 xen/arch/x86/hvm/vioapic.c     | 2 +-
 xen/arch/x86/hvm/viridian.c    | 3 ++-
 xen/arch/x86/hvm/vlapic.c      | 4 ++--
 xen/arch/x86/hvm/vpic.c        | 2 +-
 xen/include/asm-x86/hvm/save.h | 6 +++++-
 14 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 8a1fbfc..29898a6 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
+                          vmce_save_vcpu_ctxt_one,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..aff8613 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -640,7 +640,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9ff9954..7d2a12d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -784,6 +784,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
+                          hvm_save_tsc_adjust_one,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1181,7 +1182,8 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+                          hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
@@ -1528,6 +1530,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
+                        hvm_save_cpu_xsave_states_one,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1540,6 +1543,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
+                            hvm_save_cpu_msrs_one,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..ec77b23 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -437,7 +437,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..770eab7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa, 
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 9bbff59..ff9ff69 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -819,8 +819,8 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
-                          1, HVMSR_PER_VCPU);
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+                          hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..0a5e8ce 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -309,7 +309,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..ce7e71b 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -783,7 +783,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 8984a23..b674937 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
+    hvm_save_one_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
     int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
 
 /* Init-time function to add entries to that list */
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
+                                hvm_save_one_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
+    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..66f54e4 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 1e87cd6..466e015 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
 static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
@@ -1083,6 +1083,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
+                          viridian_save_vcpu_ctxt_one,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..eff6070 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..ca9b4cb 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -411,7 +411,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..2538628 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * restoring.  Both return non-zero on error. */
 typedef int (*hvm_save_handler) (struct domain *d, 
                                  hvm_domain_context_t *h);
+typedef int (*hvm_save_one_handler)(struct vcpu *v,
+                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
+                         hvm_save_one_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
+                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



* [PATCH v10 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (7 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
  10 siblings, 0 replies; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch moves the for loop to the caller so that we can now save info
for a single vcpu instance with the save_one handlers.
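
In outline, the single-instance path of hvm_save_one() becomes (simplified
sketch, error handling trimmed):

    if ( is_single_instance )
    {
        vcpu_pause(d->vcpu[instance]);
        if ( hvm_sr_handlers[typecode].save_one != NULL )
            rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance], &ctxt);
        else
            rv = hvm_sr_handlers[typecode].save(d, &ctxt);
        /* ... copy the single saved entry out to the caller's buffer ... */
        vcpu_unpause(d->vcpu[instance]);
    }
    else
    {
        domain_pause(d);
        /* ... whole-domain save and per-instance lookup, as before ... */
        domain_unpause(d);
    }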

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/save.c | 141 +++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 111 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index b674937..1b28e7f 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
 int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
                  XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
 {
-    int rv;
+    int rv = 0;
     hvm_domain_context_t ctxt = { };
     const struct hvm_save_descriptor *desc;
+    bool is_single_instance = false;
+    uint32_t off = 0;
+    struct vcpu *v;
 
     if ( d->is_dying ||
          typecode > HVM_SAVE_CODE_MAX ||
@@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
          !hvm_sr_handlers[typecode].save )
         return -EINVAL;
 
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+        instance < d->max_vcpus )
+        is_single_instance = true;
+
     ctxt.size = hvm_sr_handlers[typecode].size;
-    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+        instance == d->max_vcpus )
         ctxt.size *= d->max_vcpus;
     ctxt.data = xmalloc_bytes(ctxt.size);
     if ( !ctxt.data )
         return -ENOMEM;
 
-    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
-        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
-               d->domain_id, typecode, rv);
-    else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+    if ( is_single_instance )
+        vcpu_pause(d->vcpu[instance]);
+    else
+        domain_pause(d);
+
+    if ( is_single_instance )
     {
-        uint32_t off;
+        if ( hvm_sr_handlers[typecode].save_one != NULL )
+            rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
+                                                    &ctxt);
+        else
+            rv = hvm_sr_handlers[typecode].save(d, &ctxt);
 
-        for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+        if ( rv != 0 )
         {
-            desc = (void *)(ctxt.data + off);
-            /* Move past header */
-            off += sizeof(*desc);
-            if ( ctxt.cur < desc->length ||
-                 off > ctxt.cur - desc->length )
-                break;
-            if ( instance == desc->instance )
-            {
-                rv = 0;
-                if ( guest_handle_is_null(handle) )
-                    *bufsz = desc->length;
-                else if ( *bufsz < desc->length )
-                    rv = -ENOBUFS;
-                else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
-                    rv = -EFAULT;
-                else
-                    *bufsz = desc->length;
-                break;
-            }
+            printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+                   d->domain_id, typecode, rv);
+            vcpu_unpause(d->vcpu[instance]);
+        }
+        else if ( ctxt.cur >= sizeof(*desc) )
+        {
+            rv = 0;
+            desc = (void *)(ctxt.data);
+            /* Move past header */
+            off = sizeof(*desc);
+
+            if ( ctxt.cur < desc->length ||
+                 off > ctxt.cur - desc->length )
+                rv = -EFAULT;
+            else if ( guest_handle_is_null(handle) )
+                *bufsz = desc->length;
+            else if ( *bufsz < desc->length )
+                rv = -ENOBUFS;
+            else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+                rv = -EFAULT;
+            else
+                *bufsz = desc->length;
+            vcpu_unpause(d->vcpu[instance]);
         }
     }
+    else
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+            {
+                printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+                       d->domain_id, typecode, rv);
+            }
+            else if ( ctxt.cur >= sizeof(*desc) )
+            {
+                rv = -ENOENT;
+                desc = (void *)(ctxt.data + off);
+                /* Move past header */
+                off += sizeof(*desc);
+                if ( ctxt.cur < desc->length ||
+                     off > ctxt.cur - desc->length )
+                    break;
+                if ( instance == desc->instance )
+                {
+                    rv = 0;
+                    if ( guest_handle_is_null(handle) )
+                        *bufsz = desc->length;
+                    else if ( *bufsz < desc->length )
+                        rv = -ENOBUFS;
+                    else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+                        rv = -EFAULT;
+                    else
+                        *bufsz = desc->length;
+                    break;
+                }
+                off += desc->length;
+            }
+        }
+        domain_unpause(d);
+    }
 
     xfree(ctxt.data);
     return rv;
@@ -196,7 +250,9 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     struct hvm_save_header hdr;
     struct hvm_save_end end;
     hvm_save_handler handler;
-    unsigned int i;
+    hvm_save_one_handler save_one_handler;
+    unsigned int i, rc;
+    struct vcpu *v = NULL;
 
     if ( d->is_dying )
         return -EINVAL;
@@ -224,11 +280,36 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
         handler = hvm_sr_handlers[i].save;
-        if ( handler != NULL )
+        save_one_handler = hvm_sr_handlers[i].save_one;
+        if ( save_one_handler != NULL )
         {
             printk(XENLOG_G_INFO "HVM%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
-            if ( handler(d, h) != 0 )
+            for_each_vcpu ( d, v )
+            {
+                rc = save_one_handler(v, h);
+                if ( rc == CONTINUE )
+                    continue;
+
+                if ( rc != 0 )
+                {
+                    printk(XENLOG_G_ERR
+                           "HVM%d save: failed to save type %"PRIu16"\n",
+                           d->domain_id, i);
+                    return -EFAULT;
+                }
+            }
+        }
+        else if ( handler != NULL )
+        {
+            printk(XENLOG_G_INFO "HVM%d save: %s\n",
+                   d->domain_id, hvm_sr_handlers[i].name);
+
+            rc = handler(d, h);
+            if ( rc == CONTINUE )
+                continue;
+
+            if ( rc != 0 )
             {
                 printk(XENLOG_G_ERR
                        "HVM%d save: failed to save type %"PRIu16"\n",
-- 
2.7.4



* [PATCH v10 10/11] x86/hvm: Remove redundant save functions
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (8 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  2018-07-04 13:32 ` [PATCH v10 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
  10 siblings, 0 replies; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch removes the redundant save functions and renames the
save_one* functions to save. It then changes the domain param to vcpu in the
save funcs.
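
After this patch every save handler takes a struct vcpu * and reports one of
three outcomes. Roughly (foo_save/hvm_foo/FOO are placeholders; the
save_return_type_t values are the ones used in the hunks below):

    static enum save_return_type_t foo_save(struct vcpu *v,
                                            hvm_domain_context_t *h)
    {
        struct hvm_foo ctxt;

        if ( v->pause_flags & VPF_down )
            return CONTINUE;       /* nothing to save for this vcpu */

        /* ... fill ctxt from v ... */

        if ( hvm_save_entry(FOO, v->vcpu_id, h, &ctxt) != 0 )
            return ERR;

        return OK;
    }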

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V9:
        - Add enum return type for save funcs
---
 xen/arch/x86/cpu/mcheck/vmce.c |  24 +++------
 xen/arch/x86/hvm/hpet.c        |   8 ++-
 xen/arch/x86/hvm/hvm.c         | 112 ++++++++++++-----------------------------
 xen/arch/x86/hvm/i8254.c       |   9 ++--
 xen/arch/x86/hvm/irq.c         |  24 ++++++---
 xen/arch/x86/hvm/mtrr.c        |  22 +++-----
 xen/arch/x86/hvm/pmtimer.c     |  10 ++--
 xen/arch/x86/hvm/rtc.c         |   9 ++--
 xen/arch/x86/hvm/save.c        |  29 +++--------
 xen/arch/x86/hvm/vioapic.c     |  10 ++--
 xen/arch/x86/hvm/viridian.c    |  36 ++++++-------
 xen/arch/x86/hvm/vlapic.c      |  45 +++++++----------
 xen/arch/x86/hvm/vpic.c        |  12 +++--
 xen/include/asm-x86/hvm/save.h |   8 ++-
 14 files changed, 148 insertions(+), 210 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 29898a6..c054ae9 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,7 +349,8 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
-static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t vmce_save_vcpu_ctxt(struct vcpu *v,
+                                                   hvm_domain_context_t *h)
 {
     struct hvm_vmce_vcpu ctxt;
 
@@ -358,24 +359,11 @@ static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
     ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
 
-    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
+    if ( hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt) != 0 )
+        return ERR;
+    return OK;
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = vmce_save_vcpu_ctxt_one(v, h);
-        if ( err )
-            break;
-    }
-
-    return err;
-}
-
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -396,7 +384,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-                          vmce_save_vcpu_ctxt_one,
+                          NULL,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index aff8613..0eb302f 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,8 +516,10 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t hpet_save(struct vcpu *vcpu,
+                                         hvm_domain_context_t *h)
 {
+    struct domain *d = vcpu->domain;
     HPETState *hp = domain_vhpet(d);
     struct vcpu *v = pt_global_vcpu_target(d);
     int rc;
@@ -575,7 +577,9 @@ static int hpet_save(struct domain *d, hvm_domain_context_t *h)
 
     write_unlock(&hp->lock);
 
-    return rc;
+    if ( rc != 0 )
+        return ERR;
+    return OK;
 }
 
 static int hpet_load(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7d2a12d..cdf91f4 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,29 +740,18 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t hvm_save_tsc_adjust(struct vcpu *v,
+                                                   hvm_domain_context_t *h)
  {
     struct hvm_tsc_adjust ctxt;
 
     ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
 
-    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
+    if ( hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt) != 0 )
+        return ERR;
+    return OK;
  }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        err = hvm_save_tsc_adjust_one(v, h);
-        if ( err )
-            break;
-    }
-    return err;
-}
-
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
@@ -784,14 +773,22 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-                          hvm_save_tsc_adjust_one,
+                          NULL,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t hvm_save_cpu_ctxt(struct vcpu *v,
+                                                 hvm_domain_context_t *h)
 {
     struct segment_register seg;
     struct hvm_hw_cpu ctxt;
 
+    /*
+     * We don't need to save state for a vcpu that is down; the restore
+     * code will leave it down if there is nothing saved.
+     */
+    if ( v->pause_flags & VPF_down )
+        return CONTINUE;
+
     memset(&ctxt, 0, sizeof(ctxt));
 
     /* Architecture-specific vmcs/vmcb bits */
@@ -888,27 +885,9 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
     ctxt.dr6 = v->arch.debugreg[6];
     ctxt.dr7 = v->arch.debugreg[7];
 
-    return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
-}
-
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-
-    for_each_vcpu ( d, v )
-    {
-        /*
-         * We don't need to save state for a vcpu that is down; the restore
-         * code will leave it down if there is nothing saved.
-         */
-        if ( v->pause_flags & VPF_down )
-            continue;
-
-
-        if ( hvm_save_cpu_ctxt_one(v, h) != 0 )
-            return 1;
-    }
-    return 0;
+    if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
+        return ERR;
+    return OK;
 }
 
 /* Return a string indicating the error, or NULL for valid. */
@@ -1182,7 +1161,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
                           hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
@@ -1190,16 +1169,18 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_save_cpu_ctxt_one,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t hvm_save_cpu_xsave_states(struct vcpu *v,
+                                                         hvm_domain_context_t *h)
 {
     struct hvm_hw_cpu_xsave *ctxt;
     unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
 
     if ( !cpu_has_xsave )
-        return 0;   /* do nothing */
-
+        return OK;   /* do nothing */
+    if ( !xsave_enabled(v) )
+        return CONTINUE;
     if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-        return 1;
+        return ERR;
     ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
     h->cur += size;
     ctxt->xfeature_mask = xfeature_mask;
@@ -1208,25 +1189,9 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h
 
     expand_xsave_states(v, &ctxt->save_area,
                         size - offsetof(typeof(*ctxt), save_area));
-    return 0;
+    return OK;
  }
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-
-    for_each_vcpu ( d, v )
-    {
-        if ( !xsave_enabled(v) )
-            continue;
-
-        if ( hvm_save_cpu_xsave_states_one(v, h) != 0 )
-            return 1;
-    }
-
-    return 0;
-}
-
 /*
  * Structure layout conformity checks, documenting correctness of the cast in
  * the invocation of validate_xstate() below.
@@ -1364,7 +1329,8 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t hvm_save_cpu_msrs(struct vcpu *v,
+                                                 hvm_domain_context_t *h)
 {
     unsigned int i;
     struct hvm_msr *ctxt;
@@ -1372,7 +1338,7 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
 
     if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
                          HVM_CPU_MSR_SIZE(msr_count_max)) )
-        return 1;
+        return ERR;
     ctxt = (struct hvm_msr *)&h->data[h->cur];
 
     ctxt->count = 0;
@@ -1421,21 +1387,7 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t *h)
         /* or rewind and remove the descriptor from the stream. */
         h->cur -= sizeof(struct hvm_save_descriptor);
 
-    return 0;
-}
-
-
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-
-    for_each_vcpu ( d, v )
-    {
-        if ( hvm_save_cpu_msrs_one(v, h) != 0 )
-            return 1;
-    }
-
-    return 0;
+    return OK;
 }
 
 static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
@@ -1530,7 +1482,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
-                        hvm_save_cpu_xsave_states_one,
+                        NULL,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1543,7 +1495,7 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
-                            hvm_save_cpu_msrs_one,
+                            NULL,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index ec77b23..00cabac 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,10 @@ void pit_stop_channel0_irq(PITState *pit)
     spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t pit_save(struct vcpu *v,
+                                        hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     PITState *pit = domain_vpit(d);
     int rc;
 
@@ -403,8 +405,9 @@ static int pit_save(struct domain *d, hvm_domain_context_t *h)
     rc = hvm_save_entry(PIT, 0, h, &pit->hw);
 
     spin_unlock(&pit->lock);
-
-    return rc;
+    if ( rc != 0 )
+        return ERR;
+    return OK;
 }
 
 static int pit_load(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 770eab7..73f48a9 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,10 @@ static int __init dump_irq_info_key_init(void)
 }
 __initcall(dump_irq_info_key_init);
 
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t irq_save_pci(struct vcpu *v,
+                                            hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
     unsigned int asserted, pdev, pintx;
     int rc;
@@ -659,23 +661,33 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
 
     spin_unlock(&d->arch.hvm_domain.irq_lock);
 
-    return rc;
+    if ( rc != 0 )
+        return ERR;
+    return OK;
 }
 
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t irq_save_isa(struct vcpu *v,
+                                            hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save ISA IRQ lines */
-    return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
+    if ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) != 0 )
+        return ERR;
+    return OK;
 }
 
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t irq_save_link(struct vcpu *v,
+                                             hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save PCI-ISA link state */
-    return ( hvm_save_entry(PCI_LINK, 0, h, &hvm_irq->pci_link) );
+    if ( hvm_save_entry(PCI_LINK, 0, h, &hvm_irq->pci_link) != 0 )
+        return ERR;
+    return OK;
 }
 
 static int irq_load_pci(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index ff9ff69..b4e1335 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -718,7 +718,8 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t hvm_save_mtrr_msr(struct vcpu *v,
+                                                 hvm_domain_context_t *h)
 {
     const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     struct hvm_hw_mtrr hw_mtrr;
@@ -753,22 +754,11 @@ static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t *h)
         hw_mtrr.msr_mtrr_fixed[i] =
             ((uint64_t*)mtrr_state->fixed_ranges)[i];
 
-    return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
+    if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
+        return ERR;
+    return OK;
  }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
-    {
-        if ( hvm_save_mtrr_msr_one(v, h) != 0 )
-            return 1;
-    }
-    return 0;
-}
-
 static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
     int vcpuid, i;
@@ -819,7 +809,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_save_mtrr_msr_one,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
                           hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 0a5e8ce..b96ab5b 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,15 +249,17 @@ static int handle_pmt_io(
     return X86EMUL_OKAY;
 }
 
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t acpi_save(struct vcpu *v,
+                                         hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
     uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
     int rc;
 
     if ( !has_vpm(d) )
-        return 0;
+        return OK;
 
     spin_lock(&s->lock);
 
@@ -277,7 +279,9 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
 
     spin_unlock(&s->lock);
 
-    return rc;
+    if ( rc != 0 )
+        return ERR;
+    return OK;
 }
 
 static int acpi_load(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index ce7e71b..e4a9720 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,18 +737,21 @@ void rtc_migrate_timers(struct vcpu *v)
 }
 
 /* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t rtc_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     RTCState *s = domain_vrtc(d);
     int rc;
 
     if ( !has_vrtc(d) )
-        return 0;
+        return OK;
 
     spin_lock(&s->lock);
     rc = hvm_save_entry(RTC, 0, h, &s->hw);
     spin_unlock(&s->lock);
-    return rc;
+    if ( rc != 0 )
+        return ERR;
+    return OK;
 }
 
 /* Reload the hardware state from a saved domain */
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1b28e7f..1230f0e 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -174,7 +174,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
             rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
                                                     &ctxt);
         else
-            rv = hvm_sr_handlers[typecode].save(d, &ctxt);
+            rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
 
         if ( rv != 0 )
         {
@@ -207,7 +207,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
     {
         for_each_vcpu ( d, v )
         {
-            if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
+            if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+                                                      &ctxt)) != 0 )
             {
                 printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                        d->domain_id, typecode, rv);
@@ -250,7 +251,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     struct hvm_save_header hdr;
     struct hvm_save_end end;
     hvm_save_handler handler;
-    hvm_save_one_handler save_one_handler;
     unsigned int i, rc;
     struct vcpu *v = NULL;
 
@@ -280,14 +280,14 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
     {
         handler = hvm_sr_handlers[i].save;
-        save_one_handler = hvm_sr_handlers[i].save_one;
-        if ( save_one_handler != NULL )
+        if ( handler != NULL )
         {
             printk(XENLOG_G_INFO "HVM%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
+
             for_each_vcpu ( d, v )
             {
-                rc = save_one_handler(v, h);
+                rc = handler(v, h);
                 if( rc == CONTINUE )
                     continue;
 
@@ -300,23 +300,6 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                 }
             }
         }
-        else if ( handler != NULL )
-        {
-            printk(XENLOG_G_INFO "HVM%d save: %s\n",
-                   d->domain_id, hvm_sr_handlers[i].name);
-
-            rc = handler(d, h);
-            if( rc == CONTINUE )
-                continue;
-
-            if( rc != 0 )
-            {
-                printk(XENLOG_G_ERR
-                       "HVM%d save: failed to save type %"PRIu16"\n",
-                       d->domain_id, i);
-                return -EFAULT;
-            }
-        }
     }
 
     /* Save an end-of-file marker */
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 66f54e4..1f59d64 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,12 +569,14 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
     return vioapic->redirtbl[pin].fields.trig_mode;
 }
 
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t ioapic_save(struct vcpu *v,
+                                           hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_vioapic *s;
 
     if ( !has_vioapic(d) )
-        return 0;
+        return OK;
 
     s = domain_vioapic(d, 0);
 
@@ -582,7 +584,9 @@ static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
          d->arch.hvm_domain.nr_vioapics != 1 )
         return -EOPNOTSUPP;
 
-    return hvm_save_entry(IOAPIC, 0, h, &s->domU);
+    if ( hvm_save_entry(IOAPIC, 0, h, &s->domU) != 0 )
+        return ERR;
+    return OK;
 }
 
 static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 466e015..ddae657 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,10 @@ out:
     return HVM_HCALL_completed;
 }
 
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t viridian_save_domain_ctxt(struct vcpu *v,
+                                                         hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
         .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
         .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1000,12 +1002,15 @@ static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     };
 
     if ( !is_viridian_domain(d) )
-        return 0;
+        return OK;
 
-    return (hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0);
+    if ( hvm_save_entry(VIRIDIAN_DOMAIN, 0, h, &ctxt) != 0 )
+        return ERR;
+    return OK;
 }
 
-static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_load_domain_ctxt(struct domain *d,
+                                     hvm_domain_context_t *h)
 {
     struct hvm_viridian_domain_context ctxt;
 
@@ -1026,30 +1031,21 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
+static enum save_return_type_t viridian_save_vcpu_ctxt(struct vcpu *v,
+                                                       hvm_domain_context_t *h)
 {
     struct hvm_viridian_vcpu_context ctxt;
 
     if ( !is_viridian_domain(v->domain) )
-        return 0;
+        return OK;
 
     memset(&ctxt, 0, sizeof(ctxt));
     ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
     ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
 
-    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
-}
-
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-
-    for_each_vcpu( d, v ) {
-        if ( viridian_save_vcpu_ctxt_one(v, h) != 0 )
-            return 1;
-    }
-
-    return 0;
+    if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
+        return ERR;
+    return OK;
 }
 
 static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
@@ -1083,7 +1079,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
-                          viridian_save_vcpu_ctxt_one,
+                          NULL,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index eff6070..227bed4 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,45 +1435,36 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t lapic_save_hidden(struct vcpu *v,
+                                                 hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct domain *d = v->domain;
     struct vlapic *s;
-    int rc = 0;
 
     if ( !has_vlapic(d) )
-        return 0;
+        return OK;
 
-    for_each_vcpu ( d, v )
-    {
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
-            break;
-    }
-
-    return rc;
+    s = vcpu_vlapic(v);
+    if ( hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw) != 0 )
+        return ERR;
+    return OK;
 }
 
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t lapic_save_regs(struct vcpu *v,
+                                               hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct domain *d = v->domain;
     struct vlapic *s;
-    int rc = 0;
 
     if ( !has_vlapic(d) )
-        return 0;
-
-    for_each_vcpu ( d, v )
-    {
-        if ( hvm_funcs.sync_pir_to_irr )
-            hvm_funcs.sync_pir_to_irr(v);
-
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
-            break;
-    }
+        return OK;
+    if ( hvm_funcs.sync_pir_to_irr )
+        hvm_funcs.sync_pir_to_irr(v);
 
-    return rc;
+    s = vcpu_vlapic(v);
+    if ( hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs) != 0 )
+        return ERR;
+    return OK;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index ca9b4cb..a0140d8 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,23 +371,25 @@ static int vpic_intercept_elcr_io(
     return X86EMUL_OKAY;
 }
 
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static enum save_return_type_t vpic_save(struct vcpu *v,
+                                         hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_vpic *s;
     int i;
 
     if ( !has_vpic(d) )
-        return 0;
+        return OK;
 
     /* Save the state of both PICs */
     for ( i = 0; i < 2 ; i++ )
     {
         s = &d->arch.hvm_domain.vpic[i];
-        if ( hvm_save_entry(PIC, i, h, s) )
-            return 1;
+        if ( hvm_save_entry(PIC, i, h, s) != 0 )
+            return ERR;
     }
 
-    return 0;
+    return OK;
 }
 
 static int vpic_load(struct domain *d, hvm_domain_context_t *h)
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index 2538628..22e5a92 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -91,11 +91,17 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
     return d->instance;
 }
 
+enum save_return_type_t {
+    OK,
+    ERR,
+    CONTINUE,
+};
+
 /* Handler types for different types of save-file entry. 
  * The save handler may save multiple instances of a type into the buffer;
  * the load handler will be called once for each instance found when
  * restoring.  Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d, 
+typedef enum save_return_type_t (*hvm_save_handler) (struct  vcpu *v,
                                  hvm_domain_context_t *h);
 typedef int (*hvm_save_one_handler)(struct  vcpu *v,
                                     hvm_domain_context_t *h);
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v10 11/11] x86/hvm: Remove save_one handler
  2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (9 preceding siblings ...)
  2018-07-04 13:32 ` [PATCH v10 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
@ 2018-07-04 13:32 ` Alexandru Isaila
  10 siblings, 0 replies; 23+ messages in thread
From: Alexandru Isaila @ 2018-07-04 13:32 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

The save_one handlers are no longer referenced now that the save handlers
themselves operate on a single vcpu, so drop the save_one member of
hvm_sr_handlers, its typedef and the extra registration argument.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c |  1 -
 xen/arch/x86/hvm/hpet.c        |  2 +-
 xen/arch/x86/hvm/hvm.c         |  5 +----
 xen/arch/x86/hvm/i8254.c       |  2 +-
 xen/arch/x86/hvm/irq.c         |  6 +++---
 xen/arch/x86/hvm/mtrr.c        |  2 +-
 xen/arch/x86/hvm/pmtimer.c     |  2 +-
 xen/arch/x86/hvm/rtc.c         |  2 +-
 xen/arch/x86/hvm/save.c        | 14 +++-----------
 xen/arch/x86/hvm/vioapic.c     |  2 +-
 xen/arch/x86/hvm/viridian.c    |  3 +--
 xen/arch/x86/hvm/vlapic.c      |  4 ++--
 xen/arch/x86/hvm/vpic.c        |  2 +-
 xen/include/asm-x86/hvm/save.h |  6 +-----
 14 files changed, 18 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index c054ae9..bb941b0 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -384,7 +384,6 @@ static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
-                          NULL,
                           vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 /*
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 0eb302f..22e087e 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -644,7 +644,7 @@ static int hpet_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1, HVMSR_PER_DOM);
 
 static void hpet_set(HPETState *h)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index cdf91f4..d11fba6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -773,7 +773,6 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
-                          NULL,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
 static enum save_return_type_t hvm_save_cpu_ctxt(struct vcpu *v,
@@ -1161,7 +1160,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
                           hvm_load_cpu_ctxt,
                           1, HVMSR_PER_VCPU);
 
@@ -1482,7 +1481,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
                         hvm_save_cpu_xsave_states,
-                        NULL,
                         hvm_load_cpu_xsave_states,
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
@@ -1495,7 +1493,6 @@ static int __init hvm_register_CPU_save_and_restore(void)
         hvm_register_savevm(CPU_MSR_CODE,
                             "CPU_MSR",
                             hvm_save_cpu_msrs,
-                            NULL,
                             hvm_load_cpu_msrs,
                             HVM_CPU_MSR_SIZE(msr_count_max) +
                                 sizeof(struct hvm_save_descriptor),
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 00cabac..0d6b84a 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -440,7 +440,7 @@ static int pit_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1, HVMSR_PER_DOM);
 
 void pit_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 73f48a9..14f8dfd 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -776,9 +776,9 @@ static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL, irq_load_pci,
+HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
+HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
                           1, HVMSR_PER_DOM);
-HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL, irq_load_link,
+HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
                           1, HVMSR_PER_DOM);
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index b4e1335..2a5fad7 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -809,7 +809,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL,
+HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
                           hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
 
 void memory_type_changed(struct domain *d)
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index b96ab5b..ad16a43 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -313,7 +313,7 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
+HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
                           1, HVMSR_PER_DOM);
 
 int pmtimer_change_ioport(struct domain *d, unsigned int version)
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index e4a9720..664e8cc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -786,7 +786,7 @@ static int rtc_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1, HVMSR_PER_DOM);
 
 void rtc_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 1230f0e..785a7fc 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -85,18 +85,16 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
 /* List of handlers for various HVM save and restore types */
 static struct {
     hvm_save_handler save;
-    hvm_save_one_handler save_one;
     hvm_load_handler load;
     const char *name;
     size_t size;
     int kind;
-} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
+} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
 
 /* Init-time function to add entries to that list */
 void __init hvm_register_savevm(uint16_t typecode,
                                 const char *name,
                                 hvm_save_handler save_state,
-                                hvm_save_one_handler save_one,
                                 hvm_load_handler load_state,
                                 size_t size, int kind)
 {
@@ -104,7 +102,6 @@ void __init hvm_register_savevm(uint16_t typecode,
     ASSERT(hvm_sr_handlers[typecode].save == NULL);
     ASSERT(hvm_sr_handlers[typecode].load == NULL);
     hvm_sr_handlers[typecode].save = save_state;
-    hvm_sr_handlers[typecode].save_one = save_one;
     hvm_sr_handlers[typecode].load = load_state;
     hvm_sr_handlers[typecode].name = name;
     hvm_sr_handlers[typecode].size = size;
@@ -170,13 +167,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
 
     if ( is_single_instance )
     {
-        if ( hvm_sr_handlers[typecode].save_one != NULL )
-            rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
-                                                    &ctxt);
-        else
-            rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
-
-        if ( rv != 0 )
+        if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+                                                  &ctxt)) != 0 )
         {
             printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
                    d->domain_id, typecode, rv);
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 1f59d64..cb403ff 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -605,7 +605,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     return hvm_load_entry(IOAPIC, h, &s->domU);
 }
 
-HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1, HVMSR_PER_DOM);
 
 void vioapic_reset(struct domain *d)
 {
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index ddae657..8db3e40 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1028,7 +1028,7 @@ static int viridian_load_domain_ctxt(struct domain *d,
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt, NULL,
+HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
 static enum save_return_type_t viridian_save_vcpu_ctxt(struct vcpu *v,
@@ -1079,7 +1079,6 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
-                          NULL,
                           viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
 
 static int __init parse_viridian_version(const char *arg)
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 227bed4..f5c57da 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1567,9 +1567,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
                           1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
                           1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index a0140d8..e8db9e5 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -413,7 +413,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2, HVMSR_PER_DOM);
+HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2, HVMSR_PER_DOM);
 
 void vpic_reset(struct domain *d)
 {
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index 22e5a92..c2cbd6d 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -103,8 +103,6 @@ enum save_return_type_t {
  * restoring.  Both return non-zero on error. */
 typedef enum save_return_type_t (*hvm_save_handler) (struct  vcpu *v,
                                  hvm_domain_context_t *h);
-typedef int (*hvm_save_one_handler)(struct  vcpu *v,
-                                    hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
 
@@ -113,7 +111,6 @@ typedef int (*hvm_load_handler) (struct domain *d,
 void hvm_register_savevm(uint16_t typecode,
                          const char *name, 
                          hvm_save_handler save_state,
-                         hvm_save_one_handler save_one,
                          hvm_load_handler load_state,
                          size_t size, int kind);
 
@@ -123,13 +120,12 @@ void hvm_register_savevm(uint16_t typecode,
 
 /* Syntactic sugar around that function: specify the max number of
  * saves, and this calculates the size of buffer needed */
-#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
+#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
 static int __init __hvm_register_##_x##_save_and_restore(void)            \
 {                                                                         \
     hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
                         #_x,                                              \
                         &_save,                                           \
-                        _save_one,                                        \
                         &_load,                                           \
                         (_num) * (HVM_SAVE_LENGTH(_x)                     \
                                   + sizeof (struct hvm_save_descriptor)), \
-- 
2.7.4



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
  2018-07-04 13:32 ` [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-05  8:25   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:25 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 07/11] x86/hvm: Introduce
> viridian_save_vcpu_ctxt_one() func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
>  xen/arch/x86/hvm/viridian.c | 24 +++++++++++++++---------
>  1 file changed, 15 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 694eae6..1e87cd6 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1026,20 +1026,26 @@ static int viridian_load_domain_ctxt(struct
> domain *d, hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> viridian_save_domain_ctxt,
>                            viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
> 
> -static int viridian_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> +static int viridian_save_vcpu_ctxt_one(struct vcpu *v,
> hvm_domain_context_t *h)
>  {
> -    struct vcpu *v;
> +    struct hvm_viridian_vcpu_context ctxt;
> 
> -    if ( !is_viridian_domain(d) )
> +    if ( !is_viridian_domain(v->domain) )
>          return 0;
> 
> -    for_each_vcpu( d, v ) {
> -        struct hvm_viridian_vcpu_context ctxt = {
> -            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> -            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> -        };
> +    memset(&ctxt, 0, sizeof(ctxt));
> +    ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> +    ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
> 
> -        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
> +    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
> +}
> +
> +static int viridian_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu( d, v ) {
> +        if ( viridian_save_vcpu_ctxt_one(v, h) != 0 )
>              return 1;
>      }
> 
> --
> 2.7.4



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs
  2018-07-04 13:32 ` [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
@ 2018-07-05  8:33   ` Paul Durrant
  2018-07-05  8:48     ` Alexandru Stefan ISAILA
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:33 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Patch looks ok so...

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

but...

> 
> ---
> Changes since V8:
> 	- Add comment for the handler return values.

...I don't see any comment. Am I missing something?
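
(Only a guess at what the changelog refers to, but I was expecting
something along the lines of the below next to the new typedef,
matching the existing "non-zero on error" contract:

    /*
     * Per-vcpu variant of the save handler (illustrative only).
     * Returns 0 on success and non-zero on error, as for the other
     * handler types.
     */
    typedef int (*hvm_save_one_handler)(struct vcpu *v,
                                        hvm_domain_context_t *h);
)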

> ---
>  xen/arch/x86/cpu/mcheck/vmce.c | 1 +
>  xen/arch/x86/hvm/hpet.c        | 2 +-
>  xen/arch/x86/hvm/hvm.c         | 6 +++++-
>  xen/arch/x86/hvm/i8254.c       | 2 +-
>  xen/arch/x86/hvm/irq.c         | 6 +++---
>  xen/arch/x86/hvm/mtrr.c        | 4 ++--
>  xen/arch/x86/hvm/pmtimer.c     | 2 +-
>  xen/arch/x86/hvm/rtc.c         | 2 +-
>  xen/arch/x86/hvm/save.c        | 5 ++++-
>  xen/arch/x86/hvm/vioapic.c     | 2 +-
>  xen/arch/x86/hvm/viridian.c    | 3 ++-
>  xen/arch/x86/hvm/vlapic.c      | 4 ++--
>  xen/arch/x86/hvm/vpic.c        | 2 +-
>  xen/include/asm-x86/hvm/save.h | 6 +++++-
>  14 files changed, 30 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> b/xen/arch/x86/cpu/mcheck/vmce.c
> index 8a1fbfc..29898a6 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
>  }
> 
>  HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
> +                          vmce_save_vcpu_ctxt_one,
>                            vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
> 
>  /*
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index 2837709..aff8613 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -640,7 +640,7 @@ static int hpet_load(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1,
> HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1,
> HVMSR_PER_DOM);
> 
>  static void hpet_set(HPETState *h)
>  {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 9ff9954..7d2a12d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -784,6 +784,7 @@ static int hvm_load_tsc_adjust(struct domain *d,
> hvm_domain_context_t *h)
>  }
> 
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> +                          hvm_save_tsc_adjust_one,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
> 
>  static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t
> *h)
> @@ -1181,7 +1182,8 @@ static int hvm_load_cpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
> hvm_load_cpu_ctxt,
> +HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
> hvm_save_cpu_ctxt_one,
> +                          hvm_load_cpu_ctxt,
>                            1, HVMSR_PER_VCPU);
> 
>  #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave,
> \
> @@ -1528,6 +1530,7 @@ static int __init
> hvm_register_CPU_save_and_restore(void)
>      hvm_register_savevm(CPU_XSAVE_CODE,
>                          "CPU_XSAVE",
>                          hvm_save_cpu_xsave_states,
> +                        hvm_save_cpu_xsave_states_one,
>                          hvm_load_cpu_xsave_states,
>                          HVM_CPU_XSAVE_SIZE(xfeature_mask) +
>                              sizeof(struct hvm_save_descriptor),
> @@ -1540,6 +1543,7 @@ static int __init
> hvm_register_CPU_save_and_restore(void)
>          hvm_register_savevm(CPU_MSR_CODE,
>                              "CPU_MSR",
>                              hvm_save_cpu_msrs,
> +                            hvm_save_cpu_msrs_one,
>                              hvm_load_cpu_msrs,
>                              HVM_CPU_MSR_SIZE(msr_count_max) +
>                                  sizeof(struct hvm_save_descriptor),
> diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
> index 992f08d..ec77b23 100644
> --- a/xen/arch/x86/hvm/i8254.c
> +++ b/xen/arch/x86/hvm/i8254.c
> @@ -437,7 +437,7 @@ static int pit_load(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1,
> HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1,
> HVMSR_PER_DOM);
> 
>  void pit_reset(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> index c85d004..770eab7 100644
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
> +HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL,
> irq_load_pci,
>                            1, HVMSR_PER_DOM);
> -HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
> +HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL, irq_load_isa,
>                            1, HVMSR_PER_DOM);
> -HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
> +HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL,
> irq_load_link,
>                            1, HVMSR_PER_DOM);
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index 9bbff59..ff9ff69 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -819,8 +819,8 @@ static int hvm_load_mtrr_msr(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
> hvm_load_mtrr_msr,
> -                          1, HVMSR_PER_VCPU);
> +HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
> hvm_save_mtrr_msr_one,
> +                          hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
> 
>  void memory_type_changed(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
> index 435647f..0a5e8ce 100644
> --- a/xen/arch/x86/hvm/pmtimer.c
> +++ b/xen/arch/x86/hvm/pmtimer.c
> @@ -309,7 +309,7 @@ static int acpi_load(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
> +HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
>                            1, HVMSR_PER_DOM);
> 
>  int pmtimer_change_ioport(struct domain *d, unsigned int version)
> diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> index cb75b99..ce7e71b 100644
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -783,7 +783,7 @@ static int rtc_load(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1,
> HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1,
> HVMSR_PER_DOM);
> 
>  void rtc_reset(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> index 8984a23..b674937 100644
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct
> hvm_save_header *hdr)
>  /* List of handlers for various HVM save and restore types */
>  static struct {
>      hvm_save_handler save;
> +    hvm_save_one_handler save_one;
>      hvm_load_handler load;
>      const char *name;
>      size_t size;
>      int kind;
> -} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
> +} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL,
> "<?>"}, };
> 
>  /* Init-time function to add entries to that list */
>  void __init hvm_register_savevm(uint16_t typecode,
>                                  const char *name,
>                                  hvm_save_handler save_state,
> +                                hvm_save_one_handler save_one,
>                                  hvm_load_handler load_state,
>                                  size_t size, int kind)
>  {
> @@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t typecode,
>      ASSERT(hvm_sr_handlers[typecode].save == NULL);
>      ASSERT(hvm_sr_handlers[typecode].load == NULL);
>      hvm_sr_handlers[typecode].save = save_state;
> +    hvm_sr_handlers[typecode].save_one = save_one;
>      hvm_sr_handlers[typecode].load = load_state;
>      hvm_sr_handlers[typecode].name = name;
>      hvm_sr_handlers[typecode].size = size;
> diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> index 97b419f..66f54e4 100644
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d,
> hvm_domain_context_t *h)
>      return hvm_load_entry(IOAPIC, h, &s->domU);
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1,
> HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load, 1,
> HVMSR_PER_DOM);
> 
>  void vioapic_reset(struct domain *d)
>  {
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 1e87cd6..466e015 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct domain
> *d, hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> viridian_save_domain_ctxt,
> +HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> viridian_save_domain_ctxt, NULL,
>                            viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
> 
>  static int viridian_save_vcpu_ctxt_one(struct vcpu *v,
> hvm_domain_context_t *h)
> @@ -1083,6 +1083,7 @@ static int viridian_load_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
>  }
> 
>  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
> +                          viridian_save_vcpu_ctxt_one,
>                            viridian_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
> 
>  static int __init parse_viridian_version(const char *arg)
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index 1b9f00a..eff6070 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden,
> lapic_load_hidden,
> +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
> lapic_load_hidden,
>                            1, HVMSR_PER_VCPU);
> -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs,
> lapic_load_regs,
> +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
> lapic_load_regs,
>                            1, HVMSR_PER_VCPU);
> 
>  int vlapic_init(struct vcpu *v)
> diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
> index e160bbd..ca9b4cb 100644
> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -411,7 +411,7 @@ static int vpic_load(struct domain *d,
> hvm_domain_context_t *h)
>      return 0;
>  }
> 
> -HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2,
> HVMSR_PER_DOM);
> +HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2,
> HVMSR_PER_DOM);
> 
>  void vpic_reset(struct domain *d)
>  {
> diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-
> x86/hvm/save.h
> index f889e8f..2538628 100644
> --- a/xen/include/asm-x86/hvm/save.h
> +++ b/xen/include/asm-x86/hvm/save.h
> @@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct
> hvm_domain_context *h)
>   * restoring.  Both return non-zero on error. */
>  typedef int (*hvm_save_handler) (struct domain *d,
>                                   hvm_domain_context_t *h);
> +typedef int (*hvm_save_one_handler)(struct  vcpu *v,
> +                                    hvm_domain_context_t *h);
>  typedef int (*hvm_load_handler) (struct domain *d,
>                                   hvm_domain_context_t *h);
> 
> @@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain *d,
>  void hvm_register_savevm(uint16_t typecode,
>                           const char *name,
>                           hvm_save_handler save_state,
> +                         hvm_save_one_handler save_one,
>                           hvm_load_handler load_state,
>                           size_t size, int kind);
> 
> @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
> 
>  /* Syntactic sugar around that function: specify the max number of
>   * saves, and this calculates the size of buffer needed */
> -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)
> \
> +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load,
> _num, _k)  \
>  static int __init __hvm_register_##_x##_save_and_restore(void)            \
>  {                                                                         \
>      hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
>                          #_x,                                              \
>                          &_save,                                           \
> +                        _save_one,                                        \
>                          &_load,                                           \
>                          (_num) * (HVM_SAVE_LENGTH(_x)                     \
>                                    + sizeof (struct hvm_save_descriptor)), \
> --
> 2.7.4



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-07-04 13:32 ` [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-07-05  8:41   ` Paul Durrant
  2018-07-05  8:54     ` Alexandru Stefan ISAILA
  0 siblings, 1 reply; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:41 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one()
> func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V9:
> 	- Change return of the save_one func to return hvm_save_entry.
> ---
>  xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> b/xen/arch/x86/cpu/mcheck/vmce.c
> index e07cd2f..8a1fbfc 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>      return ret;
>  }
> 
> +static int vmce_save_vcpu_ctxt_one(struct vcpu *v,
> hvm_domain_context_t *h)
> + {
> +    struct hvm_vmce_vcpu ctxt;
> +

Not my call, but I'd feel more comfortable with memsetting ctxt to zero here...

> +    ctxt.caps = v->arch.vmce.mcg_cap;
> +    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> +    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> +    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> +
> +    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> + }
> +
>  static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t
> *h)
>  {
>      struct vcpu *v;
> @@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain *d,
> hvm_domain_context_t *h)
> 
>      for_each_vcpu ( d, v )
>      {

...because it used to be implicit in the initializer here.
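
Purely for illustration, a sketch of the function with the suggested memset
(not part of the posted patch, just the same assignments preceded by an
explicit zeroing):

    static int vmce_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_vmce_vcpu ctxt;

        /* Zero everything, as the old designated initializer did implicitly. */
        memset(&ctxt, 0, sizeof(ctxt));

        ctxt.caps = v->arch.vmce.mcg_cap;
        ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
        ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
        ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;

        return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
    }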

  Paul

> -        struct hvm_vmce_vcpu ctxt = {
> -            .caps = v->arch.vmce.mcg_cap,
> -            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
> -            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
> -            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
> -        };
> -
> -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> +        err = vmce_save_vcpu_ctxt_one(v, h);
>          if ( err )
>              break;
>      }
> --
> 2.7.4



* Re: [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
  2018-07-04 13:32 ` [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-07-05  8:43   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:43 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one()
> func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> ---
> Changes since V9:
>      - Change return of the save_one func to return hvm_save_entry.
> ---
>  xen/arch/x86/hvm/hvm.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 93092d2..c47d162 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -740,20 +740,26 @@ void hvm_domain_destroy(struct domain *d)
>      destroy_vpci_mmcfg(d);
>  }
> 
> +static int hvm_save_tsc_adjust_one(struct vcpu *v,
> hvm_domain_context_t *h)
> + {
> +    struct hvm_tsc_adjust ctxt;
> +
> +    ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
> +
> +    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
> + }
> +
>  static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t
> *h)
>  {
>      struct vcpu *v;
> -    struct hvm_tsc_adjust ctxt;
>      int err = 0;
> 
>      for_each_vcpu ( d, v )
>      {
> -        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
> -        err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
> +        err = hvm_save_tsc_adjust_one(v, h);
>          if ( err )
>              break;
>      }
> -
>      return err;
>  }
> 
> --
> 2.7.4



* Re: [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-07-04 13:32 ` [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-07-05  8:44   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:44 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one
> func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> ---
> Changes since V8:
> 	- Change return of the save_one func to return hvm_save_entry
> 	- Move continue out of on func
> 	- Remove #define CONTINUE.
> ---
>  xen/arch/x86/hvm/hvm.c | 211 ++++++++++++++++++++++++++------------
> -----------
>  1 file changed, 110 insertions(+), 101 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c47d162..c343ba8 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -786,117 +786,126 @@ static int hvm_load_tsc_adjust(struct domain *d,
> hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
> 
> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t
> *h)
> +{
> +    struct segment_register seg;
> +    struct hvm_hw_cpu ctxt;
> +
> +    memset(&ctxt, 0, sizeof(ctxt));
> +
> +    /* Architecture-specific vmcs/vmcb bits */
> +    hvm_funcs.save_cpu_ctxt(v, &ctxt);
> +
> +    ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain-
> >arch.hvm_domain.sync_tsc);
> +
> +    ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> +
> +    hvm_get_segment_register(v, x86_seg_idtr, &seg);
> +    ctxt.idtr_limit = seg.limit;
> +    ctxt.idtr_base = seg.base;
> +
> +    hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> +    ctxt.gdtr_limit = seg.limit;
> +    ctxt.gdtr_base = seg.base;
> +
> +    hvm_get_segment_register(v, x86_seg_cs, &seg);
> +    ctxt.cs_sel = seg.sel;
> +    ctxt.cs_limit = seg.limit;
> +    ctxt.cs_base = seg.base;
> +    ctxt.cs_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_ds, &seg);
> +    ctxt.ds_sel = seg.sel;
> +    ctxt.ds_limit = seg.limit;
> +    ctxt.ds_base = seg.base;
> +    ctxt.ds_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_es, &seg);
> +    ctxt.es_sel = seg.sel;
> +    ctxt.es_limit = seg.limit;
> +    ctxt.es_base = seg.base;
> +    ctxt.es_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_ss, &seg);
> +    ctxt.ss_sel = seg.sel;
> +    ctxt.ss_limit = seg.limit;
> +    ctxt.ss_base = seg.base;
> +    ctxt.ss_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_fs, &seg);
> +    ctxt.fs_sel = seg.sel;
> +    ctxt.fs_limit = seg.limit;
> +    ctxt.fs_base = seg.base;
> +    ctxt.fs_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_gs, &seg);
> +    ctxt.gs_sel = seg.sel;
> +    ctxt.gs_limit = seg.limit;
> +    ctxt.gs_base = seg.base;
> +    ctxt.gs_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_tr, &seg);
> +    ctxt.tr_sel = seg.sel;
> +    ctxt.tr_limit = seg.limit;
> +    ctxt.tr_base = seg.base;
> +    ctxt.tr_arbytes = seg.attr;
> +
> +    hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> +    ctxt.ldtr_sel = seg.sel;
> +    ctxt.ldtr_limit = seg.limit;
> +    ctxt.ldtr_base = seg.base;
> +    ctxt.ldtr_arbytes = seg.attr;
> +
> +    if ( v->fpu_initialised )
> +    {
> +        memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> +        ctxt.flags = XEN_X86_FPU_INITIALISED;
> +    }
> +
> +    ctxt.rax = v->arch.user_regs.rax;
> +    ctxt.rbx = v->arch.user_regs.rbx;
> +    ctxt.rcx = v->arch.user_regs.rcx;
> +    ctxt.rdx = v->arch.user_regs.rdx;
> +    ctxt.rbp = v->arch.user_regs.rbp;
> +    ctxt.rsi = v->arch.user_regs.rsi;
> +    ctxt.rdi = v->arch.user_regs.rdi;
> +    ctxt.rsp = v->arch.user_regs.rsp;
> +    ctxt.rip = v->arch.user_regs.rip;
> +    ctxt.rflags = v->arch.user_regs.rflags;
> +    ctxt.r8  = v->arch.user_regs.r8;
> +    ctxt.r9  = v->arch.user_regs.r9;
> +    ctxt.r10 = v->arch.user_regs.r10;
> +    ctxt.r11 = v->arch.user_regs.r11;
> +    ctxt.r12 = v->arch.user_regs.r12;
> +    ctxt.r13 = v->arch.user_regs.r13;
> +    ctxt.r14 = v->arch.user_regs.r14;
> +    ctxt.r15 = v->arch.user_regs.r15;
> +    ctxt.dr0 = v->arch.debugreg[0];
> +    ctxt.dr1 = v->arch.debugreg[1];
> +    ctxt.dr2 = v->arch.debugreg[2];
> +    ctxt.dr3 = v->arch.debugreg[3];
> +    ctxt.dr6 = v->arch.debugreg[6];
> +    ctxt.dr7 = v->arch.debugreg[7];
> +
> +    return hvm_save_entry(CPU, v->vcpu_id, h, &ctxt);
> +}
> +
>  static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>  {
>      struct vcpu *v;
> -    struct hvm_hw_cpu ctxt;
> -    struct segment_register seg;
> 
>      for_each_vcpu ( d, v )
>      {
> -        /* We don't need to save state for a vcpu that is down; the restore
> -         * code will leave it down if there is nothing saved. */
> +        /*
> +         * We don't need to save state for a vcpu that is down; the restore
> +         * code will leave it down if there is nothing saved.
> +         */
>          if ( v->pause_flags & VPF_down )
>              continue;
> 
> -        memset(&ctxt, 0, sizeof(ctxt));
> -
> -        /* Architecture-specific vmcs/vmcb bits */
> -        hvm_funcs.save_cpu_ctxt(v, &ctxt);
> -
> -        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
> -
> -        ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
> -
> -        hvm_get_segment_register(v, x86_seg_idtr, &seg);
> -        ctxt.idtr_limit = seg.limit;
> -        ctxt.idtr_base = seg.base;
> -
> -        hvm_get_segment_register(v, x86_seg_gdtr, &seg);
> -        ctxt.gdtr_limit = seg.limit;
> -        ctxt.gdtr_base = seg.base;
> -
> -        hvm_get_segment_register(v, x86_seg_cs, &seg);
> -        ctxt.cs_sel = seg.sel;
> -        ctxt.cs_limit = seg.limit;
> -        ctxt.cs_base = seg.base;
> -        ctxt.cs_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_ds, &seg);
> -        ctxt.ds_sel = seg.sel;
> -        ctxt.ds_limit = seg.limit;
> -        ctxt.ds_base = seg.base;
> -        ctxt.ds_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_es, &seg);
> -        ctxt.es_sel = seg.sel;
> -        ctxt.es_limit = seg.limit;
> -        ctxt.es_base = seg.base;
> -        ctxt.es_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_ss, &seg);
> -        ctxt.ss_sel = seg.sel;
> -        ctxt.ss_limit = seg.limit;
> -        ctxt.ss_base = seg.base;
> -        ctxt.ss_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_fs, &seg);
> -        ctxt.fs_sel = seg.sel;
> -        ctxt.fs_limit = seg.limit;
> -        ctxt.fs_base = seg.base;
> -        ctxt.fs_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_gs, &seg);
> -        ctxt.gs_sel = seg.sel;
> -        ctxt.gs_limit = seg.limit;
> -        ctxt.gs_base = seg.base;
> -        ctxt.gs_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_tr, &seg);
> -        ctxt.tr_sel = seg.sel;
> -        ctxt.tr_limit = seg.limit;
> -        ctxt.tr_base = seg.base;
> -        ctxt.tr_arbytes = seg.attr;
> -
> -        hvm_get_segment_register(v, x86_seg_ldtr, &seg);
> -        ctxt.ldtr_sel = seg.sel;
> -        ctxt.ldtr_limit = seg.limit;
> -        ctxt.ldtr_base = seg.base;
> -        ctxt.ldtr_arbytes = seg.attr;
> -
> -        if ( v->fpu_initialised )
> -        {
> -            memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
> -            ctxt.flags = XEN_X86_FPU_INITIALISED;
> -        }
> 
> -        ctxt.rax = v->arch.user_regs.rax;
> -        ctxt.rbx = v->arch.user_regs.rbx;
> -        ctxt.rcx = v->arch.user_regs.rcx;
> -        ctxt.rdx = v->arch.user_regs.rdx;
> -        ctxt.rbp = v->arch.user_regs.rbp;
> -        ctxt.rsi = v->arch.user_regs.rsi;
> -        ctxt.rdi = v->arch.user_regs.rdi;
> -        ctxt.rsp = v->arch.user_regs.rsp;
> -        ctxt.rip = v->arch.user_regs.rip;
> -        ctxt.rflags = v->arch.user_regs.rflags;
> -        ctxt.r8  = v->arch.user_regs.r8;
> -        ctxt.r9  = v->arch.user_regs.r9;
> -        ctxt.r10 = v->arch.user_regs.r10;
> -        ctxt.r11 = v->arch.user_regs.r11;
> -        ctxt.r12 = v->arch.user_regs.r12;
> -        ctxt.r13 = v->arch.user_regs.r13;
> -        ctxt.r14 = v->arch.user_regs.r14;
> -        ctxt.r15 = v->arch.user_regs.r15;
> -        ctxt.dr0 = v->arch.debugreg[0];
> -        ctxt.dr1 = v->arch.debugreg[1];
> -        ctxt.dr2 = v->arch.debugreg[2];
> -        ctxt.dr3 = v->arch.debugreg[3];
> -        ctxt.dr6 = v->arch.debugreg[6];
> -        ctxt.dr7 = v->arch.debugreg[7];
> -
> -        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
> -            return 1;
> +        if ( hvm_save_cpu_ctxt_one(v, h) != 0 )
> +            return 1;
>      }
>      return 0;
>  }
> --
> 2.7.4



* Re: [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
  2018-07-04 13:32 ` [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-07-05  8:47   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:47 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 04/11] x86/hvm: Introduce
> hvm_save_cpu_xsave_states_one
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V9:
> 	- Move continue out of save_one func.
> ---
>  xen/arch/x86/hvm/hvm.c | 34 +++++++++++++++++++++-------------
>  1 file changed, 21 insertions(+), 13 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c343ba8..495abe5 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1188,30 +1188,38 @@ HVM_REGISTER_SAVE_RESTORE(CPU,
> hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
>                                             save_area) + \
>                                    xstate_ctxt_size(xcr0))
> 
> -static int hvm_save_cpu_xsave_states(struct domain *d,
> hvm_domain_context_t *h)
> +static int hvm_save_cpu_xsave_states_one(struct vcpu *v,
> hvm_domain_context_t *h)
>  {
> -    struct vcpu *v;
>      struct hvm_hw_cpu_xsave *ctxt;
> +    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
> 
>      if ( !cpu_has_xsave )
>          return 0;   /* do nothing */
> 

Since this function should not be called if !xsave_enabled(v), it would seem reasonable to ASSERT that here.
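
Something along these lines, near the top of hvm_save_cpu_xsave_states_one()
(only a sketch, not a tested change):

    if ( !cpu_has_xsave )
        return 0;   /* do nothing */

    /* Callers are expected to skip any vcpu for which xsave is not enabled. */
    ASSERT(xsave_enabled(v));

with the rest of the function unchanged.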

  Paul

> +    if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
> +        return 1;
> +    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
> +    h->cur += size;
> +    ctxt->xfeature_mask = xfeature_mask;
> +    ctxt->xcr0 = v->arch.xcr0;
> +    ctxt->xcr0_accum = v->arch.xcr0_accum;
> +
> +    expand_xsave_states(v, &ctxt->save_area,
> +                        size - offsetof(typeof(*ctxt), save_area));
> +    return 0;
> + }
> +
> +static int hvm_save_cpu_xsave_states(struct domain *d,
> hvm_domain_context_t *h)
> +{
> +    struct vcpu *v;
> +
>      for_each_vcpu ( d, v )
>      {
> -        unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
> -
>          if ( !xsave_enabled(v) )
>              continue;
> -        if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
> +
> +        if ( hvm_save_cpu_xsave_states_one(v, h) != 0 )
>              return 1;
> -        ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
> -        h->cur += size;
> -
> -        ctxt->xfeature_mask = xfeature_mask;
> -        ctxt->xcr0 = v->arch.xcr0;
> -        ctxt->xcr0_accum = v->arch.xcr0_accum;
> -        expand_xsave_states(v, &ctxt->save_area,
> -                            size - offsetof(typeof(*ctxt), save_area));
>      }
> 
>      return 0;
> --
> 2.7.4



* Re: [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs
  2018-07-05  8:33   ` Paul Durrant
@ 2018-07-05  8:48     ` Alexandru Stefan ISAILA
  0 siblings, 0 replies; 23+ messages in thread
From: Alexandru Stefan ISAILA @ 2018-07-05  8:48 UTC (permalink / raw)
  To: Paul.Durrant, xen-devel; +Cc: Ian.Jackson, wei.liu2, jbeulich, Andrew.Cooper3

On Thu, 2018-07-05 at 08:33 +0000, Paul Durrant wrote:
> > 
> > -----Original Message-----
> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > Sent: 04 July 2018 14:32
> > To: xen-devel@lists.xen.org
> > Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.
> > com>;
> > jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> > Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> > <aisaila@bitdefender.com>
> > Subject: [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs
> > 
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> Patch looks ok so...
> 
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> 
> but...
> 
> > 
> > 
> > ---
> > Changes since V8:
> > 	- Add comment for the handler return values.
> ...I don't see any comment. Am I missing something?

I've removed the comment and added a return enum with the members
defined explicitly. The change is done in patch 10.
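
Roughly along these lines (the names below are only illustrative; the real
definition is the one introduced in patch 10, which is not quoted here):

    /* Illustrative sketch only; see patch 10 for the actual enum. */
    enum hvm_save_one_rc {
        HVM_SAVE_ONE_OK = 0,   /* entry written (or deliberately skipped) */
        HVM_SAVE_ONE_ERR,      /* failure, abort the save */
    };
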
> > ---
> >  xen/arch/x86/cpu/mcheck/vmce.c | 1 +
> >  xen/arch/x86/hvm/hpet.c        | 2 +-
> >  xen/arch/x86/hvm/hvm.c         | 6 +++++-
> >  xen/arch/x86/hvm/i8254.c       | 2 +-
> >  xen/arch/x86/hvm/irq.c         | 6 +++---
> >  xen/arch/x86/hvm/mtrr.c        | 4 ++--
> >  xen/arch/x86/hvm/pmtimer.c     | 2 +-
> >  xen/arch/x86/hvm/rtc.c         | 2 +-
> >  xen/arch/x86/hvm/save.c        | 5 ++++-
> >  xen/arch/x86/hvm/vioapic.c     | 2 +-
> >  xen/arch/x86/hvm/viridian.c    | 3 ++-
> >  xen/arch/x86/hvm/vlapic.c      | 4 ++--
> >  xen/arch/x86/hvm/vpic.c        | 2 +-
> >  xen/include/asm-x86/hvm/save.h | 6 +++++-
> >  14 files changed, 30 insertions(+), 17 deletions(-)
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> > b/xen/arch/x86/cpu/mcheck/vmce.c
> > index 8a1fbfc..29898a6 100644
> > --- a/xen/arch/x86/cpu/mcheck/vmce.c
> > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> > @@ -396,6 +396,7 @@ static int vmce_load_vcpu_ctxt(struct domain
> > *d,
> > hvm_domain_context_t *h)
> >  }
> > 
> >  HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, vmce_save_vcpu_ctxt,
> > +                          vmce_save_vcpu_ctxt_one,
> >                            vmce_load_vcpu_ctxt, 1, HVMSR_PER_VCPU);
> > 
> >  /*
> > diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> > index 2837709..aff8613 100644
> > --- a/xen/arch/x86/hvm/hpet.c
> > +++ b/xen/arch/x86/hvm/hpet.c
> > @@ -640,7 +640,7 @@ static int hpet_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, hpet_load, 1,
> > HVMSR_PER_DOM);
> > +HVM_REGISTER_SAVE_RESTORE(HPET, hpet_save, NULL, hpet_load, 1,
> > HVMSR_PER_DOM);
> > 
> >  static void hpet_set(HPETState *h)
> >  {
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 9ff9954..7d2a12d 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -784,6 +784,7 @@ static int hvm_load_tsc_adjust(struct domain
> > *d,
> > hvm_domain_context_t *h)
> >  }
> > 
> >  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> > +                          hvm_save_tsc_adjust_one,
> >                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
> > 
> >  static int hvm_save_cpu_ctxt_one(struct vcpu *v,
> > hvm_domain_context_t
> > *h)
> > @@ -1181,7 +1182,8 @@ static int hvm_load_cpu_ctxt(struct domain
> > *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
> > hvm_load_cpu_ctxt,
> > +HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt,
> > hvm_save_cpu_ctxt_one,
> > +                          hvm_load_cpu_ctxt,
> >                            1, HVMSR_PER_VCPU);
> > 
> >  #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct
> > hvm_hw_cpu_xsave,
> > \
> > @@ -1528,6 +1530,7 @@ static int __init
> > hvm_register_CPU_save_and_restore(void)
> >      hvm_register_savevm(CPU_XSAVE_CODE,
> >                          "CPU_XSAVE",
> >                          hvm_save_cpu_xsave_states,
> > +                        hvm_save_cpu_xsave_states_one,
> >                          hvm_load_cpu_xsave_states,
> >                          HVM_CPU_XSAVE_SIZE(xfeature_mask) +
> >                              sizeof(struct hvm_save_descriptor),
> > @@ -1540,6 +1543,7 @@ static int __init
> > hvm_register_CPU_save_and_restore(void)
> >          hvm_register_savevm(CPU_MSR_CODE,
> >                              "CPU_MSR",
> >                              hvm_save_cpu_msrs,
> > +                            hvm_save_cpu_msrs_one,
> >                              hvm_load_cpu_msrs,
> >                              HVM_CPU_MSR_SIZE(msr_count_max) +
> >                                  sizeof(struct
> > hvm_save_descriptor),
> > diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
> > index 992f08d..ec77b23 100644
> > --- a/xen/arch/x86/hvm/i8254.c
> > +++ b/xen/arch/x86/hvm/i8254.c
> > @@ -437,7 +437,7 @@ static int pit_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, pit_load, 1,
> > HVMSR_PER_DOM);
> > +HVM_REGISTER_SAVE_RESTORE(PIT, pit_save, NULL, pit_load, 1,
> > HVMSR_PER_DOM);
> > 
> >  void pit_reset(struct domain *d)
> >  {
> > diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> > index c85d004..770eab7 100644
> > --- a/xen/arch/x86/hvm/irq.c
> > +++ b/xen/arch/x86/hvm/irq.c
> > @@ -764,9 +764,9 @@ static int irq_load_link(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
> > +HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, NULL,
> > irq_load_pci,
> >                            1, HVMSR_PER_DOM);
> > -HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
> > +HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, NULL,
> > irq_load_isa,
> >                            1, HVMSR_PER_DOM);
> > -HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
> > +HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, NULL,
> > irq_load_link,
> >                            1, HVMSR_PER_DOM);
> > diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> > index 9bbff59..ff9ff69 100644
> > --- a/xen/arch/x86/hvm/mtrr.c
> > +++ b/xen/arch/x86/hvm/mtrr.c
> > @@ -819,8 +819,8 @@ static int hvm_load_mtrr_msr(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
> > hvm_load_mtrr_msr,
> > -                          1, HVMSR_PER_VCPU);
> > +HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr,
> > hvm_save_mtrr_msr_one,
> > +                          hvm_load_mtrr_msr, 1, HVMSR_PER_VCPU);
> > 
> >  void memory_type_changed(struct domain *d)
> >  {
> > diff --git a/xen/arch/x86/hvm/pmtimer.c
> > b/xen/arch/x86/hvm/pmtimer.c
> > index 435647f..0a5e8ce 100644
> > --- a/xen/arch/x86/hvm/pmtimer.c
> > +++ b/xen/arch/x86/hvm/pmtimer.c
> > @@ -309,7 +309,7 @@ static int acpi_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, acpi_load,
> > +HVM_REGISTER_SAVE_RESTORE(PMTIMER, acpi_save, NULL, acpi_load,
> >                            1, HVMSR_PER_DOM);
> > 
> >  int pmtimer_change_ioport(struct domain *d, unsigned int version)
> > diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> > index cb75b99..ce7e71b 100644
> > --- a/xen/arch/x86/hvm/rtc.c
> > +++ b/xen/arch/x86/hvm/rtc.c
> > @@ -783,7 +783,7 @@ static int rtc_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, rtc_load, 1,
> > HVMSR_PER_DOM);
> > +HVM_REGISTER_SAVE_RESTORE(RTC, rtc_save, NULL, rtc_load, 1,
> > HVMSR_PER_DOM);
> > 
> >  void rtc_reset(struct domain *d)
> >  {
> > diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> > index 8984a23..b674937 100644
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct
> > hvm_save_header *hdr)
> >  /* List of handlers for various HVM save and restore types */
> >  static struct {
> >      hvm_save_handler save;
> > +    hvm_save_one_handler save_one;
> >      hvm_load_handler load;
> >      const char *name;
> >      size_t size;
> >      int kind;
> > -} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
> > +} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
> > 
> >  /* Init-time function to add entries to that list */
> >  void __init hvm_register_savevm(uint16_t typecode,
> >                                  const char *name,
> >                                  hvm_save_handler save_state,
> > +                                hvm_save_one_handler save_one,
> >                                  hvm_load_handler load_state,
> >                                  size_t size, int kind)
> >  {
> > @@ -102,6 +104,7 @@ void __init hvm_register_savevm(uint16_t
> > typecode,
> >      ASSERT(hvm_sr_handlers[typecode].save == NULL);
> >      ASSERT(hvm_sr_handlers[typecode].load == NULL);
> >      hvm_sr_handlers[typecode].save = save_state;
> > +    hvm_sr_handlers[typecode].save_one = save_one;
> >      hvm_sr_handlers[typecode].load = load_state;
> >      hvm_sr_handlers[typecode].name = name;
> >      hvm_sr_handlers[typecode].size = size;
> > diff --git a/xen/arch/x86/hvm/vioapic.c
> > b/xen/arch/x86/hvm/vioapic.c
> > index 97b419f..66f54e4 100644
> > --- a/xen/arch/x86/hvm/vioapic.c
> > +++ b/xen/arch/x86/hvm/vioapic.c
> > @@ -601,7 +601,7 @@ static int ioapic_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return hvm_load_entry(IOAPIC, h, &s->domU);
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, ioapic_load, 1,
> > HVMSR_PER_DOM);
> > +HVM_REGISTER_SAVE_RESTORE(IOAPIC, ioapic_save, NULL, ioapic_load,
> > 1,
> > HVMSR_PER_DOM);
> > 
> >  void vioapic_reset(struct domain *d)
> >  {
> > diff --git a/xen/arch/x86/hvm/viridian.c
> > b/xen/arch/x86/hvm/viridian.c
> > index 1e87cd6..466e015 100644
> > --- a/xen/arch/x86/hvm/viridian.c
> > +++ b/xen/arch/x86/hvm/viridian.c
> > @@ -1023,7 +1023,7 @@ static int viridian_load_domain_ctxt(struct
> > domain
> > *d, hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> > viridian_save_domain_ctxt,
> > +HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> > viridian_save_domain_ctxt, NULL,
> >                            viridian_load_domain_ctxt, 1,
> > HVMSR_PER_DOM);
> > 
> >  static int viridian_save_vcpu_ctxt_one(struct vcpu *v,
> > hvm_domain_context_t *h)
> > @@ -1083,6 +1083,7 @@ static int viridian_load_vcpu_ctxt(struct
> > domain *d,
> > hvm_domain_context_t *h)
> >  }
> > 
> >  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_VCPU, viridian_save_vcpu_ctxt,
> > +                          viridian_save_vcpu_ctxt_one,
> >                            viridian_load_vcpu_ctxt, 1,
> > HVMSR_PER_VCPU);
> > 
> >  static int __init parse_viridian_version(const char *arg)
> > diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> > index 1b9f00a..eff6070 100644
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden,
> > lapic_load_hidden,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
> > lapic_load_hidden,
> >                            1, HVMSR_PER_VCPU);
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs,
> > lapic_load_regs,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
> > lapic_load_regs,
> >                            1, HVMSR_PER_VCPU);
> > 
> >  int vlapic_init(struct vcpu *v)
> > diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
> > index e160bbd..ca9b4cb 100644
> > --- a/xen/arch/x86/hvm/vpic.c
> > +++ b/xen/arch/x86/hvm/vpic.c
> > @@ -411,7 +411,7 @@ static int vpic_load(struct domain *d,
> > hvm_domain_context_t *h)
> >      return 0;
> >  }
> > 
> > -HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, vpic_load, 2,
> > HVMSR_PER_DOM);
> > +HVM_REGISTER_SAVE_RESTORE(PIC, vpic_save, NULL, vpic_load, 2,
> > HVMSR_PER_DOM);
> > 
> >  void vpic_reset(struct domain *d)
> >  {
> > diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-
> > x86/hvm/save.h
> > index f889e8f..2538628 100644
> > --- a/xen/include/asm-x86/hvm/save.h
> > +++ b/xen/include/asm-x86/hvm/save.h
> > @@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct
> > hvm_domain_context *h)
> >   * restoring.  Both return non-zero on error. */
> >  typedef int (*hvm_save_handler) (struct domain *d,
> >                                   hvm_domain_context_t *h);
> > +typedef int (*hvm_save_one_handler)(struct  vcpu *v,
> > +                                    hvm_domain_context_t *h);
> >  typedef int (*hvm_load_handler) (struct domain *d,
> >                                   hvm_domain_context_t *h);
> > 
> > @@ -105,6 +107,7 @@ typedef int (*hvm_load_handler) (struct domain
> > *d,
> >  void hvm_register_savevm(uint16_t typecode,
> >                           const char *name,
> >                           hvm_save_handler save_state,
> > +                         hvm_save_one_handler save_one,
> >                           hvm_load_handler load_state,
> >                           size_t size, int kind);
> > 
> > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
> > 
> >  /* Syntactic sugar around that function: specify the max number of
> >   * saves, and this calculates the size of buffer needed */
> > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
> > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
> >  static int __init __hvm_register_##_x##_save_and_restore(void)            \
> >  {                                                                         \
> >      hvm_register_savevm(HVM_SAVE_CODE(_x),                                \
> >                          #_x,                                              \
> >                          &_save,                                           \
> > +                        _save_one,                                        \
> >                          &_load,                                           \
> >                          (_num) * (HVM_SAVE_LENGTH(_x)                     \
> >                                    + sizeof (struct hvm_save_descriptor)), \
> > --
> > 2.7.4
> 

* Re: [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func
  2018-07-04 13:32 ` [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-07-05  8:53   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:53 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one
> func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> 
> ---
> Changes since V7:
> 	- Moved the init of ctxt->count to hvm_save_cpu_msrs_one()
> ---
>  xen/arch/x86/hvm/hvm.c | 101 +++++++++++++++++++++++++++-----------
> -----------
>  1 file changed, 55 insertions(+), 46 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 495abe5..9ff9954 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1362,66 +1362,75 @@ static const uint32_t msrs_to_send[] = {
>  };
>  static unsigned int __read_mostly msr_count_max =
> ARRAY_SIZE(msrs_to_send);
> 
> -static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t
> *h)
> +static int hvm_save_cpu_msrs_one(struct vcpu *v, hvm_domain_context_t
> *h)
>  {
> -    struct vcpu *v;
> +    unsigned int i;
> +    struct hvm_msr *ctxt;
> +    struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
> 
> -    for_each_vcpu ( d, v )
> +    if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
> +                         HVM_CPU_MSR_SIZE(msr_count_max)) )
> +        return 1;
> +    ctxt = (struct hvm_msr *)&h->data[h->cur];
> +
> +    ctxt->count = 0;
> +    for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
>      {
> -        struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
> -        struct hvm_msr *ctxt;
> -        unsigned int i;
> +        uint64_t val;
> +        int rc = guest_rdmsr(v, msrs_to_send[i], &val);
> 
> -        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
> -                             HVM_CPU_MSR_SIZE(msr_count_max)) )
> -            return 1;
> -        ctxt = (struct hvm_msr *)&h->data[h->cur];
> -        ctxt->count = 0;
> +        /*
> +         * It is the programmers responsibility to ensure that
> +         * msrs_to_send[] contain generally-read/write MSRs.
> +         * X86EMUL_EXCEPTION here implies a missing feature, and that the
> +         * guest doesn't have access to the MSR.
> +         */
> +        if ( rc == X86EMUL_EXCEPTION )
> +            continue;
> 
> -        for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
> +        if ( rc != X86EMUL_OKAY )
>          {
> -            uint64_t val;
> -            int rc = guest_rdmsr(v, msrs_to_send[i], &val);
> +            ASSERT_UNREACHABLE();
> +            return -ENXIO;
> +        }
> 
> -            /*
> -             * It is the programmers responsibility to ensure that
> -             * msrs_to_send[] contain generally-read/write MSRs.
> -             * X86EMUL_EXCEPTION here implies a missing feature, and that the
> -             * guest doesn't have access to the MSR.
> -             */
> -            if ( rc == X86EMUL_EXCEPTION )
> -                continue;
> +        if ( !val )
> +            continue; /* Skip empty MSRs. */
> 
> -            if ( rc != X86EMUL_OKAY )
> -            {
> -                ASSERT_UNREACHABLE();
> -                return -ENXIO;
> -            }
> +        ctxt->msr[ctxt->count].index = msrs_to_send[i];
> +        ctxt->msr[ctxt->count++].val = val;
> +    }
> 
> -            if ( !val )
> -                continue; /* Skip empty MSRs. */
> +    if ( hvm_funcs.save_msr )
> +        hvm_funcs.save_msr(v, ctxt);
> 
> -            ctxt->msr[ctxt->count].index = msrs_to_send[i];
> -            ctxt->msr[ctxt->count++].val = val;
> -        }
> +    ASSERT(ctxt->count <= msr_count_max);
> 
> -        if ( hvm_funcs.save_msr )
> -            hvm_funcs.save_msr(v, ctxt);
> +    for ( i = 0; i < ctxt->count; ++i )
> +        ctxt->msr[i]._rsvd = 0;
> 
> -        ASSERT(ctxt->count <= msr_count_max);
> +    if ( ctxt->count )
> +    {
> +        /* Rewrite length to indicate how much space we actually used. */
> +        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
> +        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
> +    }
> +    else
> +        /* or rewind and remove the descriptor from the stream. */
> +        h->cur -= sizeof(struct hvm_save_descriptor);
> 
> -        for ( i = 0; i < ctxt->count; ++i )
> -            ctxt->msr[i]._rsvd = 0;
> +    return 0;
> +}
> 
> -        if ( ctxt->count )
> -        {
> -            /* Rewrite length to indicate how much space we actually used. */
> -            desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
> -            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
> -        }
> -        else
> -            /* or rewind and remove the descriptor from the stream. */
> -            h->cur -= sizeof(struct hvm_save_descriptor);
> +
> +static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t
> *h)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu ( d, v )
> +    {
> +        if ( hvm_save_cpu_msrs_one(v, h) != 0 )
> +            return 1;
>      }
> 
>      return 0;
> --
> 2.7.4



* Re: [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-07-05  8:41   ` Paul Durrant
@ 2018-07-05  8:54     ` Alexandru Stefan ISAILA
  2018-07-05  9:52       ` Jan Beulich
  0 siblings, 1 reply; 23+ messages in thread
From: Alexandru Stefan ISAILA @ 2018-07-05  8:54 UTC (permalink / raw)
  To: Paul.Durrant, xen-devel; +Cc: Ian.Jackson, wei.liu2, jbeulich, Andrew.Cooper3

On Thu, 2018-07-05 at 08:41 +0000, Paul Durrant wrote:
> > 
> > -----Original Message-----
> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > Sent: 04 July 2018 14:32
> > To: xen-devel@lists.xen.org
> > Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.
> > com>;
> > jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> > Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> > <aisaila@bitdefender.com>
> > Subject: [PATCH v10 01/11] x86/cpu: Introduce
> > vmce_save_vcpu_ctxt_one()
> > func
> > 
> > This is used to save data from a single instance.
> > 
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > 
> > ---
> > Changes since V9:
> > 	- Change return of the save_one func to return hvm_save_entry.
> > ---
> >  xen/arch/x86/cpu/mcheck/vmce.c | 21 +++++++++++++--------
> >  1 file changed, 13 insertions(+), 8 deletions(-)
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> > b/xen/arch/x86/cpu/mcheck/vmce.c
> > index e07cd2f..8a1fbfc 100644
> > --- a/xen/arch/x86/cpu/mcheck/vmce.c
> > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> > @@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
> >      return ret;
> >  }
> > 
> > +static int vmce_save_vcpu_ctxt_one(struct vcpu *v,
> > hvm_domain_context_t *h)
> > + {
> > +    struct hvm_vmce_vcpu ctxt;
> > +
> Not my call, but I'd feel more comfortable with memsetting ctxt to
> zero here...

If there is a new version, I will add the memset to 0.

Alex
> 
> > 
> > +    ctxt.caps = v->arch.vmce.mcg_cap;
> > +    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> > +    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> > +    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> > +
> > +    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> > + }
> > +
> >  static int vmce_save_vcpu_ctxt(struct domain *d,
> > hvm_domain_context_t
> > *h)
> >  {
> >      struct vcpu *v;
> > @@ -356,14 +368,7 @@ static int vmce_save_vcpu_ctxt(struct domain
> > *d,
> > hvm_domain_context_t *h)
> > 
> >      for_each_vcpu ( d, v )
> >      {
> ...because it used to be implicit in the initializer here.
> 
>   Paul
> 
> > 
> > -        struct hvm_vmce_vcpu ctxt = {
> > -            .caps = v->arch.vmce.mcg_cap,
> > -            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
> > -            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
> > -            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
> > -        };
> > -
> > -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> > +        err = vmce_save_vcpu_ctxt_one(v, h);
> >          if ( err )
> >              break;
> >      }
> > --
> > 2.7.4
> 

* Re: [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func
  2018-07-04 13:32 ` [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-07-05  8:56   ` Paul Durrant
  0 siblings, 0 replies; 23+ messages in thread
From: Paul Durrant @ 2018-07-05  8:56 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 04 July 2018 14:32
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one
> func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 
> ---
> Changes since V9:
> 	- Change return of the save_one func to return hvm_save_entry.
> 
> Note: This patch is based on Roger Pau Monne's series[1]
> ---
>  xen/arch/x86/hvm/mtrr.c | 75 +++++++++++++++++++++++++---------------
> ---------
>  1 file changed, 39 insertions(+), 36 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index 48facbb..9bbff59 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -718,6 +718,44 @@ int hvm_set_mem_pinned_cacheattr(struct domain
> *d, uint64_t gfn_start,
>      return 0;
>  }
> 
> +static int hvm_save_mtrr_msr_one(struct vcpu *v, hvm_domain_context_t
> *h)
> +{
> +    const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
> +    struct hvm_hw_mtrr hw_mtrr;
> +    unsigned int i;

I believe coding style says there should be a blank line here.
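
I.e. (sketch of the declaration block with the separating blank line added):

    const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
    struct hvm_hw_mtrr hw_mtrr;
    unsigned int i;

    memset(&hw_mtrr, 0, sizeof(hw_mtrr));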

  Paul

> +    memset(&hw_mtrr, 0, sizeof(hw_mtrr));
> +    hw_mtrr.msr_mtrr_def_type = mtrr_state->def_type |
> +                                MASK_INSR(mtrr_state->fixed_enabled,
> +                                          MTRRdefType_FE) |
> +                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E);
> +    hw_mtrr.msr_mtrr_cap = mtrr_state->mtrr_cap;
> +
> +    if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> +         (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> +    {
> +        dprintk(XENLOG_G_ERR,
> +                "HVM save: %pv: too many (%lu) variable range MTRRs\n",
> +                v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
> +        return -EINVAL;
> +    }
> +    hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> +    for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++ )
> +    {
> +        /* save physbase */
> +        hw_mtrr.msr_mtrr_var[i*2] =
> +            ((uint64_t*)mtrr_state->var_ranges)[i*2];
> +        /* save physmask */
> +        hw_mtrr.msr_mtrr_var[i*2+1] =
> +            ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> +    }
> +
> +    for ( i = 0; i < NUM_FIXED_MSR; i++ )
> +        hw_mtrr.msr_mtrr_fixed[i] =
> +            ((uint64_t*)mtrr_state->fixed_ranges)[i];
> +
> +    return hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr);
> + }
> +
>  static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t
> *h)
>  {
>      struct vcpu *v;
> @@ -725,42 +763,7 @@ static int hvm_save_mtrr_msr(struct domain *d,
> hvm_domain_context_t *h)
>      /* save mtrr&pat */
>      for_each_vcpu(d, v)
>      {
> -        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
> -        struct hvm_hw_mtrr hw_mtrr = {
> -            .msr_mtrr_def_type = mtrr_state->def_type |
> -                                 MASK_INSR(mtrr_state->fixed_enabled,
> -                                           MTRRdefType_FE) |
> -                                 MASK_INSR(mtrr_state->enabled, MTRRdefType_E),
> -            .msr_mtrr_cap      = mtrr_state->mtrr_cap,
> -        };
> -        unsigned int i;
> -
> -        if ( MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT) >
> -             (ARRAY_SIZE(hw_mtrr.msr_mtrr_var) / 2) )
> -        {
> -            dprintk(XENLOG_G_ERR,
> -                    "HVM save: %pv: too many (%lu) variable range MTRRs\n",
> -                    v, MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT));
> -            return -EINVAL;
> -        }
> -
> -        hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
> -
> -        for ( i = 0; i < MASK_EXTR(hw_mtrr.msr_mtrr_cap, MTRRcap_VCNT); i++
> )
> -        {
> -            /* save physbase */
> -            hw_mtrr.msr_mtrr_var[i*2] =
> -                ((uint64_t*)mtrr_state->var_ranges)[i*2];
> -            /* save physmask */
> -            hw_mtrr.msr_mtrr_var[i*2+1] =
> -                ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
> -        }
> -
> -        for ( i = 0; i < NUM_FIXED_MSR; i++ )
> -            hw_mtrr.msr_mtrr_fixed[i] =
> -                ((uint64_t*)mtrr_state->fixed_ranges)[i];
> -
> -        if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
> +        if ( hvm_save_mtrr_msr_one(v, h) != 0 )
>              return 1;
>      }
>      return 0;
> --
> 2.7.4



* Re: [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-07-05  8:54     ` Alexandru Stefan ISAILA
@ 2018-07-05  9:52       ` Jan Beulich
  0 siblings, 0 replies; 23+ messages in thread
From: Jan Beulich @ 2018-07-05  9:52 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, xen-devel, Paul Durrant, Wei Liu, Ian.Jackson

>>> On 05.07.18 at 10:54, <aisaila@bitdefender.com> wrote:
> On Jo, 2018-07-05 at 08:41 +0000, Paul Durrant wrote:
>> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
>> > Sent: 04 July 2018 14:32
>> > --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> > @@ -349,6 +349,18 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>> >      return ret;
>> >  }
>> > 
>> > +static int vmce_save_vcpu_ctxt_one(struct vcpu *v,
>> > hvm_domain_context_t *h)
>> > + {
>> > +    struct hvm_vmce_vcpu ctxt;
>> > +
>> Not my call, but I'd feel more comfortable with memsetting ctxt to
>> zero here...
> 
> If there will be a new version I will add the memset to 0

There will need to be a new version for this alone, but please allow some
more time for people to look at your series. At the rate you've been
sending versions, I haven't ever made it past the middle of your series.

Jan




Thread overview: 23+ messages
2018-07-04 13:32 [PATCH v10 00/11] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-07-04 13:32 ` [PATCH v10 01/11] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-05  8:41   ` Paul Durrant
2018-07-05  8:54     ` Alexandru Stefan ISAILA
2018-07-05  9:52       ` Jan Beulich
2018-07-04 13:32 ` [PATCH v10 02/11] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-07-05  8:43   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 03/11] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-07-05  8:44   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 04/11] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-07-05  8:47   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 05/11] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-07-05  8:53   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 06/11] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-07-05  8:56   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-07-05  8:25   ` Paul Durrant
2018-07-04 13:32 ` [PATCH v10 08/11] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-07-05  8:33   ` Paul Durrant
2018-07-05  8:48     ` Alexandru Stefan ISAILA
2018-07-04 13:32 ` [PATCH v10 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-07-04 13:32 ` [PATCH v10 10/11] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-07-04 13:32 ` [PATCH v10 11/11] x86/hvm: Remove save_one handler Alexandru Isaila
