* [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance
@ 2018-06-07 14:59 Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (14 more replies)
  0 siblings, 15 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel; +Cc: wei.liu2, paul.durrant, ian.jackson, jbeulich, andrew.cooper3

Hi all,

This patch series addresses the idea of saving data from a single vcpu instance.
First it adds *save_one functions, then it removes the for loops from the *save
functions and merges them with the *save_one functions. In the final patch, the
hvm_save and hvm_save_one functions are changed to make use of the new *save
functions.
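
In outline, the refactoring has the following shape (a minimal sketch using a
hypothetical FOO record type, not code from the series itself):

    /* Before: each save handler iterates over every vcpu itself. */
    static int foo_save(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
        {
            struct hvm_foo_vcpu ctxt = { /* gather per-vcpu state */ };

            if ( hvm_save_entry(FOO_VCPU, v->vcpu_id, h, &ctxt) )
                return 1;
        }

        return 0;
    }

    /* After: the per-vcpu work lives in a *_save_one handler, and the
     * caller (hvm_save/hvm_save_one) decides which vcpus to visit. */
    static int foo_save_one(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_foo_vcpu ctxt = { /* gather per-vcpu state */ };

        return hvm_save_entry(FOO_VCPU, v->vcpu_id, h, &ctxt);
    }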

Cheers,

Alexandru Isaila (15):

 x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
 x86/hvm: Introduce hvm_save_tsc_adjust_one() func
 x86/hvm: Introduce hvm_save_cpu_ctxt_one func
 x86/hvm: Introduce hvm_save_cpu_xsave_states_one
 x86/hvm: Introduce hvm_save_cpu_msrs_one func
 x86/hvm: Introduce hvm_save_mtrr_msr_one func
 x86/hvm: Introduce viridian_save_vcpu_ctxt_one()
 x86/cpu: Remove loop from vmce_save_vcpu_ctxt() func
 x86/hvm: Remove loop from hvm_save_tsc_adjust() func
 x86/hvm: Remove loop from hvm_save_cpu_ctxt func
 x86/hvm: Remove loop from hvm_save_cpu_xsave_states
 x86/hvm: Remove loop from hvm_save_cpu_msrs func
 x86/hvm: Remove loop from hvm_save_mtrr_msr func
 x86/hvm: Remove loop from viridian_save_vcpu_ctxt()
 x86/domctl: Don't pause the whole domain if only getting vcpu state

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08 14:45   ` Jan Beulich
  2018-06-07 14:59 ` [PATCH v7 02/15] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index e07cd2f..404f27e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,6 +349,14 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
+static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_vmce_vcpu *ctxt)
+{
+    ctxt->caps = v->arch.vmce.mcg_cap;
+    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
+    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
+    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
+}
+
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -356,13 +364,9 @@ static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        struct hvm_vmce_vcpu ctxt = {
-            .caps = v->arch.vmce.mcg_cap,
-            .mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2,
-            .mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2,
-            .mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl,
-        };
+        struct hvm_vmce_vcpu ctxt;
 
+        vmce_save_vcpu_ctxt_one(v, &ctxt);
         err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
         if ( err )
             break;
-- 
2.7.4
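
For illustration, here is how the new helper could back a single-vcpu save
path (save_vmce_for() is a hypothetical caller; the series only wires this
up in the final patch):

    static int save_vmce_for(struct vcpu *v, hvm_domain_context_t *h)
    {
        struct hvm_vmce_vcpu ctxt;

        vmce_save_vcpu_ctxt_one(v, &ctxt);  /* fills all four fields */
        return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
    }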


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 02/15] x86/hvm: Introduce hvm_save_tsc_adjust_one() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index c23983c..76f7db9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,6 +740,11 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
+static void hvm_save_tsc_adjust_one(struct vcpu *v, struct hvm_tsc_adjust *ctxt)
+{
+    ctxt->tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+}
+
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -748,7 +753,7 @@ static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+        hvm_save_tsc_adjust_one(v, &ctxt);
         err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
         if ( err )
             break;
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 02/15] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08 14:47   ` Jan Beulich
  2018-06-07 14:59 ` [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 195 +++++++++++++++++++++++++------------------------
 1 file changed, 100 insertions(+), 95 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 76f7db9..b254378 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -785,11 +785,109 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
+static void hvm_save_cpu_ctxt_one(struct vcpu *v, struct hvm_hw_cpu *ctxt)
+{
+    struct segment_register seg;
+
+    /* Architecture-specific vmcs/vmcb bits */
+    hvm_funcs.save_cpu_ctxt(v, ctxt);
+
+    ctxt->tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+
+    ctxt->msr_tsc_aux = hvm_msr_tsc_aux(v);
+
+    hvm_get_segment_register(v, x86_seg_idtr, &seg);
+    ctxt->idtr_limit = seg.limit;
+    ctxt->idtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_gdtr, &seg);
+    ctxt->gdtr_limit = seg.limit;
+    ctxt->gdtr_base = seg.base;
+
+    hvm_get_segment_register(v, x86_seg_cs, &seg);
+    ctxt->cs_sel = seg.sel;
+    ctxt->cs_limit = seg.limit;
+    ctxt->cs_base = seg.base;
+    ctxt->cs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ds, &seg);
+    ctxt->ds_sel = seg.sel;
+    ctxt->ds_limit = seg.limit;
+    ctxt->ds_base = seg.base;
+    ctxt->ds_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_es, &seg);
+    ctxt->es_sel = seg.sel;
+    ctxt->es_limit = seg.limit;
+    ctxt->es_base = seg.base;
+    ctxt->es_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ss, &seg);
+    ctxt->ss_sel = seg.sel;
+    ctxt->ss_limit = seg.limit;
+    ctxt->ss_base = seg.base;
+    ctxt->ss_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_fs, &seg);
+    ctxt->fs_sel = seg.sel;
+    ctxt->fs_limit = seg.limit;
+    ctxt->fs_base = seg.base;
+    ctxt->fs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_gs, &seg);
+    ctxt->gs_sel = seg.sel;
+    ctxt->gs_limit = seg.limit;
+    ctxt->gs_base = seg.base;
+    ctxt->gs_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_tr, &seg);
+    ctxt->tr_sel = seg.sel;
+    ctxt->tr_limit = seg.limit;
+    ctxt->tr_base = seg.base;
+    ctxt->tr_arbytes = seg.attr;
+
+    hvm_get_segment_register(v, x86_seg_ldtr, &seg);
+    ctxt->ldtr_sel = seg.sel;
+    ctxt->ldtr_limit = seg.limit;
+    ctxt->ldtr_base = seg.base;
+    ctxt->ldtr_arbytes = seg.attr;
+
+    if ( v->fpu_initialised )
+    {
+        memcpy(ctxt->fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt->fpu_regs));
+        ctxt->flags = XEN_X86_FPU_INITIALISED;
+    }
+
+    ctxt->rax = v->arch.user_regs.rax;
+    ctxt->rbx = v->arch.user_regs.rbx;
+    ctxt->rcx = v->arch.user_regs.rcx;
+    ctxt->rdx = v->arch.user_regs.rdx;
+    ctxt->rbp = v->arch.user_regs.rbp;
+    ctxt->rsi = v->arch.user_regs.rsi;
+    ctxt->rdi = v->arch.user_regs.rdi;
+    ctxt->rsp = v->arch.user_regs.rsp;
+    ctxt->rip = v->arch.user_regs.rip;
+    ctxt->rflags = v->arch.user_regs.rflags;
+    ctxt->r8  = v->arch.user_regs.r8;
+    ctxt->r9  = v->arch.user_regs.r9;
+    ctxt->r10 = v->arch.user_regs.r10;
+    ctxt->r11 = v->arch.user_regs.r11;
+    ctxt->r12 = v->arch.user_regs.r12;
+    ctxt->r13 = v->arch.user_regs.r13;
+    ctxt->r14 = v->arch.user_regs.r14;
+    ctxt->r15 = v->arch.user_regs.r15;
+    ctxt->dr0 = v->arch.debugreg[0];
+    ctxt->dr1 = v->arch.debugreg[1];
+    ctxt->dr2 = v->arch.debugreg[2];
+    ctxt->dr3 = v->arch.debugreg[3];
+    ctxt->dr6 = v->arch.debugreg[6];
+    ctxt->dr7 = v->arch.debugreg[7];
+}
+
 static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
     struct hvm_hw_cpu ctxt;
-    struct segment_register seg;
 
     for_each_vcpu ( d, v )
     {
@@ -799,100 +897,7 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
             continue;
 
         memset(&ctxt, 0, sizeof(ctxt));
-
-        /* Architecture-specific vmcs/vmcb bits */
-        hvm_funcs.save_cpu_ctxt(v, &ctxt);
-
-        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
-
-        ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
-
-        hvm_get_segment_register(v, x86_seg_idtr, &seg);
-        ctxt.idtr_limit = seg.limit;
-        ctxt.idtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-        ctxt.gdtr_limit = seg.limit;
-        ctxt.gdtr_base = seg.base;
-
-        hvm_get_segment_register(v, x86_seg_cs, &seg);
-        ctxt.cs_sel = seg.sel;
-        ctxt.cs_limit = seg.limit;
-        ctxt.cs_base = seg.base;
-        ctxt.cs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ds, &seg);
-        ctxt.ds_sel = seg.sel;
-        ctxt.ds_limit = seg.limit;
-        ctxt.ds_base = seg.base;
-        ctxt.ds_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_es, &seg);
-        ctxt.es_sel = seg.sel;
-        ctxt.es_limit = seg.limit;
-        ctxt.es_base = seg.base;
-        ctxt.es_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ss, &seg);
-        ctxt.ss_sel = seg.sel;
-        ctxt.ss_limit = seg.limit;
-        ctxt.ss_base = seg.base;
-        ctxt.ss_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_fs, &seg);
-        ctxt.fs_sel = seg.sel;
-        ctxt.fs_limit = seg.limit;
-        ctxt.fs_base = seg.base;
-        ctxt.fs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_gs, &seg);
-        ctxt.gs_sel = seg.sel;
-        ctxt.gs_limit = seg.limit;
-        ctxt.gs_base = seg.base;
-        ctxt.gs_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_tr, &seg);
-        ctxt.tr_sel = seg.sel;
-        ctxt.tr_limit = seg.limit;
-        ctxt.tr_base = seg.base;
-        ctxt.tr_arbytes = seg.attr;
-
-        hvm_get_segment_register(v, x86_seg_ldtr, &seg);
-        ctxt.ldtr_sel = seg.sel;
-        ctxt.ldtr_limit = seg.limit;
-        ctxt.ldtr_base = seg.base;
-        ctxt.ldtr_arbytes = seg.attr;
-
-        if ( v->fpu_initialised )
-        {
-            memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
-            ctxt.flags = XEN_X86_FPU_INITIALISED;
-        }
-
-        ctxt.rax = v->arch.user_regs.rax;
-        ctxt.rbx = v->arch.user_regs.rbx;
-        ctxt.rcx = v->arch.user_regs.rcx;
-        ctxt.rdx = v->arch.user_regs.rdx;
-        ctxt.rbp = v->arch.user_regs.rbp;
-        ctxt.rsi = v->arch.user_regs.rsi;
-        ctxt.rdi = v->arch.user_regs.rdi;
-        ctxt.rsp = v->arch.user_regs.rsp;
-        ctxt.rip = v->arch.user_regs.rip;
-        ctxt.rflags = v->arch.user_regs.rflags;
-        ctxt.r8  = v->arch.user_regs.r8;
-        ctxt.r9  = v->arch.user_regs.r9;
-        ctxt.r10 = v->arch.user_regs.r10;
-        ctxt.r11 = v->arch.user_regs.r11;
-        ctxt.r12 = v->arch.user_regs.r12;
-        ctxt.r13 = v->arch.user_regs.r13;
-        ctxt.r14 = v->arch.user_regs.r14;
-        ctxt.r15 = v->arch.user_regs.r15;
-        ctxt.dr0 = v->arch.debugreg[0];
-        ctxt.dr1 = v->arch.debugreg[1];
-        ctxt.dr2 = v->arch.debugreg[2];
-        ctxt.dr3 = v->arch.debugreg[3];
-        ctxt.dr6 = v->arch.debugreg[6];
-        ctxt.dr7 = v->arch.debugreg[7];
+        hvm_save_cpu_ctxt_one(v, &ctxt);
 
         if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
             return 1; 
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (2 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08 14:49   ` Jan Beulich
  2018-06-07 14:59 ` [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b254378..e8ecabf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1183,6 +1183,13 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
+static void hvm_save_cpu_xsave_states_one(struct vcpu *v, struct hvm_hw_cpu_xsave *ctxt)
+{
+    ctxt->xfeature_mask = xfeature_mask;
+    ctxt->xcr0 = v->arch.xcr0;
+    ctxt->xcr0_accum = v->arch.xcr0_accum;
+}
+
 static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -1202,9 +1209,7 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
         ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
         h->cur += size;
 
-        ctxt->xfeature_mask = xfeature_mask;
-        ctxt->xcr0 = v->arch.xcr0;
-        ctxt->xcr0_accum = v->arch.xcr0_accum;
+        hvm_save_cpu_xsave_states_one(v, ctxt);
         expand_xsave_states(v, &ctxt->save_area,
                             size - offsetof(typeof(*ctxt), save_area));
     }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (3 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08 14:50   ` Jan Beulich
  2018-06-07 14:59 ` [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V5:
	- Check the return value of hvm_save_cpu_msrs_one()
---
 xen/arch/x86/hvm/hvm.c | 60 ++++++++++++++++++++++++++++----------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e8ecabf..7e90bf2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1354,6 +1354,38 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
+static int hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr *ctxt)
+{
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
+    {
+        uint64_t val;
+        int rc = guest_rdmsr(v, msrs_to_send[i], &val);
+
+        /*
+         * It is the programmers responsibility to ensure that
+         * msrs_to_send[] contain generally-read/write MSRs.
+         * X86EMUL_EXCEPTION here implies a missing feature, and that the
+         * guest doesn't have access to the MSR.
+         */
+        if ( rc == X86EMUL_EXCEPTION )
+            continue;
+
+        if ( rc != X86EMUL_OKAY )
+        {
+            ASSERT_UNREACHABLE();
+            return -ENXIO;
+        }
+
+        if ( !val )
+            continue; /* Skip empty MSRs. */
+        ctxt->msr[ctxt->count].index = msrs_to_send[i];
+        ctxt->msr[ctxt->count++].val = val;
+    }
+    return 0;
+}
+
 static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -1370,32 +1402,8 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
         ctxt = (struct hvm_msr *)&h->data[h->cur];
         ctxt->count = 0;
 
-        for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
-        {
-            uint64_t val;
-            int rc = guest_rdmsr(v, msrs_to_send[i], &val);
-
-            /*
-             * It is the programmers responsibility to ensure that
-             * msrs_to_send[] contain generally-read/write MSRs.
-             * X86EMUL_EXCEPTION here implies a missing feature, and that the
-             * guest doesn't have access to the MSR.
-             */
-            if ( rc == X86EMUL_EXCEPTION )
-                continue;
-
-            if ( rc != X86EMUL_OKAY )
-            {
-                ASSERT_UNREACHABLE();
-                return -ENXIO;
-            }
-
-            if ( !val )
-                continue; /* Skip empty MSRs. */
-
-            ctxt->msr[ctxt->count].index = msrs_to_send[i];
-            ctxt->msr[ctxt->count++].val = val;
-        }
+        if ( hvm_save_cpu_msrs_one(v, ctxt) == -ENXIO )
+            return -ENXIO;
 
         if ( hvm_funcs.save_msr )
             hvm_funcs.save_msr(v, ctxt);
-- 
2.7.4
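
The return-code contract the new helper relies on can be summarised as
follows (a toy restatement, not Xen code; guest_rdmsr() and the X86EMUL_*
values are real names, classify_msr_read() is illustrative):

    static int classify_msr_read(struct vcpu *v, uint32_t msr,
                                 struct hvm_msr *ctxt)
    {
        uint64_t val;
        int rc = guest_rdmsr(v, msr, &val);

        if ( rc == X86EMUL_EXCEPTION )
            return 0;           /* MSR not exposed to this guest: skip */

        if ( rc != X86EMUL_OKAY )
            return -ENXIO;      /* bad entry in msrs_to_send[]: abort */

        if ( val )              /* zero values are not worth saving */
        {
            ctxt->msr[ctxt->count].index = msr;
            ctxt->msr[ctxt->count++].val = val;
        }

        return 0;
    }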


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (4 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08 14:57   ` Jan Beulich
  2018-06-07 14:59 ` [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/mtrr.c | 52 +++++++++++++++++++++++++++----------------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index b721c63..d311031 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -666,36 +666,42 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static void hvm_save_mtrr_msr_one(struct vcpu *v, struct hvm_hw_mtrr *hw_mtrr)
 {
+    struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     int i;
-    struct vcpu *v;
-    struct hvm_hw_mtrr hw_mtrr;
-    struct mtrr_state *mtrr_state;
-    /* save mtrr&pat */
-    for_each_vcpu(d, v)
+
+    hvm_get_guest_pat(v, &hw_mtrr->msr_pat_cr);
+
+    hw_mtrr->msr_mtrr_def_type = mtrr_state->def_type
+                            | (mtrr_state->enabled << 10);
+    hw_mtrr->msr_mtrr_cap = mtrr_state->mtrr_cap;
+
+    for ( i = 0; i < MTRR_VCNT; i++ )
     {
-        mtrr_state = &v->arch.hvm_vcpu.mtrr;
+        /* save physbase */
+        hw_mtrr->msr_mtrr_var[i*2] =
+            ((uint64_t*)mtrr_state->var_ranges)[i*2];
+        /* save physmask */
+        hw_mtrr->msr_mtrr_var[i*2+1] =
+            ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
+    }
 
-        hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
+    for ( i = 0; i < NUM_FIXED_MSR; i++ )
+        hw_mtrr->msr_mtrr_fixed[i] =
+            ((uint64_t*)mtrr_state->fixed_ranges)[i];
 
-        hw_mtrr.msr_mtrr_def_type = mtrr_state->def_type
-                                | (mtrr_state->enabled << 10);
-        hw_mtrr.msr_mtrr_cap = mtrr_state->mtrr_cap;
+}
 
-        for ( i = 0; i < MTRR_VCNT; i++ )
-        {
-            /* save physbase */
-            hw_mtrr.msr_mtrr_var[i*2] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2];
-            /* save physmask */
-            hw_mtrr.msr_mtrr_var[i*2+1] =
-                ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
-        }
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+    struct hvm_hw_mtrr hw_mtrr;
+    /* save mtrr&pat */
 
-        for ( i = 0; i < NUM_FIXED_MSR; i++ )
-            hw_mtrr.msr_mtrr_fixed[i] =
-                ((uint64_t*)mtrr_state->fixed_ranges)[i];
+    for_each_vcpu(d, v)
+    {
+        hvm_save_mtrr_msr_one(v, &hw_mtrr);
 
         if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
             return 1;
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (5 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08  8:27   ` Paul Durrant
  2018-06-07 14:59 ` [PATCH v7 08/15] x86/cpu: Remove loop from vmce_save_vcpu_ctxt() func Alexandru Isaila
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This is used to save data from a single vcpu instance.

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

---
Changes since V6:
	- Add memset to 0 for ctxt
---
 xen/arch/x86/hvm/viridian.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 694eae6..bab606e 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,6 +1026,13 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
+static void viridian_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
+{
+    memset(ctxt, 0, sizeof(*ctxt));
+    ctxt->vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+    ctxt->vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
+}
+
 static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct vcpu *v;
@@ -1034,10 +1041,9 @@ static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
         return 0;
 
     for_each_vcpu( d, v ) {
-        struct hvm_viridian_vcpu_context ctxt = {
-            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
-        };
+        struct hvm_viridian_vcpu_context ctxt;
+
+        viridian_save_vcpu_ctxt_one(v, &ctxt);
 
         if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
             return 1;
-- 
2.7.4
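
The memset matters because the record is copied byte-for-byte into the save
stream: a braced initializer zeroes the named and remaining members, but the
C standard leaves padding bytes of automatic variables unspecified. A generic
illustration (plain C, not Xen code):

    #include <string.h>

    struct record {
        unsigned char pending;  /* 1 byte ...                       */
        /* ... 7 padding bytes inserted by the compiler ...         */
        unsigned long long msr; /* ... then an 8-byte-aligned field */
    };

    void fill(struct record *r)
    {
        memset(r, 0, sizeof(*r));   /* zeroes the padding too */
        r->pending = 1;
        r->msr = 0x12345678ULL;
        /* All sizeof(*r) bytes are now deterministic, so the struct
         * can safely be memcpy'd into a migration stream. */
    }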


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 08/15] x86/cpu: Remove loop from vmce_save_vcpu_ctxt() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (6 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-08  8:33   ` Paul Durrant
  2018-06-07 14:59 ` [PATCH v7 09/15] x86/hvm: Remove loop from hvm_save_tsc_adjust() func Alexandru Isaila
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c | 27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 404f27e..ead1f73 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,30 +349,17 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
-static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_vmce_vcpu *ctxt)
-{
-    ctxt->caps = v->arch.vmce.mcg_cap;
-    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
-    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
-    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
-}
-
 static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
-    int err = 0;
-
-    for_each_vcpu ( d, v )
-    {
-        struct hvm_vmce_vcpu ctxt;
+    struct hvm_vmce_vcpu ctxt;
+    struct vcpu *v = NULL;
 
-        vmce_save_vcpu_ctxt_one(v, &ctxt);
-        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
-        if ( err )
-            break;
-    }
+    ctxt.caps = v->arch.vmce.mcg_cap;
+    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
+    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
+    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
 
-    return err;
+    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
 }
 
 static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 09/15] x86/hvm: Remove loop from hvm_save_tsc_adjust() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (7 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 08/15] x86/cpu: Remove loop from vmce_save_vcpu_ctxt() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 10/15] x86/hvm: Remove loop from hvm_save_cpu_ctxt func Alexandru Isaila
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7e90bf2..701e81c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,26 +740,13 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
-static void hvm_save_tsc_adjust_one(struct vcpu *v, struct hvm_tsc_adjust *ctxt)
-{
-    ctxt->tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
-}
-
 static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct vcpu *v = NULL;
     struct hvm_tsc_adjust ctxt;
-    int err = 0;
 
-    for_each_vcpu ( d, v )
-    {
-        hvm_save_tsc_adjust_one(v, &ctxt);
-        err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
-        if ( err )
-            break;
-    }
-
-    return err;
+    ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+    return hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
 }
 
 static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 10/15] x86/hvm: Remove loop from hvm_save_cpu_ctxt func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (8 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 09/15] x86/hvm: Remove loop from hvm_save_tsc_adjust() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 11/15] x86/hvm: Remove loop from hvm_save_cpu_xsave_states Alexandru Isaila
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c            | 166 ++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/support.h |   2 +
 2 files changed, 80 insertions(+), 88 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 701e81c..38e5e96 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -772,123 +772,113 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static void hvm_save_cpu_ctxt_one(struct vcpu *v, struct hvm_hw_cpu *ctxt)
+static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct segment_register seg;
+    struct hvm_hw_cpu ctxt = {};
+    struct vcpu *v = NULL;
+
+    /* We don't need to save state for a vcpu that is down; the restore
+     * code will leave it down if there is nothing saved. */
+    if ( v->pause_flags & VPF_down )
+        return CONTINUE;
 
     /* Architecture-specific vmcs/vmcb bits */
-    hvm_funcs.save_cpu_ctxt(v, ctxt);
+    hvm_funcs.save_cpu_ctxt(v, &ctxt);
 
-    ctxt->tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
+    ctxt.tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc);
 
-    ctxt->msr_tsc_aux = hvm_msr_tsc_aux(v);
+    ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
 
     hvm_get_segment_register(v, x86_seg_idtr, &seg);
-    ctxt->idtr_limit = seg.limit;
-    ctxt->idtr_base = seg.base;
+    ctxt.idtr_limit = seg.limit;
+    ctxt.idtr_base = seg.base;
 
     hvm_get_segment_register(v, x86_seg_gdtr, &seg);
-    ctxt->gdtr_limit = seg.limit;
-    ctxt->gdtr_base = seg.base;
+    ctxt.gdtr_limit = seg.limit;
+    ctxt.gdtr_base = seg.base;
 
     hvm_get_segment_register(v, x86_seg_cs, &seg);
-    ctxt->cs_sel = seg.sel;
-    ctxt->cs_limit = seg.limit;
-    ctxt->cs_base = seg.base;
-    ctxt->cs_arbytes = seg.attr;
+    ctxt.cs_sel = seg.sel;
+    ctxt.cs_limit = seg.limit;
+    ctxt.cs_base = seg.base;
+    ctxt.cs_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_ds, &seg);
-    ctxt->ds_sel = seg.sel;
-    ctxt->ds_limit = seg.limit;
-    ctxt->ds_base = seg.base;
-    ctxt->ds_arbytes = seg.attr;
+    ctxt.ds_sel = seg.sel;
+    ctxt.ds_limit = seg.limit;
+    ctxt.ds_base = seg.base;
+    ctxt.ds_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_es, &seg);
-    ctxt->es_sel = seg.sel;
-    ctxt->es_limit = seg.limit;
-    ctxt->es_base = seg.base;
-    ctxt->es_arbytes = seg.attr;
+    ctxt.es_sel = seg.sel;
+    ctxt.es_limit = seg.limit;
+    ctxt.es_base = seg.base;
+    ctxt.es_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_ss, &seg);
-    ctxt->ss_sel = seg.sel;
-    ctxt->ss_limit = seg.limit;
-    ctxt->ss_base = seg.base;
-    ctxt->ss_arbytes = seg.attr;
+    ctxt.ss_sel = seg.sel;
+    ctxt.ss_limit = seg.limit;
+    ctxt.ss_base = seg.base;
+    ctxt.ss_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_fs, &seg);
-    ctxt->fs_sel = seg.sel;
-    ctxt->fs_limit = seg.limit;
-    ctxt->fs_base = seg.base;
-    ctxt->fs_arbytes = seg.attr;
+    ctxt.fs_sel = seg.sel;
+    ctxt.fs_limit = seg.limit;
+    ctxt.fs_base = seg.base;
+    ctxt.fs_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_gs, &seg);
-    ctxt->gs_sel = seg.sel;
-    ctxt->gs_limit = seg.limit;
-    ctxt->gs_base = seg.base;
-    ctxt->gs_arbytes = seg.attr;
+    ctxt.gs_sel = seg.sel;
+    ctxt.gs_limit = seg.limit;
+    ctxt.gs_base = seg.base;
+    ctxt.gs_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_tr, &seg);
-    ctxt->tr_sel = seg.sel;
-    ctxt->tr_limit = seg.limit;
-    ctxt->tr_base = seg.base;
-    ctxt->tr_arbytes = seg.attr;
+    ctxt.tr_sel = seg.sel;
+    ctxt.tr_limit = seg.limit;
+    ctxt.tr_base = seg.base;
+    ctxt.tr_arbytes = seg.attr;
 
     hvm_get_segment_register(v, x86_seg_ldtr, &seg);
-    ctxt->ldtr_sel = seg.sel;
-    ctxt->ldtr_limit = seg.limit;
-    ctxt->ldtr_base = seg.base;
-    ctxt->ldtr_arbytes = seg.attr;
+    ctxt.ldtr_sel = seg.sel;
+    ctxt.ldtr_limit = seg.limit;
+    ctxt.ldtr_base = seg.base;
+    ctxt.ldtr_arbytes = seg.attr;
 
     if ( v->fpu_initialised )
     {
-        memcpy(ctxt->fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt->fpu_regs));
-        ctxt->flags = XEN_X86_FPU_INITIALISED;
-    }
-
-    ctxt->rax = v->arch.user_regs.rax;
-    ctxt->rbx = v->arch.user_regs.rbx;
-    ctxt->rcx = v->arch.user_regs.rcx;
-    ctxt->rdx = v->arch.user_regs.rdx;
-    ctxt->rbp = v->arch.user_regs.rbp;
-    ctxt->rsi = v->arch.user_regs.rsi;
-    ctxt->rdi = v->arch.user_regs.rdi;
-    ctxt->rsp = v->arch.user_regs.rsp;
-    ctxt->rip = v->arch.user_regs.rip;
-    ctxt->rflags = v->arch.user_regs.rflags;
-    ctxt->r8  = v->arch.user_regs.r8;
-    ctxt->r9  = v->arch.user_regs.r9;
-    ctxt->r10 = v->arch.user_regs.r10;
-    ctxt->r11 = v->arch.user_regs.r11;
-    ctxt->r12 = v->arch.user_regs.r12;
-    ctxt->r13 = v->arch.user_regs.r13;
-    ctxt->r14 = v->arch.user_regs.r14;
-    ctxt->r15 = v->arch.user_regs.r15;
-    ctxt->dr0 = v->arch.debugreg[0];
-    ctxt->dr1 = v->arch.debugreg[1];
-    ctxt->dr2 = v->arch.debugreg[2];
-    ctxt->dr3 = v->arch.debugreg[3];
-    ctxt->dr6 = v->arch.debugreg[6];
-    ctxt->dr7 = v->arch.debugreg[7];
-}
-
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    struct hvm_hw_cpu ctxt;
-
-    for_each_vcpu ( d, v )
-    {
-        /* We don't need to save state for a vcpu that is down; the restore 
-         * code will leave it down if there is nothing saved. */
-        if ( v->pause_flags & VPF_down )
-            continue;
-
-        memset(&ctxt, 0, sizeof(ctxt));
-        hvm_save_cpu_ctxt_one(v, &ctxt);
-
-        if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1; 
-    }
+        memcpy(ctxt.fpu_regs, v->arch.fpu_ctxt, sizeof(ctxt.fpu_regs));
+        ctxt.flags = XEN_X86_FPU_INITIALISED;
+    }
+
+    ctxt.rax = v->arch.user_regs.rax;
+    ctxt.rbx = v->arch.user_regs.rbx;
+    ctxt.rcx = v->arch.user_regs.rcx;
+    ctxt.rdx = v->arch.user_regs.rdx;
+    ctxt.rbp = v->arch.user_regs.rbp;
+    ctxt.rsi = v->arch.user_regs.rsi;
+    ctxt.rdi = v->arch.user_regs.rdi;
+    ctxt.rsp = v->arch.user_regs.rsp;
+    ctxt.rip = v->arch.user_regs.rip;
+    ctxt.rflags = v->arch.user_regs.rflags;
+    ctxt.r8  = v->arch.user_regs.r8;
+    ctxt.r9  = v->arch.user_regs.r9;
+    ctxt.r10 = v->arch.user_regs.r10;
+    ctxt.r11 = v->arch.user_regs.r11;
+    ctxt.r12 = v->arch.user_regs.r12;
+    ctxt.r13 = v->arch.user_regs.r13;
+    ctxt.r14 = v->arch.user_regs.r14;
+    ctxt.r15 = v->arch.user_regs.r15;
+    ctxt.dr0 = v->arch.debugreg[0];
+    ctxt.dr1 = v->arch.debugreg[1];
+    ctxt.dr2 = v->arch.debugreg[2];
+    ctxt.dr3 = v->arch.debugreg[3];
+    ctxt.dr6 = v->arch.debugreg[6];
+    ctxt.dr7 = v->arch.debugreg[7];
+
+    if ( hvm_save_entry(CPU, v->vcpu_id, h, &ctxt) != 0 )
+        return 1;
     return 0;
 }
 
diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
index ac33eea..f8988e0 100644
--- a/xen/include/asm-x86/hvm/support.h
+++ b/xen/include/asm-x86/hvm/support.h
@@ -52,6 +52,8 @@ extern unsigned int opt_hvm_debug_level;
 #define HVM_DBG_LOG(level, _f, _a...) do {} while (0)
 #endif
 
+#define CONTINUE 2
+
 extern unsigned long hvm_io_bitmap[];
 
 enum hvm_translation_result {
-- 
2.7.4
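
CONTINUE is a sentinel return value: with the per-domain loop gone, a handler
needs a way to tell its caller "skip this vcpu, but keep going". A sketch of
how a caller loop would interpret it, assuming the per-vcpu handler signature
introduced by the final patch (save_for_all_vcpus() is illustrative):

    static int save_for_all_vcpus(struct domain *d, hvm_domain_context_t *h,
                                  int (*save)(struct vcpu *,
                                              hvm_domain_context_t *))
    {
        struct vcpu *v;

        for_each_vcpu ( d, v )
        {
            int rc = save(v, h);    /* e.g. hvm_save_cpu_ctxt(v, h) */

            if ( rc == CONTINUE )
                continue;           /* vcpu skipped, e.g. offline */
            if ( rc != 0 )
                return rc;          /* genuine failure: abort the save */
        }

        return 0;
    }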


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 11/15] x86/hvm: Remove loop from hvm_save_cpu_xsave_states
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (9 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 10/15] x86/hvm: Remove loop from hvm_save_cpu_ctxt func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 12/15] x86/hvm: Remove loop from hvm_save_cpu_msrs func Alexandru Isaila
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 35 +++++++++++++----------------------
 1 file changed, 13 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 38e5e96..2542cbd 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1160,36 +1160,27 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static void hvm_save_cpu_xsave_states_one(struct vcpu *v, struct hvm_hw_cpu_xsave *ctxt)
-{
-    ctxt->xfeature_mask = xfeature_mask;
-    ctxt->xcr0 = v->arch.xcr0;
-    ctxt->xcr0_accum = v->arch.xcr0_accum;
-}
-
 static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct vcpu *v = NULL;
     struct hvm_hw_cpu_xsave *ctxt;
+    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
 
     if ( !cpu_has_xsave )
         return 0;   /* do nothing */
 
-    for_each_vcpu ( d, v )
-    {
-        unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
-
-        if ( !xsave_enabled(v) )
-            continue;
-        if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
-            return 1;
-        ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
-        h->cur += size;
+    if ( !xsave_enabled(v) )
+        return CONTINUE;
+    if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
+        return 1;
+    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
+    h->cur += size;
+    ctxt->xfeature_mask = xfeature_mask;
+    ctxt->xcr0 = v->arch.xcr0;
+    ctxt->xcr0_accum = v->arch.xcr0_accum;
 
-        hvm_save_cpu_xsave_states_one(v, ctxt);
-        expand_xsave_states(v, &ctxt->save_area,
-                            size - offsetof(typeof(*ctxt), save_area));
-    }
+    expand_xsave_states(v, &ctxt->save_area,
+                        size - offsetof(typeof(*ctxt), save_area));
 
     return 0;
 }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 12/15] x86/hvm: Remove loop from hvm_save_cpu_msrs func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (10 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 11/15] x86/hvm: Remove loop from hvm_save_cpu_xsave_states Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 13/15] x86/hvm: Remove loop from hvm_save_mtrr_msr func Alexandru Isaila
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/hvm.c | 58 +++++++++++++++++++-------------------------------
 1 file changed, 22 insertions(+), 36 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2542cbd..a88efeb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1322,10 +1322,18 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr *ctxt)
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
 {
+    struct vcpu *v = NULL;
+    struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
+    struct hvm_msr *ctxt;
     unsigned int i;
 
+    if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                         HVM_CPU_MSR_SIZE(msr_count_max)) )
+        return 1;
+    ctxt = (struct hvm_msr *)&h->data[h->cur];
+    ctxt->count = 0;
     for ( i = 0; i < ARRAY_SIZE(msrs_to_send); ++i )
     {
         uint64_t val;
@@ -1351,46 +1359,24 @@ static int hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr *ctxt)
         ctxt->msr[ctxt->count].index = msrs_to_send[i];
         ctxt->msr[ctxt->count++].val = val;
     }
-    return 0;
-}
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
+    if ( hvm_funcs.save_msr )
+        hvm_funcs.save_msr(v, ctxt);
 
-    for_each_vcpu ( d, v )
-    {
-        struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
-        struct hvm_msr *ctxt;
-        unsigned int i;
+    ASSERT(ctxt->count <= msr_count_max);
 
-        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
-                             HVM_CPU_MSR_SIZE(msr_count_max)) )
-            return 1;
-        ctxt = (struct hvm_msr *)&h->data[h->cur];
-        ctxt->count = 0;
-
-        if ( hvm_save_cpu_msrs_one(v, ctxt) == -ENXIO )
-            return -ENXIO;
-
-        if ( hvm_funcs.save_msr )
-            hvm_funcs.save_msr(v, ctxt);
-
-        ASSERT(ctxt->count <= msr_count_max);
-
-        for ( i = 0; i < ctxt->count; ++i )
-            ctxt->msr[i]._rsvd = 0;
+    for ( i = 0; i < ctxt->count; ++i )
+        ctxt->msr[i]._rsvd = 0;
 
-        if ( ctxt->count )
-        {
-            /* Rewrite length to indicate how much space we actually used. */
-            desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
-            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
-        }
-        else
-            /* or rewind and remove the descriptor from the stream. */
-            h->cur -= sizeof(struct hvm_save_descriptor);
+    if ( ctxt->count )
+    {
+        /* Rewrite length to indicate how much space we actually used. */
+        desc->length = HVM_CPU_MSR_SIZE(ctxt->count);
+        h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
     }
+    else
+        /* or rewind and remove the descriptor from the stream. */
+        h->cur -= sizeof(struct hvm_save_descriptor);
 
     return 0;
 }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 13/15] x86/hvm: Remove loop from hvm_save_mtrr_msr func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (11 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 12/15] x86/hvm: Remove loop from hvm_save_cpu_msrs func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 14/15] x86/hvm: Remove loop from viridian_save_vcpu_ctxt() func Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 15/15] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/mtrr.c | 33 ++++++++++++---------------------
 1 file changed, 12 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index d311031..4c1e850 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -666,46 +666,37 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static void hvm_save_mtrr_msr_one(struct vcpu *v, struct hvm_hw_mtrr *hw_mtrr)
+static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
 {
+    struct vcpu *v = NULL;
+    struct hvm_hw_mtrr hw_mtrr;
     struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     int i;
+    /* save mtrr&pat */
 
-    hvm_get_guest_pat(v, &hw_mtrr->msr_pat_cr);
+    hvm_get_guest_pat(v, &hw_mtrr.msr_pat_cr);
 
-    hw_mtrr->msr_mtrr_def_type = mtrr_state->def_type
+    hw_mtrr.msr_mtrr_def_type = mtrr_state->def_type
                             | (mtrr_state->enabled << 10);
-    hw_mtrr->msr_mtrr_cap = mtrr_state->mtrr_cap;
+    hw_mtrr.msr_mtrr_cap = mtrr_state->mtrr_cap;
 
     for ( i = 0; i < MTRR_VCNT; i++ )
     {
         /* save physbase */
-        hw_mtrr->msr_mtrr_var[i*2] =
+        hw_mtrr.msr_mtrr_var[i*2] =
             ((uint64_t*)mtrr_state->var_ranges)[i*2];
         /* save physmask */
-        hw_mtrr->msr_mtrr_var[i*2+1] =
+        hw_mtrr.msr_mtrr_var[i*2+1] =
             ((uint64_t*)mtrr_state->var_ranges)[i*2+1];
     }
 
     for ( i = 0; i < NUM_FIXED_MSR; i++ )
-        hw_mtrr->msr_mtrr_fixed[i] =
+        hw_mtrr.msr_mtrr_fixed[i] =
             ((uint64_t*)mtrr_state->fixed_ranges)[i];
 
-}
-
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
-{
-    struct vcpu *v;
-    struct hvm_hw_mtrr hw_mtrr;
-    /* save mtrr&pat */
-
-    for_each_vcpu(d, v)
-    {
-        hvm_save_mtrr_msr_one(v, &hw_mtrr);
+    if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
+        return 1;
 
-        if ( hvm_save_entry(MTRR, v->vcpu_id, h, &hw_mtrr) != 0 )
-            return 1;
-    }
     return 0;
 }
 
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 14/15] x86/hvm: Remove loop from viridian_save_vcpu_ctxt() func
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (12 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 13/15] x86/hvm: Remove loop from hvm_save_mtrr_msr func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  2018-06-07 14:59 ` [PATCH v7 15/15] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/hvm/viridian.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index bab606e..86a43ee 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -1026,28 +1026,20 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
-static void viridian_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
-{
-    memset(ctxt, 0, sizeof(*ctxt));
-    ctxt->vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
-    ctxt->vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
-}
 
 static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct vcpu *v = NULL;
+    struct hvm_viridian_vcpu_context ctxt = {
+        .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
+        .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+    };
 
     if ( !is_viridian_domain(d) )
         return 0;
 
-    for_each_vcpu( d, v ) {
-        struct hvm_viridian_vcpu_context ctxt;
-
-        viridian_save_vcpu_ctxt_one(v, &ctxt);
-
-        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
-            return 1;
-    }
+    if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
+        return 1;
 
     return 0;
 }
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v7 15/15] x86/domctl: Don't pause the whole domain if only getting vcpu state
  2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
                   ` (13 preceding siblings ...)
  2018-06-07 14:59 ` [PATCH v7 14/15] x86/hvm: Remove loop from viridian_save_vcpu_ctxt() func Alexandru Isaila
@ 2018-06-07 14:59 ` Alexandru Isaila
  14 siblings, 0 replies; 30+ messages in thread
From: Alexandru Isaila @ 2018-06-07 14:59 UTC (permalink / raw)
  To: xen-devel
  Cc: wei.liu2, andrew.cooper3, ian.jackson, paul.durrant, jbeulich,
	Alexandru Isaila

This patch moves the for loop to the caller, so that info can now be
saved for a single vcpu instance.
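
In effect the save-handler type changes from taking a domain to taking a
vcpu, and the callers own the iteration and pausing policy. A condensed
sketch (save_single() is illustrative; the real changes to hvm_save() and
hvm_save_one() are in the diff below):

    typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
                                         hvm_domain_context_t *h);

    static int save_single(struct domain *d, unsigned int vcpu_id,
                           hvm_save_vcpu_handler handler,
                           hvm_domain_context_t *h)
    {
        struct vcpu *v = d->vcpu[vcpu_id];
        int rc;

        vcpu_pause(v);      /* pause only this vcpu, not the whole domain */
        rc = handler(v, h);
        vcpu_unpause(v);

        return rc;
    }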

Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c |   3 +-
 xen/arch/x86/hvm/hpet.c        |   3 +-
 xen/arch/x86/hvm/hvm.c         |  12 ++---
 xen/arch/x86/hvm/i8254.c       |   3 +-
 xen/arch/x86/hvm/irq.c         |   9 ++--
 xen/arch/x86/hvm/mtrr.c        |   3 +-
 xen/arch/x86/hvm/pmtimer.c     |   3 +-
 xen/arch/x86/hvm/rtc.c         |   3 +-
 xen/arch/x86/hvm/save.c        | 118 ++++++++++++++++++++++++++++++-----------
 xen/arch/x86/hvm/vioapic.c     |   3 +-
 xen/arch/x86/hvm/viridian.c    |   7 +--
 xen/arch/x86/hvm/vlapic.c      |  34 ++++--------
 xen/arch/x86/hvm/vpic.c        |   3 +-
 xen/include/asm-x86/hvm/save.h |   2 +-
 14 files changed, 125 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index ead1f73..88541b7 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -349,10 +349,9 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
     return ret;
 }
 
-static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int vmce_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct hvm_vmce_vcpu ctxt;
-    struct vcpu *v = NULL;
 
     ctxt.caps = v->arch.vmce.mcg_cap;
     ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 2837709..3ed6547 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
 };
 
 
-static int hpet_save(struct domain *d, hvm_domain_context_t *h)
+static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)
 {
+    struct domain *d = vcpu->domain;
     HPETState *hp = domain_vhpet(d);
     struct vcpu *v = pt_global_vcpu_target(d);
     int rc;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a88efeb..70d90cc 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -740,9 +740,8 @@ void hvm_domain_destroy(struct domain *d)
     destroy_vpci_mmcfg(d);
 }
 
-static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v = NULL;
     struct hvm_tsc_adjust ctxt;
 
     ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
@@ -772,11 +771,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
                           hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
 
-static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
     struct segment_register seg;
     struct hvm_hw_cpu ctxt = {};
-    struct vcpu *v = NULL;
 
     /* We don't need to save state for a vcpu that is down; the restore
      * code will leave it down if there is nothing saved. */
@@ -1160,9 +1158,8 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
                                            save_area) + \
                                   xstate_ctxt_size(xcr0))
 
-static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v = NULL;
     struct hvm_hw_cpu_xsave *ctxt;
     unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
 
@@ -1322,9 +1319,8 @@ static const uint32_t msrs_to_send[] = {
 };
 static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
 
-static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_cpu_msrs(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v = NULL;
     struct hvm_save_descriptor *desc = _p(&h->data[h->cur]);
     struct hvm_msr *ctxt;
     unsigned int i;
diff --git a/xen/arch/x86/hvm/i8254.c b/xen/arch/x86/hvm/i8254.c
index 992f08d..e0d2255 100644
--- a/xen/arch/x86/hvm/i8254.c
+++ b/xen/arch/x86/hvm/i8254.c
@@ -390,8 +390,9 @@ void pit_stop_channel0_irq(PITState *pit)
     spin_unlock(&pit->lock);
 }
 
-static int pit_save(struct domain *d, hvm_domain_context_t *h)
+static int pit_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     PITState *pit = domain_vpit(d);
     int rc;
 
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index c85d004..72acb73 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -630,8 +630,9 @@ static int __init dump_irq_info_key_init(void)
 }
 __initcall(dump_irq_info_key_init);
 
-static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_pci(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
     unsigned int asserted, pdev, pintx;
     int rc;
@@ -662,16 +663,18 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
     return rc;
 }
 
-static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_isa(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save ISA IRQ lines */
     return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
 }
 
-static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
+static int irq_save_link(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_irq *hvm_irq = hvm_domain_irq(d);
 
     /* Save PCI-ISA link state */
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 4c1e850..ae73c78 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -666,9 +666,8 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     return 0;
 }
 
-static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
+static int hvm_save_mtrr_msr(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v = NULL;
     struct hvm_hw_mtrr hw_mtrr;
     struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
     int i;
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..d8dcbc2 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -249,8 +249,9 @@ static int handle_pmt_io(
     return X86EMUL_OKAY;
 }
 
-static int acpi_save(struct domain *d, hvm_domain_context_t *h)
+static int acpi_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
     PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
     uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cb75b99..58b70fc 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -737,8 +737,9 @@ void rtc_migrate_timers(struct vcpu *v)
 }
 
 /* Save RTC hardware state */
-static int rtc_save(struct domain *d, hvm_domain_context_t *h)
+static int rtc_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     RTCState *s = domain_vrtc(d);
     int rc;
 
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index 8984a23..69f0fff 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -135,9 +135,12 @@ size_t hvm_save_size(struct domain *d)
 int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
                  XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
 {
-    int rv;
+    int rv = 0;
     hvm_domain_context_t ctxt = { };
     const struct hvm_save_descriptor *desc;
+    bool is_single_instance = false;
+    uint32_t off = 0;
+    struct vcpu *v;
 
     if ( d->is_dying ||
          typecode > HVM_SAVE_CODE_MAX ||
@@ -146,42 +149,85 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
         return -EINVAL;
 
     ctxt.size = hvm_sr_handlers[typecode].size;
-    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
+         instance == d->max_vcpus )
         ctxt.size *= d->max_vcpus;
+    else if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
+        is_single_instance = true;
     ctxt.data = xmalloc_bytes(ctxt.size);
     if ( !ctxt.data )
         return -ENOMEM;
 
-    if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
-        printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
-               d->domain_id, typecode, rv);
-    else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
-    {
-        uint32_t off;
+    if ( is_single_instance )
+        vcpu_pause(d->vcpu[instance]);
+    else
+        domain_pause(d);

-    for ( off = 0; off <= (ctxt.cur - sizeof(*desc)); off += desc->length )
+    if ( is_single_instance )
+    {
+        if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
+                                                  &ctxt)) != 0 )
         {
-            desc = (void *)(ctxt.data + off);
+            printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+                   d->domain_id, typecode, rv);
+            vcpu_unpause(d->vcpu[instance]);
+        }
+        else if ( rv = -ENOENT, ctxt.cur >= sizeof(*desc) )
+        {
+            desc = (void *)ctxt.data;
             /* Move past header */
-            off += sizeof(*desc);
+            off = sizeof(*desc);
             if ( ctxt.cur < desc->length ||
-                 off > ctxt.cur - desc->length )
-                break;
-            if ( instance == desc->instance )
-            {
-                rv = 0;
-                if ( guest_handle_is_null(handle) )
-                    *bufsz = desc->length;
-                else if ( *bufsz < desc->length )
-                    rv = -ENOBUFS;
-                else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
-                    rv = -EFAULT;
-                else
-                    *bufsz = desc->length;
-                break;
-            }
+                 off > ctxt.cur - desc->length )
+                rv = -EFAULT;
+            else if ( rv = 0, guest_handle_is_null(handle) )
+                *bufsz = desc->length;
+            else if ( *bufsz < desc->length )
+                rv = -ENOBUFS;
+            else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+                rv = -EFAULT;
+            else
+                *bufsz = desc->length;
+            vcpu_unpause(d->vcpu[instance]);
         }
     }
+    else
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
+            {
+                printk(XENLOG_G_ERR "HVM%d save: failed to save type %"PRIu16" (%d)\n",
+                       d->domain_id, typecode, rv);
+            }
+            else if ( ctxt.cur >= sizeof(*desc) )
+            {
+                rv = -ENOENT;
+                desc = (void *)(ctxt.data + off);
+                /* Move past header */
+                off += sizeof(*desc);
+                if ( ctxt.cur < desc->length ||
+                     off > ctxt.cur - desc->length )
+                    break;
+                if ( instance == desc->instance )
+                {
+                    rv = 0;
+                    if ( guest_handle_is_null(handle) )
+                        *bufsz = desc->length;
+                    else if ( *bufsz < desc->length )
+                        rv = -ENOBUFS;
+                    else if ( copy_to_guest(handle, ctxt.data + off, desc->length) )
+                        rv = -EFAULT;
+                    else
+                        *bufsz = desc->length;
+                    break;
+                }
+                off += desc->length;
+            }
+        }
+        domain_unpause(d);
+    }
 
     xfree(ctxt.data);
     return rv;
@@ -193,7 +239,8 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
     struct hvm_save_header hdr;
     struct hvm_save_end end;
     hvm_save_handler handler;
-    unsigned int i;
+    unsigned int i;
+    int rc;
+    struct vcpu *v = NULL;
 
     if ( d->is_dying )
         return -EINVAL;
@@ -225,12 +272,19 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
         {
             printk(XENLOG_G_INFO "HVM%d save: %s\n",
                    d->domain_id, hvm_sr_handlers[i].name);
-            if ( handler(d, h) != 0 )
+            for_each_vcpu ( d, v )
             {
-                printk(XENLOG_G_ERR
-                       "HVM%d save: failed to save type %"PRIu16"\n",
-                       d->domain_id, i);
-                return -EFAULT;
+                rc = handler(v, h);
+                if ( rc == CONTINUE )
+                    continue;
+
+                if ( rc != 0 )
+                {
+                    printk(XENLOG_G_ERR
+                           "HVM%d save: failed to save type %"PRIu16"\n",
+                           d->domain_id, i);
+                    return -EFAULT;
+                }
             }
         }
     }
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..86d02cf 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -569,8 +569,9 @@ int vioapic_get_trigger_mode(const struct domain *d, unsigned int gsi)
     return vioapic->redirtbl[pin].fields.trig_mode;
 }
 
-static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
+static int ioapic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_vioapic *s;
 
     if ( !has_vioapic(d) )
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 86a43ee..7ec7a2b 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -990,8 +990,9 @@ out:
     return HVM_HCALL_completed;
 }
 
-static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_domain_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_viridian_domain_context ctxt = {
         .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
         .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
@@ -1027,9 +1028,9 @@ HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
                           viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
 
 
-static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
+static int viridian_save_vcpu_ctxt(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v = NULL;
+    struct domain *d = v->domain;
     struct hvm_viridian_vcpu_context ctxt = {
         .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
         .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1b9f00a..6337cdb 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1435,45 +1435,31 @@ static void lapic_rearm(struct vlapic *s)
     s->timer_last_update = s->pt.last_plt_gtime;
 }
 
-static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_hidden(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct domain *d = v->domain;
     struct vlapic *s;
-    int rc = 0;
 
     if ( !has_vlapic(d) )
         return 0;
 
-    for_each_vcpu ( d, v )
-    {
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw)) != 0 )
-            break;
-    }
-
-    return rc;
+    s = vcpu_vlapic(v);
+    return hvm_save_entry(LAPIC, v->vcpu_id, h, &s->hw);
 }
 
-static int lapic_save_regs(struct domain *d, hvm_domain_context_t *h)
+static int lapic_save_regs(struct vcpu *v, hvm_domain_context_t *h)
 {
-    struct vcpu *v;
+    struct domain *d = v->domain;
     struct vlapic *s;
-    int rc = 0;
 
     if ( !has_vlapic(d) )
         return 0;
 
-    for_each_vcpu ( d, v )
-    {
-        if ( hvm_funcs.sync_pir_to_irr )
-            hvm_funcs.sync_pir_to_irr(v);
-
-        s = vcpu_vlapic(v);
-        if ( (rc = hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs)) != 0 )
-            break;
-    }
+    if ( hvm_funcs.sync_pir_to_irr )
+        hvm_funcs.sync_pir_to_irr(v);
 
-    return rc;
+    s = vcpu_vlapic(v);
+    return hvm_save_entry(LAPIC_REGS, v->vcpu_id, h, s->regs);
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index e160bbd..bad5066 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -371,8 +371,9 @@ static int vpic_intercept_elcr_io(
     return X86EMUL_OKAY;
 }
 
-static int vpic_save(struct domain *d, hvm_domain_context_t *h)
+static int vpic_save(struct vcpu *v, hvm_domain_context_t *h)
 {
+    struct domain *d = v->domain;
     struct hvm_hw_vpic *s;
     int i;
 
diff --git a/xen/include/asm-x86/hvm/save.h b/xen/include/asm-x86/hvm/save.h
index f889e8f..fe642ab 100644
--- a/xen/include/asm-x86/hvm/save.h
+++ b/xen/include/asm-x86/hvm/save.h
@@ -95,7 +95,7 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
  * The save handler may save multiple instances of a type into the buffer;
  * the load handler will be called once for each instance found when
  * restoring.  Both return non-zero on error. */
-typedef int (*hvm_save_handler) (struct domain *d, 
+typedef int (*hvm_save_handler) (struct vcpu *v,
                                  hvm_domain_context_t *h);
 typedef int (*hvm_load_handler) (struct domain *d,
                                  hvm_domain_context_t *h);
-- 
2.7.4
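
For readers tracing the new control flow, here is a minimal sketch of
the single-instance path this patch aims at. The is_single_instance
derivation is an assumption inferred from the ctxt.size check above,
not code taken verbatim from the patch:

    /* Sketch only: one vcpu's record vs. the whole domain's records. */
    bool is_single_instance =
        hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
        instance < d->max_vcpus;

    if ( is_single_instance )
    {
        vcpu_pause(d->vcpu[instance]);              /* pause one vcpu */
        rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);
        /* ... a single record at ctxt.data; copy it out to the guest ... */
        vcpu_unpause(d->vcpu[instance]);
    }
    else
    {
        domain_pause(d);                            /* pause them all */
        /* ... save every instance, then scan for the requested one ... */
        domain_unpause(d);
    }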


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func
  2018-06-07 14:59 ` [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-06-08  8:27   ` Paul Durrant
  0 siblings, 0 replies; 30+ messages in thread
From: Paul Durrant @ 2018-06-08  8:27 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 07 June 2018 15:59
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v7 07/15] x86/hvm: Introduce
> viridian_save_vcpu_ctxt_one() func
> 
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> 

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
> Changes since V6:
> 	- Add memset to 0 for ctxt
> ---
>  xen/arch/x86/hvm/viridian.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 694eae6..bab606e 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1026,6 +1026,13 @@ static int viridian_load_domain_ctxt(struct domain
> *d, hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN,
> viridian_save_domain_ctxt,
>                            viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
> 
> +static void viridian_save_vcpu_ctxt_one(struct vcpu *v, struct
> hvm_viridian_vcpu_context *ctxt)
> +{
> +    memset(ctxt, 0, sizeof(*ctxt));
> +    ctxt->vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> +    ctxt->vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
> +}
> +
>  static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t
> *h)
>  {
>      struct vcpu *v;
> @@ -1034,10 +1041,9 @@ static int viridian_save_vcpu_ctxt(struct domain
> *d, hvm_domain_context_t *h)
>          return 0;
> 
>      for_each_vcpu( d, v ) {
> -        struct hvm_viridian_vcpu_context ctxt = {
> -            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> -            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> -        };
> +        struct hvm_viridian_vcpu_context ctxt;
> +
> +        viridian_save_vcpu_ctxt_one(v, &ctxt);
> 
>          if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
>              return 1;
> --
> 2.7.4
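
The memset added in this version matters because the context lives on
the caller's stack: once the helper is shared between callers, any
field it does not explicitly set would otherwise leak stack contents
into the save record. The resulting call pattern, taken from the hunk
above:

    struct hvm_viridian_vcpu_context ctxt;    /* uninitialised stack memory */

    viridian_save_vcpu_ctxt_one(v, &ctxt);    /* memset inside zeroes it first */

    if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
        return 1;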


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-07 14:59 ` [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func Alexandru Isaila
@ 2018-06-08  8:33   ` Paul Durrant
  2018-06-08  8:51     ` Alexandru Stefan ISAILA
  0 siblings, 1 reply; 30+ messages in thread
From: Paul Durrant @ 2018-06-08  8:33 UTC (permalink / raw)
  To: 'Alexandru Isaila', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> Sent: 07 June 2018 15:59
> To: xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>;
> jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> <aisaila@bitdefender.com>
> Subject: [PATCH v7 08/15] x86/cpu: Remove loop form
> vmce_save_vcpu_ctxt() func
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> ---
>  xen/arch/x86/cpu/mcheck/vmce.c | 27 +++++++--------------------
>  1 file changed, 7 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> b/xen/arch/x86/cpu/mcheck/vmce.c
> index 404f27e..ead1f73 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -349,30 +349,17 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>      return ret;
>  }
> 
> -static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct
> hvm_vmce_vcpu *ctxt)
> -{
> -    ctxt->caps = v->arch.vmce.mcg_cap;
> -    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> -    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> -    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> -}
> -
>  static int vmce_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t
> *h)
>  {
> -    struct vcpu *v;
> -    int err = 0;
> -
> -    for_each_vcpu ( d, v )
> -    {
> -        struct hvm_vmce_vcpu ctxt;
> +    struct hvm_vmce_vcpu ctxt;
> +    struct vcpu *v = NULL;
> 
> -        vmce_save_vcpu_ctxt_one(v, &ctxt);
> -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> -        if ( err )
> -            break;
> -    }
> +    ctxt.caps = v->arch.vmce.mcg_cap;

There's a typo in the commit title (s/form/from), but I don't understand what you're doing here. You set v to NULL above and dereference it below. AFAICT, until patch #15 is applied context saving will be completely broken.

  Paul

> +    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> +    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> +    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> 
> -    return err;
> +    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
>  }
> 
>  static int vmce_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t
> *h)
> --
> 2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-08  8:33   ` Paul Durrant
@ 2018-06-08  8:51     ` Alexandru Stefan ISAILA
  2018-06-08 12:46       ` Paul Durrant
  0 siblings, 1 reply; 30+ messages in thread
From: Alexandru Stefan ISAILA @ 2018-06-08  8:51 UTC (permalink / raw)
  To: Paul.Durrant, xen-devel; +Cc: Ian.Jackson, wei.liu2, jbeulich, Andrew.Cooper3

On Fri, 2018-06-08 at 08:33 +0000, Paul Durrant wrote:
> > 
> > -----Original Message-----
> > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > Sent: 07 June 2018 15:59
> > To: xen-devel@lists.xen.org
> > Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.
> > com>;
> > jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>; Paul
> > Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> > <aisaila@bitdefender.com>
> > Subject: [PATCH v7 08/15] x86/cpu: Remove loop form
> > vmce_save_vcpu_ctxt() func
> > 
> > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > ---
> >  xen/arch/x86/cpu/mcheck/vmce.c | 27 +++++++--------------------
> >  1 file changed, 7 insertions(+), 20 deletions(-)
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> > b/xen/arch/x86/cpu/mcheck/vmce.c
> > index 404f27e..ead1f73 100644
> > --- a/xen/arch/x86/cpu/mcheck/vmce.c
> > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> > @@ -349,30 +349,17 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
> >      return ret;
> >  }
> > 
> > -static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct
> > hvm_vmce_vcpu *ctxt)
> > -{
> > -    ctxt->caps = v->arch.vmce.mcg_cap;
> > -    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> > -    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> > -    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> > -}
> > -
> >  static int vmce_save_vcpu_ctxt(struct domain *d,
> > hvm_domain_context_t
> > *h)
> >  {
> > -    struct vcpu *v;
> > -    int err = 0;
> > -
> > -    for_each_vcpu ( d, v )
> > -    {
> > -        struct hvm_vmce_vcpu ctxt;
> > +    struct hvm_vmce_vcpu ctxt;
> > +    struct vcpu *v = NULL;
> > 
> > -        vmce_save_vcpu_ctxt_one(v, &ctxt);
> > -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> > -        if ( err )
> > -            break;
> > -    }
> > +    ctxt.caps = v->arch.vmce.mcg_cap;
> There's a typo in the commit title (s/form/from), but I don't
> understand what you're doing here. You set v to NULL above and
> dereference it below. AFAICT, until patch #15 is applied context
> saving will be completely broken.
Yes, this is true, but I couldn't find a better way to split the last
patch further.

Alex
> > 
> > +    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> > +    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> > +    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> > 
> > -    return err;
> > +    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> >  }
> > 
> >  static int vmce_load_vcpu_ctxt(struct domain *d,
> > hvm_domain_context_t
> > *h)
> > --
> > 2.7.4
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-08  8:51     ` Alexandru Stefan ISAILA
@ 2018-06-08 12:46       ` Paul Durrant
  2018-06-08 14:42         ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Paul Durrant @ 2018-06-08 12:46 UTC (permalink / raw)
  To: 'Alexandru Stefan ISAILA', xen-devel
  Cc: Ian Jackson, Wei Liu, jbeulich, Andrew Cooper

> -----Original Message-----
> From: Alexandru Stefan ISAILA [mailto:aisaila@bitdefender.com]
> Sent: 08 June 2018 09:51
> To: Paul Durrant <Paul.Durrant@citrix.com>; xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; jbeulich@suse.com; Andrew
> Cooper <Andrew.Cooper3@citrix.com>; Wei Liu <wei.liu2@citrix.com>
> Subject: Re: [PATCH v7 08/15] x86/cpu: Remove loop form
> vmce_save_vcpu_ctxt() func
> 
> On Fri, 2018-06-08 at 08:33 +0000, Paul Durrant wrote:
> > >
> > > -----Original Message-----
> > > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
> > > Sent: 07 June 2018 15:59
> > > To: xen-devel@lists.xen.org
> > > Cc: Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.
> > > com>;
> > > jbeulich@suse.com; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> Paul
> > > Durrant <Paul.Durrant@citrix.com>; Alexandru Isaila
> > > <aisaila@bitdefender.com>
> > > Subject: [PATCH v7 08/15] x86/cpu: Remove loop form
> > > vmce_save_vcpu_ctxt() func
> > >
> > > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
> > > ---
> > >  xen/arch/x86/cpu/mcheck/vmce.c | 27 +++++++--------------------
> > >  1 file changed, 7 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> > > b/xen/arch/x86/cpu/mcheck/vmce.c
> > > index 404f27e..ead1f73 100644
> > > --- a/xen/arch/x86/cpu/mcheck/vmce.c
> > > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> > > @@ -349,30 +349,17 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
> > >      return ret;
> > >  }
> > >
> > > -static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct
> > > hvm_vmce_vcpu *ctxt)
> > > -{
> > > -    ctxt->caps = v->arch.vmce.mcg_cap;
> > > -    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> > > -    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> > > -    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> > > -}
> > > -
> > >  static int vmce_save_vcpu_ctxt(struct domain *d,
> > > hvm_domain_context_t
> > > *h)
> > >  {
> > > -    struct vcpu *v;
> > > -    int err = 0;
> > > -
> > > -    for_each_vcpu ( d, v )
> > > -    {
> > > -        struct hvm_vmce_vcpu ctxt;
> > > +    struct hvm_vmce_vcpu ctxt;
> > > +    struct vcpu *v = NULL;
> > >
> > > -        vmce_save_vcpu_ctxt_one(v, &ctxt);
> > > -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> > > -        if ( err )
> > > -            break;
> > > -    }
> > > +    ctxt.caps = v->arch.vmce.mcg_cap;
> > There's a typo in the commit title (s/form/from), but I don't
> > understand what you're doing here. You set v to NULL above and
> > dereference it below. AFAICT, until patch #15 is applied context
> > saving will be completely broken.
>> Yes, this is true, but I couldn't find a better way to split the last
>> patch further.

Can't you do it (something like) this way?

- Each of patches #1 - #7 register their save_one handler via an extra arg to HVM_REGISTER_SAVE_RESTORE (and hence extra field in hvm_sr_handlers)
- Move (current) patch #15 to patch #8 but have it call the save_one handlers
- Then have 7 patches that remove the now redundant save handlers, renaming XXX_save_one to XXX_save and passing NULL as the now useless argument to HVM_REGISTER_SAVE_RESTORE
- Then have a final patch deleting the useless arg from HVM_REGISTER_SAVE_RESTORE, cleaning up the callers and also renaming the field in hvm_sr_handlers from save_one to save.

That should keep the series bisectable AFAICT.

  Paul

> 
> Alex
> > >
> > > +    ctxt.mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
> > > +    ctxt.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
> > > +    ctxt.mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
> > >
> > > -    return err;
> > > +    return hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
> > >  }
> > >
> > >  static int vmce_load_vcpu_ctxt(struct domain *d,
> > > hvm_domain_context_t
> > > *h)
> > > --
> > > 2.7.4
> >
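
As a rough illustration of the first bullet in Paul's outline, the
registration change could look like the sketch below; the save_one
field name and the macro parameter order are assumptions, not code
from the series:

    /* Sketch: hvm_sr_handlers gains a per-vcpu save_one callback. */
    typedef int (*hvm_save_one_handler)(struct vcpu *v,
                                        hvm_domain_context_t *h);

    static struct {
        hvm_save_handler save;
        hvm_save_one_handler save_one;    /* new field */
        hvm_load_handler load;
        const char *name;
        size_t size;
        int kind;
    } hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1];

    /* HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)
     * would then additionally fill in .save_one = _save_one. */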
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-08 12:46       ` Paul Durrant
@ 2018-06-08 14:42         ` Jan Beulich
  2018-06-20  8:56           ` Alexandru Stefan ISAILA
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:42 UTC (permalink / raw)
  To: aisaila, Paul Durrant; +Cc: Andrew Cooper, xen-devel, Wei Liu, Ian Jackson

>>> On 08.06.18 at 14:46, <Paul.Durrant@citrix.com> wrote:
>> From: Alexandru Stefan ISAILA [mailto:aisaila@bitdefender.com]
>> Sent: 08 June 2018 09:51
>> On Fri, 2018-06-08 at 08:33 +0000, Paul Durrant wrote:
>> > > From: Alexandru Isaila [mailto:aisaila@bitdefender.com]
>> > > Sent: 07 June 2018 15:59
>> > > Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>
>> > > ---
>> > >  xen/arch/x86/cpu/mcheck/vmce.c | 27 +++++++--------------------
>> > >  1 file changed, 7 insertions(+), 20 deletions(-)
>> > >
>> > > diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
>> > > b/xen/arch/x86/cpu/mcheck/vmce.c
>> > > index 404f27e..ead1f73 100644
>> > > --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> > > +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> > > @@ -349,30 +349,17 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>> > >      return ret;
>> > >  }
>> > >
>> > > -static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct
>> > > hvm_vmce_vcpu *ctxt)
>> > > -{
>> > > -    ctxt->caps = v->arch.vmce.mcg_cap;
>> > > -    ctxt->mci_ctl2_bank0 = v->arch.vmce.bank[0].mci_ctl2;
>> > > -    ctxt->mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
>> > > -    ctxt->mcg_ext_ctl = v->arch.vmce.mcg_ext_ctl;
>> > > -}
>> > > -
>> > >  static int vmce_save_vcpu_ctxt(struct domain *d,
>> > > hvm_domain_context_t
>> > > *h)
>> > >  {
>> > > -    struct vcpu *v;
>> > > -    int err = 0;
>> > > -
>> > > -    for_each_vcpu ( d, v )
>> > > -    {
>> > > -        struct hvm_vmce_vcpu ctxt;
>> > > +    struct hvm_vmce_vcpu ctxt;
>> > > +    struct vcpu *v = NULL;
>> > >
>> > > -        vmce_save_vcpu_ctxt_one(v, &ctxt);
>> > > -        err = hvm_save_entry(VMCE_VCPU, v->vcpu_id, h, &ctxt);
>> > > -        if ( err )
>> > > -            break;
>> > > -    }
>> > > +    ctxt.caps = v->arch.vmce.mcg_cap;
>> > There's a typo in the commit title (s/form/from), but I don't
>> > understand what you're doing here. You set v to NULL above and
>> > dereference it below. AFAICT, until patch #15 is applied context
>> > saving will be completely broken.
>> Yes, this is true, but it could't find a better way to split the last
>> patch further.
> 
> Can't you do it (something like) this way?
> 
> - Each of patches #1 - #7 register their save_one handler via an extra arg 
> to HVM_REGISTER_SAVE_RESTORE (and hence extra field in hvm_sr_handlers)

I think either there should be a 1st patch introducing the new field and macro
arg, or patches 1...7 remain the way they are and patch 8 introduces and
uses that field without otherwise touching the handlers. In any event all later
patches then shift down by one in numbering; apart from the numbering I
mostly agree with ...

> - Move (current) patch #15 to patch #8 but have it call the save_one 
> handlers
> - Then have 7 patches that remove the now redundant save handlers, renaming 
> XXX_save_one to XXX_save and passing NULL as the now useless argument to 
> HVM_REGISTER_SAVE_RESTORE
> - Then have a final patch deleting the useless arg from 
> HVM_REGISTER_SAVE_RESTORE, cleaning up the callers and also renaming the 
> field in hvm_sr_handlers from save_one to save.

... all of this. However, I have to admit I'm not certain yet whether the
extra argument can indeed go away again in the end: There are save
records which aren't per-vCPU, and I'm not convinced we want to alter
their handling.

Jan
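
The distinction Jan raises shows up in hvm_save's dispatch once both
kinds sit side by side; roughly (a sketch, assuming the save_one field
from Paul's outline survives):

    if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
    {
        /* One record per vcpu: iterate in the caller. */
        for_each_vcpu ( d, v )
            if ( (rc = hvm_sr_handlers[i].save_one(v, h)) != 0 )
                break;
    }
    else
        /* HVMSR_PER_DOM: a single record for the whole domain. */
        rc = hvm_sr_handlers[i].save(d, h);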



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func
  2018-06-07 14:59 ` [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
@ 2018-06-08 14:45   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:45 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.06.18 at 16:59, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
also for patch 2, both in case they remain the way they are (according
to one of the two proposals I've just made in reply to Paul's outline).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
  2018-06-07 14:59 ` [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
@ 2018-06-08 14:47   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:47 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.06.18 at 16:59, <aisaila@bitdefender.com> wrote:
> @@ -799,100 +897,7 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>              continue;
>  
>          memset(&ctxt, 0, sizeof(ctxt));
> -

Why do you not move this memset() as well? I'm pretty convinced the
eventual other mode of use will need it too (or otherwise the other caller
would have to duplicate it).

Jan
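
In other words, the zeroing would migrate into the one-instance helper
so that every future caller gets it for free; roughly (a sketch, body
abbreviated):

    static void hvm_save_cpu_ctxt_one(struct vcpu *v, struct hvm_hw_cpu *ctxt)
    {
        struct segment_register seg;

        memset(ctxt, 0, sizeof(*ctxt));    /* zero here, not in the caller */

        /* ... fill in *ctxt from v, using seg for the segment registers ... */
    }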



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one
  2018-06-07 14:59 ` [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
@ 2018-06-08 14:49   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:49 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.06.18 at 16:59, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
(with the same constraint as before)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func
  2018-06-07 14:59 ` [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
@ 2018-06-08 14:50   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:50 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.06.18 at 16:59, <aisaila@bitdefender.com> wrote:
> @@ -1370,32 +1402,8 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>          ctxt = (struct hvm_msr *)&h->data[h->cur];
>          ctxt->count = 0;

Along the lines of what I've said for patch 3: Why is this line not being
moved?

Jan
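
The analogous move for the MSR record is to reset the counter inside
the helper; a sketch, with the helper signature assumed from the patch
title:

    static void hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr *ctxt)
    {
        unsigned int i;

        ctxt->count = 0;    /* start each record with a clean count */

        for ( i = 0; i < msr_count_max; i++ )
        {
            /* ... read msrs_to_send[i]; append hits at index ctxt->count++ ... */
        }
    }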



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func
  2018-06-07 14:59 ` [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
@ 2018-06-08 14:57   ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-08 14:57 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, Paul Durrant, Wei Liu, Ian Jackson, xen-devel

>>> On 07.06.18 at 16:59, <aisaila@bitdefender.com> wrote:
> This is used to save data from a single instance.
> 
> Signed-off-by: Alexandru Isaila <aisaila@bitdefender.com>

I think it would help if you based this on top of Roger's series[1] right away,
as that one is pretty likely to go in relatively soon after branching. If you
choose to do so, please don't forget to mention the dependency as long
as the other series hasn't gone in.

> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -666,36 +666,42 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, 
> uint64_t gfn_start,
>      return 0;
>  }
>  
> -static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
> +static void hvm_save_mtrr_msr_one(struct vcpu *v, struct hvm_hw_mtrr *hw_mtrr)
>  {
> +    struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;

const (and in case I've overlooked any such helper pointer in other
patches, please apply this comment there as well)

Jan

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg00487.html
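
Applied to the hunk quoted above, the helper pointer would simply
become:

    static void hvm_save_mtrr_msr_one(struct vcpu *v, struct hvm_hw_mtrr *hw_mtrr)
    {
        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;

        /* ... read-only accesses through mtrr_state ... */
    }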



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-08 14:42         ` Jan Beulich
@ 2018-06-20  8:56           ` Alexandru Stefan ISAILA
  2018-06-21  7:39             ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Alexandru Stefan ISAILA @ 2018-06-20  8:56 UTC (permalink / raw)
  To: JBeulich, paul.durrant; +Cc: Ian.Jackson, andrew.cooper3, wei.liu2, xen-devel

On Fri, 2018-06-08 at 08:42 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 08.06.18 at 14:46, <Paul.Durrant@citrix.com> wrote:
> > > From: Alexandru Stefan ISAILA [mailto:aisaila@bitdefender.com]
> > > Sent: 08 June 2018 09:51
> > On Fri, 2018-06-08 at 08:33 +0000, Paul Durrant wrote:
> > > >
> > > > There's a typo in the commit title (s/form/from), but I don't
> > > > understand what you're doing here. You set v to NULL above and
> > > > dereference it below. AFAICT, until patch #15 is applied
> > > > context
> > > > saving will be completely broken.
> > > Yes, this is true, but I couldn't find a better way to split the
> > > last
> > > patch further.
> > Can't you do it (something like) this way?
> >
> > - Each of patches #1 - #7 register their save_one handler via an
> > extra arg
> > to HVM_REGISTER_SAVE_RESTORE (and hence extra field in
> > hvm_sr_handlers)
> I think either there should be a 1st patch introducing the new field
> and macro
> arg, or patches 1...7 remain the way they are and patch 8 introduces
> and
> uses that field without otherwise touching the handlers. In any event
> all later
> patches then shift down by one in numbering; apart from the numbering
> I
> mostly agree with ...
>
> >
> > - Move (current) patch #15 to patch #8 but have it call the
> > save_one
> > handlers
> > - Then have 7 patches that remove the now redundant save handlers,
> > renaming
> > XXX_save_one to XXX_save and passing NULL as the now useless
> > argument to
> > HVM_REGISTER_SAVE_RESTORE
> > - Then have a final patch deleting the useless arg from
> > HVM_REGISTER_SAVE_RESTORE, cleaning up the callers and also
> > renaming the
> > field in hvm_sr_handlers from save_one to save.
> ... all of this. However, I have to admit I'm not certain yet whether
> the
> extra argument can indeed go away again in the end: There are save
> records which aren't per-vCPU, and I'm not convinced we want to alter
> their handling.
>
So the final plan for the series is like this:
- Base everything on Roger's series
- Keep patches 1-7
- Have patch 8 add an extra arg to HVM_REGISTER_SAVE_RESTORE
and hvm_sr_handlers
        - Have patch 9 like the current patch 15 and have it call the
save_one handlers
        - Have the next patches remove the redundant save handlers and
rename the save_one variants
- The final patch should remove the extra arg. This one can be
kept or not.

Is this how I should go? Any thoughts are appreciated.

Thanks,
Alex
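
For the middle steps of this plan, a transitional registration could
read as below; the argument order of the extended macro is an
assumption:

    /* The multi-vcpu save handler is now redundant: NULL fills its
     * slot, and the renamed _one variant is registered as save_one.
     * The final patch would then drop the NULL argument and rename
     * the save_one field back to save. */
    HVM_REGISTER_SAVE_RESTORE(VMCE_VCPU, NULL /* old whole-domain save */,
                              vmce_save_vcpu_ctxt, vmce_load_vcpu_ctxt,
                              1, HVMSR_PER_VCPU);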


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-20  8:56           ` Alexandru Stefan ISAILA
@ 2018-06-21  7:39             ` Jan Beulich
  2018-06-21  7:47               ` Alexandru Stefan ISAILA
  0 siblings, 1 reply; 30+ messages in thread
From: Jan Beulich @ 2018-06-21  7:39 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, xen-devel, Paul Durrant, Wei Liu, Ian.Jackson

>>> On 20.06.18 at 10:56, <aisaila@bitdefender.com> wrote:
> So the final plan for the series is like this:
> - Base everything on Roger's series
> - Keep patches 1-7
> - Have patch 8 add an extra arg to HVM_REGISTER_SAVE_RESTORE
> and hvm_sr_handlers
> - Have patch 9 like the current patch 15 and have it call the
> save_one handlers
> - Have the next patches remove the redundant save handlers and
> rename the save_one variants
> - The final patch should remove the extra arg. This one can be
> kept or not.

Sounds reasonable, albeit I have to admit I'm not sure what the last
sentence is supposed to mean. Either it is possible to eliminate the
extra arg, then you should do so, or it is not possible (in which case
you can't possibly add a respective patch to the series). 

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-21  7:39             ` Jan Beulich
@ 2018-06-21  7:47               ` Alexandru Stefan ISAILA
  2018-06-21  8:15                 ` Jan Beulich
  0 siblings, 1 reply; 30+ messages in thread
From: Alexandru Stefan ISAILA @ 2018-06-21  7:47 UTC (permalink / raw)
  To: JBeulich; +Cc: Ian.Jackson, andrew.cooper3, wei.liu2, paul.durrant, xen-devel

On Thu, 2018-06-21 at 01:39 -0600, Jan Beulich wrote:
> >
> > >
> > > >
> > > > On 20.06.18 at 10:56, <aisaila@bitdefender.com> wrote:
> > So the final plan for the series is like this:
> > - Base everything on Roger's series
> > - Keep patches 1-7
> > - Have patch 8 add an extra arg to HVM_REGISTER_SAVE_RESTORE
> > and hvm_sr_handlers
> > - Have patch 9 like the current patch 15 and have it call the
> > save_one handlers
> > - Have the next patches remove the redundant save handlers and
> > rename the save_one variants
> > - The final patch should remove the extra arg. This one can be
> > kept or not.
> Sounds reasonable, albeit I have to admit I'm not sure what the last
> sentence is supposed to mean. Either it is possible to eliminate the
> extra arg, then you should do so, or it is not possible (in which
> case
> you can't possibly add a respective patch to the series).

I wanted to say that the last patch will be independent, so it can go
upstream or not, because you said that you are not sure if the extra
arg should be removed or not.

Alex

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func
  2018-06-21  7:47               ` Alexandru Stefan ISAILA
@ 2018-06-21  8:15                 ` Jan Beulich
  0 siblings, 0 replies; 30+ messages in thread
From: Jan Beulich @ 2018-06-21  8:15 UTC (permalink / raw)
  To: aisaila; +Cc: Andrew Cooper, xen-devel, Paul Durrant, Wei Liu, Ian.Jackson

>>> On 21.06.18 at 09:47, <aisaila@bitdefender.com> wrote:
> On Jo, 2018-06-21 at 01:39 -0600, Jan Beulich wrote:
>> >
>> > >
>> > > >
>> > > > On 20.06.18 at 10:56, <aisaila@bitdefender.com> wrote:
>> > So the final plan for the series is like this:
>> > - Base everything on Roger's series
>> > - Keep patches 1-7
>> > - Have patch 8 add an extra arg to HVM_REGISTER_SAVE_RESTORE
>> > and hvm_sr_handlers
>> > - Have patch 9 like the patch 15 form now and have it call the
>> > save_one handlers
>> > - Have the next patches remove the redundant save handlers and
>> > rename the save one
>> > - The final patch should remove the extra arg. This one can be
>> > kept or not.
>> Sounds reasonable, albeit I have to admit I'm not sure what the last
>> sentence is supposed to mean. Either it is possible to eliminate the
>> extra arg, then you should do so, or it is not possible (in which
>> case
>> you can't possibly add a respective patch to the series).
> 
> I wanted to say that the last patch will be independent and it can go
> upstream or not because you said that you are not sure if the extra
> args should be removed or not.

I did not say "should" but "could" (in other words of course).

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2018-06-21  8:15 UTC | newest]

Thread overview: 30+ messages
2018-06-07 14:59 [PATCH v7 00/15] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 01/15] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-06-08 14:45   ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 02/15] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 03/15] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-06-08 14:47   ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 04/15] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-06-08 14:49   ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 05/15] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-06-08 14:50   ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 06/15] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-06-08 14:57   ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 07/15] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-06-08  8:27   ` Paul Durrant
2018-06-07 14:59 ` [PATCH v7 08/15] x86/cpu: Remove loop form vmce_save_vcpu_ctxt() func Alexandru Isaila
2018-06-08  8:33   ` Paul Durrant
2018-06-08  8:51     ` Alexandru Stefan ISAILA
2018-06-08 12:46       ` Paul Durrant
2018-06-08 14:42         ` Jan Beulich
2018-06-20  8:56           ` Alexandru Stefan ISAILA
2018-06-21  7:39             ` Jan Beulich
2018-06-21  7:47               ` Alexandru Stefan ISAILA
2018-06-21  8:15                 ` Jan Beulich
2018-06-07 14:59 ` [PATCH v7 09/15] x86/hvm: Remove loop from hvm_save_tsc_adjust() func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 10/15] x86/hvm: Remove loop from hvm_save_cpu_ctxt func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 11/15] x86/hvm: Remove loop from hvm_save_cpu_xsave_states Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 12/15] x86/hvm: Remove loop from hvm_save_cpu_msrs func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 13/15] x86/hvm: Remove loop from hvm_save_mtrr_msr func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 14/15] x86/hvm: Remove loop from viridian_save_vcpu_ctxt() func Alexandru Isaila
2018-06-07 14:59 ` [PATCH v7 15/15] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
