xen-devel.lists.xenproject.org archive mirror
* [PATCH v2 0/4] SVM: misc cleanup
@ 2017-06-07 13:58 Jan Beulich
  2017-06-07 14:04 ` [PATCH v2 1/4] SVM: use VMCB accessors Jan Beulich
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Jan Beulich @ 2017-06-07 13:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit

1: use VMCB accessors
2: infer type in VMCB_ACCESSORS()
3: clean up svm_vmcb_dump()
4: clean up svm_vmcb_isvalid()

Signed-off-by: Jan Beulich <jbeulich@suse.com>



* [PATCH v2 1/4] SVM: use VMCB accessors
  2017-06-07 13:58 [PATCH v2 0/4] SVM: misc cleanup Jan Beulich
@ 2017-06-07 14:04 ` Jan Beulich
  2017-06-07 17:04   ` Boris Ostrovsky
  2017-06-07 14:04 ` [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS() Jan Beulich
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-06-07 14:04 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit


This is particularly relevant for the SET form, to ensure proper clean
bits tracking (albeit in the case here it's benign as CPL and other
segment register attributes share a clean bit).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- unstable.orig/xen/arch/x86/hvm/svm/svm.c	2017-05-24 11:52:44.976284414 +0200
+++ unstable/xen/arch/x86/hvm/svm/svm.c	2017-04-18 11:25:44.000000000 +0200
@@ -653,7 +653,7 @@ static void svm_get_segment_register(str
         break;
     case x86_seg_ss:
         *reg = vmcb->ss;
-        reg->attr.fields.dpl = vmcb->_cpl;
+        reg->attr.fields.dpl = vmcb_get_cpl(vmcb);
         break;
     case x86_seg_tr:
         svm_sync_vmcb(v);
@@ -726,7 +726,7 @@ static void svm_set_segment_register(str
         break;
     case x86_seg_ss:
         vmcb->ss = *reg;
-        vmcb->_cpl = vmcb->ss.attr.fields.dpl;
+        vmcb_set_cpl(vmcb, reg->attr.fields.dpl);
         break;
     case x86_seg_tr:
         vmcb->tr = *reg;
@@ -1442,7 +1442,7 @@ static void svm_inject_event(const struc
      * If injecting an event outside of 64bit mode, zero the upper bits of the
      * %eip and nextrip after the adjustments above.
      */
-    if ( !((vmcb->_efer & EFER_LMA) && vmcb->cs.attr.fields.l) )
+    if ( !((vmcb_get_efer(vmcb) & EFER_LMA) && vmcb->cs.attr.fields.l) )
     {
         regs->rip = regs->eip;
         vmcb->nextrip = (uint32_t)vmcb->nextrip;




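As a side note on the clean-bits mechanism the description refers to: a plain
field write leaves the corresponding clean bit set, so the CPU may treat the
VMCB field as unchanged on the next VMRUN, whereas the SET accessor clears the
bit. A minimal, self-contained model of that behaviour (not Xen code; all
names below are invented for illustration):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct toy_vmcb {
    uint8_t cpl;        /* stands in for vmcb->_cpl */
    bool seg_clean;     /* stands in for cleanbits.fields.seg */
};

/* What a vmcb_set_cpl()-style accessor does: write the field and mark it
 * dirty so it gets re-read. */
static void toy_set_cpl(struct toy_vmcb *vmcb, uint8_t cpl)
{
    vmcb->cpl = cpl;
    vmcb->seg_clean = false;
}

int main(void)
{
    struct toy_vmcb vmcb = { .cpl = 0, .seg_clean = true };

    vmcb.cpl = 3;             /* direct write: clean marker still set */
    printf("direct write: cpl=%u seg_clean=%d\n", vmcb.cpl, vmcb.seg_clean);

    toy_set_cpl(&vmcb, 3);    /* accessor: clean marker cleared */
    printf("via accessor: cpl=%u seg_clean=%d\n", vmcb.cpl, vmcb.seg_clean);

    return 0;
}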

* [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS()
  2017-06-07 13:58 [PATCH v2 0/4] SVM: misc cleanup Jan Beulich
  2017-06-07 14:04 ` [PATCH v2 1/4] SVM: use VMCB accessors Jan Beulich
@ 2017-06-07 14:04 ` Jan Beulich
  2017-06-07 17:04   ` Boris Ostrovsky
  2017-06-07 14:07 ` [PATCH v2 3/4] SVM: clean up svm_vmcb_dump() Jan Beulich
  2017-06-07 14:07 ` [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid() Jan Beulich
  3 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-06-07 14:04 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit


Prevent accidental mistakes by not requiring explicit types to be
specified in the macro invocations.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Drop commented out accessors instead of adjusting them.

--- unstable.orig/xen/include/asm-x86/hvm/svm/vmcb.h	2017-06-01 14:30:23.000000000 +0200
+++ unstable/xen/include/asm-x86/hvm/svm/vmcb.h	2017-06-01 14:31:29.000000000 +0200
@@ -544,51 +544,47 @@ void svm_intercept_msr(struct vcpu *v, u
  * VMCB accessor functions.
  */
 
-#define VMCB_ACCESSORS(_type, _name, _cleanbit)                             \
-static inline void vmcb_set_##_name(struct vmcb_struct *vmcb, _type value)  \
-{                                                                           \
-    vmcb->_##_name = value;                                                 \
-    vmcb->cleanbits.fields._cleanbit = 0;                                   \
-}                                                                           \
-static inline _type vmcb_get_##_name(const struct vmcb_struct *vmcb)        \
-{                                                                           \
-    return vmcb->_##_name;                                                  \
+#define VMCB_ACCESSORS(name, cleanbit)            \
+static inline void                                \
+vmcb_set_ ## name(struct vmcb_struct *vmcb,       \
+                  typeof(vmcb->_ ## name) value)  \
+{                                                 \
+    vmcb->_ ## name = value;                      \
+    vmcb->cleanbits.fields.cleanbit = 0;          \
+}                                                 \
+static inline typeof(alloc_vmcb()->_ ## name)     \
+vmcb_get_ ## name(const struct vmcb_struct *vmcb) \
+{                                                 \
+    return vmcb->_ ## name;                       \
 }
 
-VMCB_ACCESSORS(u32, cr_intercepts, intercepts)
-VMCB_ACCESSORS(u32, dr_intercepts, intercepts)
-VMCB_ACCESSORS(u32, exception_intercepts, intercepts)
-VMCB_ACCESSORS(u32, general1_intercepts, intercepts)
-VMCB_ACCESSORS(u32, general2_intercepts, intercepts)
-VMCB_ACCESSORS(u16, pause_filter_count, intercepts)
-VMCB_ACCESSORS(u64, tsc_offset, intercepts)
-VMCB_ACCESSORS(u64, iopm_base_pa, iopm)
-VMCB_ACCESSORS(u64, msrpm_base_pa, iopm)
-VMCB_ACCESSORS(u32, guest_asid, asid)
-VMCB_ACCESSORS(vintr_t, vintr, tpr)
-VMCB_ACCESSORS(u64, np_enable, np)
-VMCB_ACCESSORS(u64, h_cr3, np)
-VMCB_ACCESSORS(u64, g_pat, np)
-VMCB_ACCESSORS(u64, cr0, cr)
-VMCB_ACCESSORS(u64, cr3, cr)
-VMCB_ACCESSORS(u64, cr4, cr)
-VMCB_ACCESSORS(u64, efer, cr)
-VMCB_ACCESSORS(u64, dr6, dr)
-VMCB_ACCESSORS(u64, dr7, dr)
-/* Updates are all via hvm_set_segment_register(). */
-/* VMCB_ACCESSORS(svm_segment_register_t, gdtr, dt) */
-/* VMCB_ACCESSORS(svm_segment_register_t, idtr, dt) */
-/* VMCB_ACCESSORS(svm_segment_register_t, cs, seg) */
-/* VMCB_ACCESSORS(svm_segment_register_t, ds, seg) */
-/* VMCB_ACCESSORS(svm_segment_register_t, es, seg) */
-/* VMCB_ACCESSORS(svm_segment_register_t, ss, seg) */
-VMCB_ACCESSORS(u8, cpl, seg)
-VMCB_ACCESSORS(u64, cr2, cr2)
-VMCB_ACCESSORS(u64, debugctlmsr, lbr)
-VMCB_ACCESSORS(u64, lastbranchfromip, lbr)
-VMCB_ACCESSORS(u64, lastbranchtoip, lbr)
-VMCB_ACCESSORS(u64, lastintfromip, lbr)
-VMCB_ACCESSORS(u64, lastinttoip, lbr)
+VMCB_ACCESSORS(cr_intercepts, intercepts)
+VMCB_ACCESSORS(dr_intercepts, intercepts)
+VMCB_ACCESSORS(exception_intercepts, intercepts)
+VMCB_ACCESSORS(general1_intercepts, intercepts)
+VMCB_ACCESSORS(general2_intercepts, intercepts)
+VMCB_ACCESSORS(pause_filter_count, intercepts)
+VMCB_ACCESSORS(tsc_offset, intercepts)
+VMCB_ACCESSORS(iopm_base_pa, iopm)
+VMCB_ACCESSORS(msrpm_base_pa, iopm)
+VMCB_ACCESSORS(guest_asid, asid)
+VMCB_ACCESSORS(vintr, tpr)
+VMCB_ACCESSORS(np_enable, np)
+VMCB_ACCESSORS(h_cr3, np)
+VMCB_ACCESSORS(g_pat, np)
+VMCB_ACCESSORS(cr0, cr)
+VMCB_ACCESSORS(cr3, cr)
+VMCB_ACCESSORS(cr4, cr)
+VMCB_ACCESSORS(efer, cr)
+VMCB_ACCESSORS(dr6, dr)
+VMCB_ACCESSORS(dr7, dr)
+VMCB_ACCESSORS(cpl, seg)
+VMCB_ACCESSORS(cr2, cr2)
+VMCB_ACCESSORS(debugctlmsr, lbr)
+VMCB_ACCESSORS(lastbranchfromip, lbr)
+VMCB_ACCESSORS(lastbranchtoip, lbr)
+VMCB_ACCESSORS(lastintfromip, lbr)
+VMCB_ACCESSORS(lastinttoip, lbr)
 
 #undef VMCB_ACCESSORS
 



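For readers unfamiliar with the construct, the type inference relies on the
GCC/Clang typeof extension. A stand-alone sketch of the same idea, with
invented names (the real macro above additionally clears the relevant clean
bit):

#include <inttypes.h>
#include <stdio.h>

struct demo {
    uint32_t _asid;
    uint64_t _cr3;
};

/*
 * The setter's value type is inferred from the struct member.  The getter's
 * return type cannot refer to its own parameter (which is not yet in scope
 * at that point), so the type is derived from another expression of the
 * same shape -- presumably why the patch uses alloc_vmcb() there.
 */
#define DEMO_ACCESSORS(name)                                  \
static inline void demo_set_##name(struct demo *d,           \
                                   typeof(d->_##name) value) \
{                                                             \
    d->_##name = value;                                       \
}                                                             \
static inline typeof(((struct demo *)0)->_##name)             \
demo_get_##name(const struct demo *d)                         \
{                                                             \
    return d->_##name;                                        \
}

DEMO_ACCESSORS(asid)
DEMO_ACCESSORS(cr3)

int main(void)
{
    struct demo d = { 0 };

    demo_set_asid(&d, 7);        /* 'value' is uint32_t here */
    demo_set_cr3(&d, 0x1000);    /* 'value' is uint64_t here */
    printf("asid=%"PRIu32" cr3=%#"PRIx64"\n",
           demo_get_asid(&d), demo_get_cr3(&d));
    return 0;
}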

* [PATCH v2 3/4] SVM: clean up svm_vmcb_dump()
  2017-06-07 13:58 [PATCH v2 0/4] SVM: misc cleanup Jan Beulich
  2017-06-07 14:04 ` [PATCH v2 1/4] SVM: use VMCB accessors Jan Beulich
  2017-06-07 14:04 ` [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS() Jan Beulich
@ 2017-06-07 14:07 ` Jan Beulich
  2017-06-07 17:08   ` Boris Ostrovsky
  2017-06-07 14:07 ` [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid() Jan Beulich
  3 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-06-07 14:07 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit


- constify parameter
- use accessors
- drop stray casts
- adjust formatting

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -26,60 +26,52 @@ static void svm_dump_sel(const char *nam
            name, s->sel, s->attr.bytes, s->limit, s->base);
 }
 
-/* This function can directly access fields which are covered by clean bits. */
-void svm_vmcb_dump(const char *from, struct vmcb_struct *vmcb)
+void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb)
 {
     printk("Dumping guest's current state at %s...\n", from);
-    printk("Size of VMCB = %d, paddr = %#lx, vaddr = %p\n",
-           (int) sizeof(struct vmcb_struct), virt_to_maddr(vmcb), vmcb);
+    printk("Size of VMCB = %zu, paddr = %"PRIpaddr", vaddr = %p\n",
+           sizeof(struct vmcb_struct), virt_to_maddr(vmcb), vmcb);
 
     printk("cr_intercepts = %#x dr_intercepts = %#x "
            "exception_intercepts = %#x\n",
-           vmcb->_cr_intercepts, vmcb->_dr_intercepts, 
-           vmcb->_exception_intercepts);
+           vmcb_get_cr_intercepts(vmcb), vmcb_get_dr_intercepts(vmcb),
+           vmcb_get_exception_intercepts(vmcb));
     printk("general1_intercepts = %#x general2_intercepts = %#x\n",
-           vmcb->_general1_intercepts, vmcb->_general2_intercepts);
-    printk("iopm_base_pa = %#Lx msrpm_base_pa = %#Lx tsc_offset = %#Lx\n",
-           (unsigned long long)vmcb->_iopm_base_pa,
-           (unsigned long long)vmcb->_msrpm_base_pa,
-           (unsigned long long)vmcb->_tsc_offset);
-    printk("tlb_control = %#x vintr = %#Lx interrupt_shadow = %#Lx\n",
-           vmcb->tlb_control,
-           (unsigned long long)vmcb->_vintr.bytes,
-           (unsigned long long)vmcb->interrupt_shadow);
+           vmcb_get_general1_intercepts(vmcb), vmcb_get_general2_intercepts(vmcb));
+    printk("iopm_base_pa = %#"PRIx64" msrpm_base_pa = %#"PRIx64" tsc_offset = %#"PRIx64"\n",
+           vmcb_get_iopm_base_pa(vmcb), vmcb_get_msrpm_base_pa(vmcb),
+           vmcb_get_tsc_offset(vmcb));
+    printk("tlb_control = %#x vintr = %#"PRIx64" interrupt_shadow = %#"PRIx64"\n",
+           vmcb->tlb_control, vmcb_get_vintr(vmcb).bytes,
+           vmcb->interrupt_shadow);
     printk("eventinj %016"PRIx64", valid? %d, ec? %d, type %u, vector %#x\n",
            vmcb->eventinj.bytes, vmcb->eventinj.fields.v,
            vmcb->eventinj.fields.ev, vmcb->eventinj.fields.type,
            vmcb->eventinj.fields.vector);
-    printk("exitcode = %#Lx exitintinfo = %#Lx\n",
-           (unsigned long long)vmcb->exitcode,
-           (unsigned long long)vmcb->exitintinfo.bytes);
-    printk("exitinfo1 = %#Lx exitinfo2 = %#Lx \n",
-           (unsigned long long)vmcb->exitinfo1,
-           (unsigned long long)vmcb->exitinfo2);
-    printk("np_enable = %Lx guest_asid = %#x\n",
-           (unsigned long long)vmcb->_np_enable, vmcb->_guest_asid);
-    printk("cpl = %d efer = %#Lx star = %#Lx lstar = %#Lx\n",
-           vmcb->_cpl, (unsigned long long)vmcb->_efer,
-           (unsigned long long)vmcb->star, (unsigned long long)vmcb->lstar);
-    printk("CR0 = 0x%016llx CR2 = 0x%016llx\n",
-           (unsigned long long)vmcb->_cr0, (unsigned long long)vmcb->_cr2);
-    printk("CR3 = 0x%016llx CR4 = 0x%016llx\n", 
-           (unsigned long long)vmcb->_cr3, (unsigned long long)vmcb->_cr4);
-    printk("RSP = 0x%016llx  RIP = 0x%016llx\n", 
-           (unsigned long long)vmcb->rsp, (unsigned long long)vmcb->rip);
-    printk("RAX = 0x%016llx  RFLAGS=0x%016llx\n",
-           (unsigned long long)vmcb->rax, (unsigned long long)vmcb->rflags);
-    printk("DR6 = 0x%016llx, DR7 = 0x%016llx\n", 
-           (unsigned long long)vmcb->_dr6, (unsigned long long)vmcb->_dr7);
-    printk("CSTAR = 0x%016llx SFMask = 0x%016llx\n",
-           (unsigned long long)vmcb->cstar, 
-           (unsigned long long)vmcb->sfmask);
-    printk("KernGSBase = 0x%016llx PAT = 0x%016llx \n", 
-           (unsigned long long)vmcb->kerngsbase,
-           (unsigned long long)vmcb->_g_pat);
-    printk("H_CR3 = 0x%016llx CleanBits = %#x\n",
-           (unsigned long long)vmcb->_h_cr3, vmcb->cleanbits.bytes);
+    printk("exitcode = %#"PRIx64" exitintinfo = %#"PRIx64"\n",
+           vmcb->exitcode, vmcb->exitintinfo.bytes);
+    printk("exitinfo1 = %#"PRIx64" exitinfo2 = %#"PRIx64"\n",
+           vmcb->exitinfo1, vmcb->exitinfo2);
+    printk("np_enable = %#"PRIx64" guest_asid = %#x\n",
+           vmcb_get_np_enable(vmcb), vmcb_get_guest_asid(vmcb));
+    printk("cpl = %d efer = %#"PRIx64" star = %#"PRIx64" lstar = %#"PRIx64"\n",
+           vmcb_get_cpl(vmcb), vmcb_get_efer(vmcb), vmcb->star, vmcb->lstar);
+    printk("CR0 = 0x%016"PRIx64" CR2 = 0x%016"PRIx64"\n",
+           vmcb_get_cr0(vmcb), vmcb_get_cr2(vmcb));
+    printk("CR3 = 0x%016"PRIx64" CR4 = 0x%016"PRIx64"\n",
+           vmcb_get_cr3(vmcb), vmcb_get_cr4(vmcb));
+    printk("RSP = 0x%016"PRIx64"  RIP = 0x%016"PRIx64"\n",
+           vmcb->rsp, vmcb->rip);
+    printk("RAX = 0x%016"PRIx64"  RFLAGS=0x%016"PRIx64"\n",
+           vmcb->rax, vmcb->rflags);
+    printk("DR6 = 0x%016"PRIx64", DR7 = 0x%016"PRIx64"\n",
+           vmcb_get_dr6(vmcb), vmcb_get_dr7(vmcb));
+    printk("CSTAR = 0x%016"PRIx64" SFMask = 0x%016"PRIx64"\n",
+           vmcb->cstar, vmcb->sfmask);
+    printk("KernGSBase = 0x%016"PRIx64" PAT = 0x%016"PRIx64"\n",
+           vmcb->kerngsbase, vmcb_get_g_pat(vmcb));
+    printk("H_CR3 = 0x%016"PRIx64" CleanBits = %#x\n",
+           vmcb_get_h_cr3(vmcb), vmcb->cleanbits.bytes);
 
     /* print out all the selectors */
     printk("       sel attr  limit   base\n");
--- a/xen/include/asm-x86/hvm/svm/svmdebug.h
+++ b/xen/include/asm-x86/hvm/svm/svmdebug.h
@@ -22,7 +22,7 @@
 #include <asm/types.h>
 #include <asm/hvm/svm/vmcb.h>
 
-void svm_vmcb_dump(const char *from, struct vmcb_struct *vmcb);
+void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb);
 bool_t svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
                         bool_t verbose);
 



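The format string changes follow the standard <inttypes.h> pattern: print a
fixed-width integer through its PRIx64 macro instead of casting every
argument to unsigned long long, and print a size_t with %zu. A small hosted-C
illustration (plain printf here; the patch itself shows Xen's printk
accepting the same specifiers):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t cr3 = 0x1234000;   /* arbitrary example value */
    size_t sz = 4096;           /* placeholder; the patch prints
                                   sizeof(struct vmcb_struct) here */

    /* Old style: cast to unsigned long long and use %llx. */
    printf("CR3 = 0x%016llx\n", (unsigned long long)cr3);

    /* New style: let PRIx64 expand to the right conversion, and use %zu
     * for size_t, so no casts are needed. */
    printf("CR3 = 0x%016"PRIx64", size = %zu\n", cr3, sz);

    return 0;
}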

* [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid()
  2017-06-07 13:58 [PATCH v2 0/4] SVM: misc cleanup Jan Beulich
                   ` (2 preceding siblings ...)
  2017-06-07 14:07 ` [PATCH v2 3/4] SVM: clean up svm_vmcb_dump() Jan Beulich
@ 2017-06-07 14:07 ` Jan Beulich
  2017-06-07 17:11   ` Boris Ostrovsky
  3 siblings, 1 reply; 9+ messages in thread
From: Jan Beulich @ 2017-06-07 14:07 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Boris Ostrovsky, Suravee Suthikulpanit


- correct CR3, CR4, and EFER checks
- delete bogus nested paging check
- add vcpu parameter (to include in log messages) and constify vmcb one
- use bool/true/false
- use accessors
- adjust formatting

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Constrain CR3 checks to the case when CR0.PG is set. Change wording
    of CR4 related message and also log valid bit mask there. Tighten
    EFER checks. Delete bogus nested paging check. Correct indentation
    in a few places.

--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -658,13 +658,13 @@ static int nsvm_vmcb_prepare4vmrun(struc
     /* Cleanbits */
     n2vmcb->cleanbits.bytes = 0;
 
-    rc = svm_vmcb_isvalid(__func__, ns_vmcb, 1);
+    rc = svm_vmcb_isvalid(__func__, ns_vmcb, v, true);
     if (rc) {
         gdprintk(XENLOG_ERR, "virtual vmcb invalid\n");
         return NSVM_ERROR_VVMCB;
     }
 
-    rc = svm_vmcb_isvalid(__func__, n2vmcb, 1);
+    rc = svm_vmcb_isvalid(__func__, n2vmcb, v, true);
     if (rc) {
         gdprintk(XENLOG_ERR, "n2vmcb invalid\n");
         return NSVM_ERROR_VMENTRY;
--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -16,6 +16,7 @@
  *
  */
 
+#include <xen/sched.h>
 #include <asm/processor.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/svm/svmdebug.h>
@@ -87,93 +88,79 @@ void svm_vmcb_dump(const char *from, con
     svm_dump_sel("  TR", &vmcb->tr);
 }
 
-bool_t
-svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
-                 bool_t verbose)
+bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
+                      const struct vcpu *v, bool verbose)
 {
-    bool_t ret = 0; /* ok */
+    bool ret = false; /* ok */
 
-#define PRINTF(...) \
-    if (verbose) { ret = 1; printk("%s: ", from); printk(__VA_ARGS__); \
-    } else return 1;
+#define PRINTF(fmt, args...) do { \
+    if ( !verbose ) return true; \
+    ret = true; \
+    printk(XENLOG_GUEST "%pv[%s]: " fmt, v, from, ## args); \
+} while (0)
 
-    if ((vmcb->_efer & EFER_SVME) == 0) {
-        PRINTF("EFER: SVME bit not set (%#"PRIx64")\n", vmcb->_efer);
-    }
+    if ( !(vmcb_get_efer(vmcb) & EFER_SVME) )
+        PRINTF("EFER: SVME bit not set (%#"PRIx64")\n", vmcb_get_efer(vmcb));
 
-    if ((vmcb->_cr0 & X86_CR0_CD) == 0 && (vmcb->_cr0 & X86_CR0_NW) != 0) {
+    if ( !(vmcb_get_cr0(vmcb) & X86_CR0_CD) && (vmcb_get_cr0(vmcb) & X86_CR0_NW) )
         PRINTF("CR0: CD bit is zero and NW bit set (%#"PRIx64")\n",
-                vmcb->_cr0);
-    }
+                vmcb_get_cr0(vmcb));
 
-    if ((vmcb->_cr0 >> 32U) != 0) {
+    if ( vmcb_get_cr0(vmcb) >> 32 )
         PRINTF("CR0: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_cr0);
-    }
+                vmcb_get_cr0(vmcb));
 
-    if ((vmcb->_cr3 & 0x7) != 0) {
-        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb->_cr3);
-    }
-    if ((vmcb->_efer & EFER_LMA) && (vmcb->_cr3 & 0xfe) != 0) {
-        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb->_cr3);
-    }
-
-    if ((vmcb->_cr4 >> 19U) != 0) {
-        PRINTF("CR4: bits [63:19] are not zero (%#"PRIx64")\n",
-                vmcb->_cr4);
-    }
-
-    if (((vmcb->_cr4 >> 11U) & 0x7fU) != 0) {
-        PRINTF("CR4: bits [17:11] are not zero (%#"PRIx64")\n",
-                vmcb->_cr4);
-    }
+    if ( (vmcb_get_cr0(vmcb) & X86_CR0_PG) &&
+         ((vmcb_get_cr3(vmcb) & 0x7) ||
+          ((!(vmcb_get_cr4(vmcb) & X86_CR4_PAE) ||
+            (vmcb_get_efer(vmcb) & EFER_LMA)) &&
+           (vmcb_get_cr3(vmcb) & 0xfe0)) ||
+          ((vmcb_get_efer(vmcb) & EFER_LMA) &&
+           (vmcb_get_cr3(vmcb) >> v->domain->arch.cpuid->extd.maxphysaddr))) )
+        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb_get_cr3(vmcb));
+
+    if ( vmcb_get_cr4(vmcb) & ~hvm_cr4_guest_valid_bits(v, false) )
+        PRINTF("CR4: invalid bits are set (%#"PRIx64", valid: %#"PRIx64")\n",
+               vmcb_get_cr4(vmcb), hvm_cr4_guest_valid_bits(v, false));
 
-    if ((vmcb->_dr6 >> 32U) != 0) {
+    if ( vmcb_get_dr6(vmcb) >> 32 )
         PRINTF("DR6: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_dr6);
-    }
+               vmcb_get_dr6(vmcb));
 
-    if ((vmcb->_dr7 >> 32U) != 0) {
+    if ( vmcb_get_dr7(vmcb) >> 32 )
         PRINTF("DR7: bits [63:32] are not zero (%#"PRIx64")\n",
-                vmcb->_dr7);
-    }
+               vmcb_get_dr7(vmcb));
 
-    if ((vmcb->_efer >> 15U) != 0) {
-        PRINTF("EFER: bits [63:15] are not zero (%#"PRIx64")\n",
-                vmcb->_efer);
-    }
+    if ( vmcb_get_efer(vmcb) & ~(EFER_SCE | EFER_LME | EFER_LMA | EFER_NX |
+                                 EFER_SVME | EFER_LMSLE | EFER_FFXSE) )
+        PRINTF("EFER: undefined bits are not zero (%#"PRIx64")\n",
+               vmcb_get_efer(vmcb));
 
-    if ((vmcb->_efer & EFER_LME) != 0 && ((vmcb->_cr0 & X86_CR0_PG) != 0)) {
-        if ((vmcb->_cr4 & X86_CR4_PAE) == 0) {
-            PRINTF("EFER_LME and CR0.PG are both set and CR4.PAE is zero.\n");
-        }
-        if ((vmcb->_cr0 & X86_CR0_PE) == 0) {
-            PRINTF("EFER_LME and CR0.PG are both set and CR0.PE is zero.\n");
-        }
-    }
+    if ( hvm_efer_valid(v, vmcb_get_efer(vmcb), -1) )
+        PRINTF("EFER: %s\n", hvm_efer_valid(v, vmcb_get_efer(vmcb), -1));
 
-    if ((vmcb->_efer & EFER_LME) != 0
-        && (vmcb->_cr0 & X86_CR0_PG) != 0
-        && (vmcb->_cr4 & X86_CR4_PAE) != 0
-        && (vmcb->cs.attr.fields.l != 0)
-        && (vmcb->cs.attr.fields.db != 0))
+    if ( (vmcb_get_efer(vmcb) & EFER_LME) && (vmcb_get_cr0(vmcb) & X86_CR0_PG) )
     {
-        PRINTF("EFER_LME, CR0.PG, CR4.PAE, CS.L and CS.D are all non-zero.\n");
+        if ( !(vmcb_get_cr4(vmcb) & X86_CR4_PAE) )
+            PRINTF("EFER_LME and CR0.PG are both set and CR4.PAE is zero\n");
+        if ( !(vmcb_get_cr0(vmcb) & X86_CR0_PE) )
+            PRINTF("EFER_LME and CR0.PG are both set and CR0.PE is zero\n");
     }
 
-    if ((vmcb->_general2_intercepts & GENERAL2_INTERCEPT_VMRUN) == 0) {
+    if ( (vmcb_get_efer(vmcb) & EFER_LME) &&
+         (vmcb_get_cr0(vmcb) & X86_CR0_PG) &&
+         (vmcb_get_cr4(vmcb) & X86_CR4_PAE) &&
+         vmcb->cs.attr.fields.l &&
+         vmcb->cs.attr.fields.db )
+        PRINTF("EFER_LME, CR0.PG, CR4.PAE, CS.L and CS.D are all non-zero\n");
+
+    if ( !(vmcb_get_general2_intercepts(vmcb) & GENERAL2_INTERCEPT_VMRUN) )
         PRINTF("GENERAL2_INTERCEPT: VMRUN intercept bit is clear (%#"PRIx32")\n",
-            vmcb->_general2_intercepts);
-    }
+               vmcb_get_general2_intercepts(vmcb));
 
-    if (vmcb->eventinj.fields.resvd1 != 0) {
+    if ( vmcb->eventinj.fields.resvd1 )
         PRINTF("eventinj: MBZ bits are set (%#"PRIx64")\n",
-                vmcb->eventinj.bytes);
-    }
-
-    if (vmcb->_np_enable && vmcb->_h_cr3 == 0) {
-        PRINTF("nested paging enabled but host cr3 is 0\n");
-    }
+               vmcb->eventinj.bytes);
 
 #undef PRINTF
     return ret;
--- a/xen/include/asm-x86/hvm/svm/svmdebug.h
+++ b/xen/include/asm-x86/hvm/svm/svmdebug.h
@@ -23,7 +23,7 @@
 #include <asm/hvm/svm/vmcb.h>
 
 void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb);
-bool_t svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
-                        bool_t verbose);
+bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
+                      const struct vcpu *v, bool verbose);
 
 #endif /* __ASM_X86_HVM_SVM_SVMDEBUG_H__ */



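A side note on the reworked PRINTF(): wrapping the body in do { ... } while (0)
makes the macro behave like a single statement, so it composes safely with
surrounding if/else, which the old open-coded if/else form did not. A small
stand-alone illustration of the idiom (invented names, plain printf):

#include <stdio.h>

static int verbose = 1;

/* Statement-like macro: safe under an un-braced if, and the caller's
 * trailing semicolon terminates exactly one statement. */
#define REPORT(msg) do {                    \
    if ( !verbose )                         \
        return 1;                           \
    printf("%s\n", msg);                    \
} while (0)

static int check(int value)
{
    if ( value < 0 )
        REPORT("value is negative");    /* expands to a single statement */
    else
        printf("value %d ok\n", value); /* binds to the intended if */

    return 0;
}

int main(void)
{
    check(-1);
    check(2);
    return 0;
}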

* Re: [PATCH v2 1/4] SVM: use VMCB accessors
  2017-06-07 14:04 ` [PATCH v2 1/4] SVM: use VMCB accessors Jan Beulich
@ 2017-06-07 17:04   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2017-06-07 17:04 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Suravee Suthikulpanit

On 06/07/2017 10:04 AM, Jan Beulich wrote:
> This is particularly relevant for the SET form, to ensure proper clean
> bits tracking (albeit in the case here it's benign as CPL and other
> segment register attributes share a clean bit).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS()
  2017-06-07 14:04 ` [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS() Jan Beulich
@ 2017-06-07 17:04   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2017-06-07 17:04 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Suravee Suthikulpanit

On 06/07/2017 10:04 AM, Jan Beulich wrote:
> Prevent accidental mistakes by not requiring explicit types to be
> specified in the macro invocations.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


* Re: [PATCH v2 3/4] SVM: clean up svm_vmcb_dump()
  2017-06-07 14:07 ` [PATCH v2 3/4] SVM: clean up svm_vmcb_dump() Jan Beulich
@ 2017-06-07 17:08   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2017-06-07 17:08 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Suravee Suthikulpanit

On 06/07/2017 10:07 AM, Jan Beulich wrote:
> - constify parameter
> - use accessors
> - drop stray casts
> - adjust formatting
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




* Re: [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid()
  2017-06-07 14:07 ` [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid() Jan Beulich
@ 2017-06-07 17:11   ` Boris Ostrovsky
  0 siblings, 0 replies; 9+ messages in thread
From: Boris Ostrovsky @ 2017-06-07 17:11 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Andrew Cooper, Suravee Suthikulpanit


>  
> -bool_t
> -svm_vmcb_isvalid(const char *from, struct vmcb_struct *vmcb,
> -                 bool_t verbose)
> +bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
> +                      const struct vcpu *v, bool verbose)
>  {
> -    bool_t ret = 0; /* ok */
> +    bool ret = false; /* ok */

I think stashing the control registers and EFER (and maybe more) into local
variables will make the code much more readable.


> +    if ( (vmcb_get_cr0(vmcb) & X86_CR0_PG) &&
> +         ((vmcb_get_cr3(vmcb) & 0x7) ||
> +          ((!(vmcb_get_cr4(vmcb) & X86_CR4_PAE) ||
> +            (vmcb_get_efer(vmcb) & EFER_LMA)) &&
> +           (vmcb_get_cr3(vmcb) & 0xfe0)) ||
> +          ((vmcb_get_efer(vmcb) & EFER_LMA) &&
> +           (vmcb_get_cr3(vmcb) >> v->domain->arch.cpuid->extd.maxphysaddr))) )
> +        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", vmcb_get_cr3(vmcb));

For example, this fragment took me a while to mentally unwrap. Shorter
names should allow you to group them better per line.

(It might also generate less code, depending on the compiler.)

-boris
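
For illustration, the quoted fragment reshaped with locals as suggested above
(a sketch only, assuming the surrounding svm_vmcb_isvalid() context from the
patch; not taken from any later revision):

    uint64_t efer = vmcb_get_efer(vmcb);
    uint64_t cr0 = vmcb_get_cr0(vmcb);
    uint64_t cr3 = vmcb_get_cr3(vmcb);
    uint64_t cr4 = vmcb_get_cr4(vmcb);

    if ( (cr0 & X86_CR0_PG) &&
         ((cr3 & 0x7) ||
          ((!(cr4 & X86_CR4_PAE) || (efer & EFER_LMA)) && (cr3 & 0xfe0)) ||
          ((efer & EFER_LMA) &&
           (cr3 >> v->domain->arch.cpuid->extd.maxphysaddr))) )
        PRINTF("CR3: MBZ bits are set (%#"PRIx64")\n", cr3);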




Thread overview: 9+ messages
2017-06-07 13:58 [PATCH v2 0/4] SVM: misc cleanup Jan Beulich
2017-06-07 14:04 ` [PATCH v2 1/4] SVM: use VMCB accessors Jan Beulich
2017-06-07 17:04   ` Boris Ostrovsky
2017-06-07 14:04 ` [PATCH v2 2/4] SVM: infer type in VMCB_ACCESSORS() Jan Beulich
2017-06-07 17:04   ` Boris Ostrovsky
2017-06-07 14:07 ` [PATCH v2 3/4] SVM: clean up svm_vmcb_dump() Jan Beulich
2017-06-07 17:08   ` Boris Ostrovsky
2017-06-07 14:07 ` [PATCH v2 4/4] SVM: clean up svm_vmcb_isvalid() Jan Beulich
2017-06-07 17:11   ` Boris Ostrovsky
