* [PATCH 0/7] x86: Structure naming and consistency improvements
@ 2018-08-28 17:38 Andrew Cooper
  2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
                   ` (6 more replies)
  0 siblings, 7 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:38 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Andrew Cooper, Tim Deegan,
	Julien Grall, Paul Durrant, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monné

This series started out as an attempt to address the bug in patch 7, and
ballooned somewhat.

It is semi-RFC because I expect there might be some objection in principle to
a series this invasive, but I can't find any less invasive way of making the
changes.  In particular, we can't use #defines to stage the changes over time,
because they interfere with local variables of the same name (see the sketch
below).
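
As a rough illustration (not code from this series), here is a minimal
sketch of the problem with a hypothetical staging alias, using simplified
stand-ins for the real structures:

    struct pv_vcpu { unsigned long iopl; };

    struct arch_vcpu { struct pv_vcpu pv; };  /* field already renamed */

    #define pv_vcpu pv                        /* hypothetical staging alias */

    unsigned long get_iopl(const struct arch_vcpu *arch)
    {
        /* Old-style member access still builds: expands to arch->pv.iopl */
        return arch->pv_vcpu.iopl;
    }

The alias keeps old-style member accesses building, but the preprocessor
substitutes every occurrence of the token: existing code such as
"struct pv_vcpu *pv = &n->arch.pv_vcpu;" expands to
"struct pv *pv = &n->arch.pv;", mangling the struct tag, and a local
actually named pv_vcpu would be silently renamed to pv, clashing with any
variable already using that name.  Hence the flag-day rename.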

Overall, I think the series is enough of an improvement for it to be
considered.

Andrew Cooper (7):
  x86/pv: Rename d->arch.pv_domain to d->arch.pv
  x86/pv: Rename v->arch.pv_vcpu to v->arch.pv
  xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  x86/svm: Rename arch_svm_struct to svm_vcpu
  x86/hvm: Drop hvm_{vmx,svm} shorthands

 xen/arch/arm/domain_build.c             |   2 +-
 xen/arch/arm/hvm.c                      |   4 +-
 xen/arch/x86/cpu/amd.c                  |   6 +-
 xen/arch/x86/cpu/intel.c                |   6 +-
 xen/arch/x86/cpu/vpmu.c                 |   2 +-
 xen/arch/x86/cpuid.c                    |  12 +-
 xen/arch/x86/domain.c                   |  96 +++----
 xen/arch/x86/domain_page.c              |  14 +-
 xen/arch/x86/domctl.c                   | 104 ++++----
 xen/arch/x86/hvm/asid.c                 |   2 +-
 xen/arch/x86/hvm/dm.c                   |  12 +-
 xen/arch/x86/hvm/dom0_build.c           |   4 +-
 xen/arch/x86/hvm/domain.c               |  40 +--
 xen/arch/x86/hvm/emulate.c              |  28 +-
 xen/arch/x86/hvm/hpet.c                 |  10 +-
 xen/arch/x86/hvm/hvm.c                  | 307 +++++++++++-----------
 xen/arch/x86/hvm/hypercall.c            |   6 +-
 xen/arch/x86/hvm/intercept.c            |  14 +-
 xen/arch/x86/hvm/io.c                   |  60 ++---
 xen/arch/x86/hvm/ioreq.c                |  84 +++---
 xen/arch/x86/hvm/irq.c                  |  56 ++--
 xen/arch/x86/hvm/mtrr.c                 |  36 +--
 xen/arch/x86/hvm/pmtimer.c              |  42 +--
 xen/arch/x86/hvm/rtc.c                  |   4 +-
 xen/arch/x86/hvm/save.c                 |   6 +-
 xen/arch/x86/hvm/stdvga.c               |  18 +-
 xen/arch/x86/hvm/svm/asid.c             |   4 +-
 xen/arch/x86/hvm/svm/emulate.c          |   4 +-
 xen/arch/x86/hvm/svm/intr.c             |   8 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |  72 +++--
 xen/arch/x86/hvm/svm/svm.c              | 248 +++++++++---------
 xen/arch/x86/hvm/svm/vmcb.c             |  16 +-
 xen/arch/x86/hvm/vioapic.c              |  44 ++--
 xen/arch/x86/hvm/viridian.c             | 120 ++++-----
 xen/arch/x86/hvm/vlapic.c               |   8 +-
 xen/arch/x86/hvm/vmsi.c                 |  44 ++--
 xen/arch/x86/hvm/vmx/intr.c             |  20 +-
 xen/arch/x86/hvm/vmx/realmode.c         |  28 +-
 xen/arch/x86/hvm/vmx/vmcs.c             | 174 ++++++-------
 xen/arch/x86/hvm/vmx/vmx.c              | 448 ++++++++++++++++----------------
 xen/arch/x86/hvm/vmx/vvmx.c             |  96 +++----
 xen/arch/x86/hvm/vpic.c                 |  20 +-
 xen/arch/x86/hvm/vpt.c                  |  94 +++----
 xen/arch/x86/i387.c                     |   2 +-
 xen/arch/x86/irq.c                      |  10 +-
 xen/arch/x86/mm.c                       |  24 +-
 xen/arch/x86/mm/hap/guest_walk.c        |   2 +-
 xen/arch/x86/mm/hap/hap.c               |  15 +-
 xen/arch/x86/mm/mem_sharing.c           |   6 +-
 xen/arch/x86/mm/p2m-ept.c               |   6 +-
 xen/arch/x86/mm/shadow/common.c         |  18 +-
 xen/arch/x86/mm/shadow/multi.c          |  20 +-
 xen/arch/x86/physdev.c                  |  11 +-
 xen/arch/x86/pv/callback.c              |  42 +--
 xen/arch/x86/pv/descriptor-tables.c     |  18 +-
 xen/arch/x86/pv/dom0_build.c            |   8 +-
 xen/arch/x86/pv/domain.c                |  65 +++--
 xen/arch/x86/pv/emul-gate-op.c          |   4 +-
 xen/arch/x86/pv/emul-priv-op.c          |  42 +--
 xen/arch/x86/pv/iret.c                  |  10 +-
 xen/arch/x86/pv/misc-hypercalls.c       |   4 +-
 xen/arch/x86/pv/mm.c                    |  10 +-
 xen/arch/x86/pv/traps.c                 |  10 +-
 xen/arch/x86/setup.c                    |  10 +-
 xen/arch/x86/time.c                     |  16 +-
 xen/arch/x86/traps.c                    |  24 +-
 xen/arch/x86/x86_64/asm-offsets.c       |  48 ++--
 xen/arch/x86/x86_64/entry.S             |   2 +-
 xen/arch/x86/x86_64/mm.c                |  10 +-
 xen/arch/x86/x86_64/traps.c             |  18 +-
 xen/arch/x86/x86_emulate.c              |   2 +-
 xen/common/vm_event.c                   |   2 +-
 xen/drivers/passthrough/io.c            |   2 +-
 xen/drivers/passthrough/pci.c           |   2 +-
 xen/drivers/vpci/msix.c                 |   6 +-
 xen/include/asm-arm/domain.h            |   2 +-
 xen/include/asm-x86/domain.h            |  16 +-
 xen/include/asm-x86/flushtlb.h          |   2 +-
 xen/include/asm-x86/guest_pt.h          |   2 +-
 xen/include/asm-x86/hvm/domain.h        |   2 +-
 xen/include/asm-x86/hvm/hvm.h           |  31 ++-
 xen/include/asm-x86/hvm/irq.h           |   2 +-
 xen/include/asm-x86/hvm/nestedhvm.h     |   6 +-
 xen/include/asm-x86/hvm/svm/asid.h      |   2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |   2 +-
 xen/include/asm-x86/hvm/svm/vmcb.h      |   2 +-
 xen/include/asm-x86/hvm/vcpu.h          |  10 +-
 xen/include/asm-x86/hvm/vioapic.h       |   2 +-
 xen/include/asm-x86/hvm/vlapic.h        |   6 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h      |   4 +-
 xen/include/asm-x86/hvm/vmx/vmx.h       |   2 +-
 xen/include/asm-x86/hvm/vpt.h           |   4 +-
 xen/include/asm-x86/irq.h               |   3 +-
 xen/include/asm-x86/ldt.h               |   2 +-
 xen/include/asm-x86/pv/traps.h          |   2 +-
 xen/include/asm-x86/shadow.h            |   4 +-
 96 files changed, 1485 insertions(+), 1515 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-29  7:45   ` Wei Liu
  2018-08-29 15:55   ` Jan Beulich
  2018-08-28 17:39 ` [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv Andrew Cooper
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Tim Deegan, Jan Beulich,
	Roger Pau Monné

The trailing _domain suffix is redundant, but adds to code volume.  Drop it.

Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
where applicable.
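
For reference, the wrappers mentioned above combine the free with clearing
the field in one statement.  A minimal sketch of the pattern (not the
actual Xen definitions) looks like:

    #define XFREE(p) do {                   \
        xfree(p);                           \
        (p) = NULL;                         \
    } while ( 0 )

    #define FREE_XENHEAP_PAGE(p) do {       \
        free_xenheap_page(p);               \
        (p) = NULL;                         \
    } while ( 0 )

so e.g. the pv_domain_destroy() hunk below drops the separate "= NULL"
statements without changing behaviour.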

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tim Deegan <tim@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/arch/x86/cpu/amd.c         |  4 ++--
 xen/arch/x86/cpu/intel.c       |  4 ++--
 xen/arch/x86/domain.c          |  2 +-
 xen/arch/x86/domain_page.c     |  8 ++++----
 xen/arch/x86/domctl.c          | 10 +++++-----
 xen/arch/x86/mm.c              | 14 +++++++-------
 xen/arch/x86/pv/dom0_build.c   |  4 ++--
 xen/arch/x86/pv/domain.c       | 35 ++++++++++++++++-------------------
 xen/include/asm-x86/domain.h   |  4 ++--
 xen/include/asm-x86/flushtlb.h |  2 +-
 xen/include/asm-x86/shadow.h   |  4 ++--
 11 files changed, 44 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index a7afa2f..e0ee114 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -208,8 +208,8 @@ static void amd_ctxt_switch_masking(const struct vcpu *next)
 	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
 	const struct domain *nextd = next ? next->domain : NULL;
 	const struct cpuidmasks *masks =
-		(nextd && is_pv_domain(nextd) && nextd->arch.pv_domain.cpuidmasks)
-		? nextd->arch.pv_domain.cpuidmasks : &cpuidmask_defaults;
+		(nextd && is_pv_domain(nextd) && nextd->arch.pv.cpuidmasks)
+		? nextd->arch.pv.cpuidmasks : &cpuidmask_defaults;
 
 	if ((levelling_caps & LCAP_1cd) == LCAP_1cd) {
 		uint64_t val = masks->_1cd;
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 377beef..8c375c8 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -119,8 +119,8 @@ static void intel_ctxt_switch_masking(const struct vcpu *next)
 	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
 	const struct domain *nextd = next ? next->domain : NULL;
 	const struct cpuidmasks *masks =
-		(nextd && is_pv_domain(nextd) && nextd->arch.pv_domain.cpuidmasks)
-		? nextd->arch.pv_domain.cpuidmasks : &cpuidmask_defaults;
+		(nextd && is_pv_domain(nextd) && nextd->arch.pv.cpuidmasks)
+		? nextd->arch.pv.cpuidmasks : &cpuidmask_defaults;
 
         if (msr_basic) {
 		uint64_t val = masks->_1cd;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index eb1e93f..8c7ddf5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -970,7 +970,7 @@ int arch_set_info_guest(
         if ( d != current->domain && !VM_ASSIST(d, m2p_strict) &&
              is_pv_domain(d) && !is_pv_32bit_domain(d) &&
              test_bit(VMASST_TYPE_m2p_strict, &c.nat->vm_assist) &&
-             atomic_read(&d->arch.pv_domain.nr_l4_pages) )
+             atomic_read(&d->arch.pv.nr_l4_pages) )
         {
             bool done = false;
 
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 0c24530..15ba6f5 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -85,7 +85,7 @@ void *map_domain_page(mfn_t mfn)
     if ( !v || !is_pv_vcpu(v) )
         return mfn_to_virt(mfn_x(mfn));
 
-    dcache = &v->domain->arch.pv_domain.mapcache;
+    dcache = &v->domain->arch.pv.mapcache;
     vcache = &v->arch.pv_vcpu.mapcache;
     if ( !dcache->inuse )
         return mfn_to_virt(mfn_x(mfn));
@@ -189,7 +189,7 @@ void unmap_domain_page(const void *ptr)
     v = mapcache_current_vcpu();
     ASSERT(v && is_pv_vcpu(v));
 
-    dcache = &v->domain->arch.pv_domain.mapcache;
+    dcache = &v->domain->arch.pv.mapcache;
     ASSERT(dcache->inuse);
 
     idx = PFN_DOWN(va - MAPCACHE_VIRT_START);
@@ -233,7 +233,7 @@ void unmap_domain_page(const void *ptr)
 
 int mapcache_domain_init(struct domain *d)
 {
-    struct mapcache_domain *dcache = &d->arch.pv_domain.mapcache;
+    struct mapcache_domain *dcache = &d->arch.pv.mapcache;
     unsigned int bitmap_pages;
 
     ASSERT(is_pv_domain(d));
@@ -261,7 +261,7 @@ int mapcache_domain_init(struct domain *d)
 int mapcache_vcpu_init(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct mapcache_domain *dcache = &d->arch.pv_domain.mapcache;
+    struct mapcache_domain *dcache = &d->arch.pv.mapcache;
     unsigned long i;
     unsigned int ents = d->max_vcpus * MAPCACHE_VCPU_ENTRIES;
     unsigned int nr = PFN_UP(BITS_TO_LONGS(ents) * sizeof(long));
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 6f1c43e..e27e971 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -194,7 +194,7 @@ static int update_domain_cpuid_info(struct domain *d,
                 break;
             }
 
-            d->arch.pv_domain.cpuidmasks->_1cd = mask;
+            d->arch.pv.cpuidmasks->_1cd = mask;
         }
         break;
 
@@ -206,7 +206,7 @@ static int update_domain_cpuid_info(struct domain *d,
             if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
                 mask &= (~0ULL << 32) | ctl->ecx;
 
-            d->arch.pv_domain.cpuidmasks->_6c = mask;
+            d->arch.pv.cpuidmasks->_6c = mask;
         }
         break;
 
@@ -223,7 +223,7 @@ static int update_domain_cpuid_info(struct domain *d,
             if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
                 mask &= ((uint64_t)eax << 32) | ebx;
 
-            d->arch.pv_domain.cpuidmasks->_7ab0 = mask;
+            d->arch.pv.cpuidmasks->_7ab0 = mask;
         }
 
         /*
@@ -262,7 +262,7 @@ static int update_domain_cpuid_info(struct domain *d,
             if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
                 mask &= (~0ULL << 32) | eax;
 
-            d->arch.pv_domain.cpuidmasks->Da1 = mask;
+            d->arch.pv.cpuidmasks->Da1 = mask;
         }
         break;
 
@@ -305,7 +305,7 @@ static int update_domain_cpuid_info(struct domain *d,
                 break;
             }
 
-            d->arch.pv_domain.cpuidmasks->e1cd = mask;
+            d->arch.pv.cpuidmasks->e1cd = mask;
         }
         break;
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8ac4412..26c4551 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -501,7 +501,7 @@ void make_cr3(struct vcpu *v, mfn_t mfn)
     struct domain *d = v->domain;
 
     v->arch.cr3 = mfn_x(mfn) << PAGE_SHIFT;
-    if ( is_pv_domain(d) && d->arch.pv_domain.pcid )
+    if ( is_pv_domain(d) && d->arch.pv.pcid )
         v->arch.cr3 |= get_pcid_bits(v, false);
 }
 
@@ -514,9 +514,9 @@ unsigned long pv_guest_cr4_to_real_cr4(const struct vcpu *v)
     cr4 |= mmu_cr4_features & (X86_CR4_PSE | X86_CR4_SMEP | X86_CR4_SMAP |
                                X86_CR4_OSXSAVE | X86_CR4_FSGSBASE);
 
-    if ( d->arch.pv_domain.pcid )
+    if ( d->arch.pv.pcid )
         cr4 |= X86_CR4_PCIDE;
-    else if ( !d->arch.pv_domain.xpti )
+    else if ( !d->arch.pv.xpti )
         cr4 |= X86_CR4_PGE;
 
     cr4 |= d->arch.vtsc ? X86_CR4_TSD : 0;
@@ -533,7 +533,7 @@ void write_ptbase(struct vcpu *v)
               ? pv_guest_cr4_to_real_cr4(v)
               : ((read_cr4() & ~(X86_CR4_PCIDE | X86_CR4_TSD)) | X86_CR4_PGE);
 
-    if ( is_pv_vcpu(v) && v->domain->arch.pv_domain.xpti )
+    if ( is_pv_vcpu(v) && v->domain->arch.pv.xpti )
     {
         cpu_info->root_pgt_changed = true;
         cpu_info->pv_cr3 = __pa(this_cpu(root_pgt));
@@ -1753,7 +1753,7 @@ static int alloc_l4_table(struct page_info *page)
     {
         init_xen_l4_slots(pl4e, _mfn(pfn),
                           d, INVALID_MFN, VM_ASSIST(d, m2p_strict));
-        atomic_inc(&d->arch.pv_domain.nr_l4_pages);
+        atomic_inc(&d->arch.pv.nr_l4_pages);
         rc = 0;
     }
     unmap_domain_page(pl4e);
@@ -1873,7 +1873,7 @@ static int free_l4_table(struct page_info *page)
 
     if ( rc >= 0 )
     {
-        atomic_dec(&d->arch.pv_domain.nr_l4_pages);
+        atomic_dec(&d->arch.pv.nr_l4_pages);
         rc = 0;
     }
 
@@ -3784,7 +3784,7 @@ long do_mmu_update(
                         break;
                     rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
-                    if ( !rc && pt_owner->arch.pv_domain.xpti )
+                    if ( !rc && pt_owner->arch.pv.xpti )
                     {
                         bool local_in_use = false;
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 34c77bc..078288b 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -387,8 +387,8 @@ int __init dom0_construct_pv(struct domain *d,
     if ( compat32 )
     {
         d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 1;
-        d->arch.pv_domain.xpti = false;
-        d->arch.pv_domain.pcid = false;
+        d->arch.pv.xpti = false;
+        d->arch.pv.pcid = false;
         v->vcpu_info = (void *)&d->shared_info->compat.vcpu_info[0];
         if ( setup_compat_arg_xlat(v) != 0 )
             BUG();
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 52108d4..454d580 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -122,8 +122,8 @@ int switch_compat(struct domain *d)
 
     d->arch.x87_fip_width = 4;
 
-    d->arch.pv_domain.xpti = false;
-    d->arch.pv_domain.pcid = false;
+    d->arch.pv.xpti = false;
+    d->arch.pv.pcid = false;
 
     return 0;
 
@@ -142,7 +142,7 @@ static int pv_create_gdt_ldt_l1tab(struct vcpu *v)
 {
     return create_perdomain_mapping(v->domain, GDT_VIRT_START(v),
                                     1U << GDT_LDT_VCPU_SHIFT,
-                                    v->domain->arch.pv_domain.gdt_ldt_l1tab,
+                                    v->domain->arch.pv.gdt_ldt_l1tab,
                                     NULL);
 }
 
@@ -215,11 +215,9 @@ void pv_domain_destroy(struct domain *d)
     destroy_perdomain_mapping(d, GDT_LDT_VIRT_START,
                               GDT_LDT_MBYTES << (20 - PAGE_SHIFT));
 
-    xfree(d->arch.pv_domain.cpuidmasks);
-    d->arch.pv_domain.cpuidmasks = NULL;
+    XFREE(d->arch.pv.cpuidmasks);
 
-    free_xenheap_page(d->arch.pv_domain.gdt_ldt_l1tab);
-    d->arch.pv_domain.gdt_ldt_l1tab = NULL;
+    FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
 }
 
 
@@ -234,14 +232,14 @@ int pv_domain_initialise(struct domain *d)
 
     pv_l1tf_domain_init(d);
 
-    d->arch.pv_domain.gdt_ldt_l1tab =
+    d->arch.pv.gdt_ldt_l1tab =
         alloc_xenheap_pages(0, MEMF_node(domain_to_node(d)));
-    if ( !d->arch.pv_domain.gdt_ldt_l1tab )
+    if ( !d->arch.pv.gdt_ldt_l1tab )
         goto fail;
-    clear_page(d->arch.pv_domain.gdt_ldt_l1tab);
+    clear_page(d->arch.pv.gdt_ldt_l1tab);
 
     if ( levelling_caps & ~LCAP_faulting &&
-         (d->arch.pv_domain.cpuidmasks = xmemdup(&cpuidmask_defaults)) == NULL )
+         (d->arch.pv.cpuidmasks = xmemdup(&cpuidmask_defaults)) == NULL )
         goto fail;
 
     rc = create_perdomain_mapping(d, GDT_LDT_VIRT_START,
@@ -255,8 +253,8 @@ int pv_domain_initialise(struct domain *d)
     /* 64-bit PV guest by default. */
     d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
 
-    d->arch.pv_domain.xpti = opt_xpti & (is_hardware_domain(d)
-                                         ? OPT_XPTI_DOM0 : OPT_XPTI_DOMU);
+    d->arch.pv.xpti = opt_xpti & (is_hardware_domain(d)
+                                  ? OPT_XPTI_DOM0 : OPT_XPTI_DOMU);
 
     if ( !is_pv_32bit_domain(d) && use_invpcid && cpu_has_pcid )
         switch ( opt_pcid )
@@ -265,15 +263,15 @@ int pv_domain_initialise(struct domain *d)
             break;
 
         case PCID_ALL:
-            d->arch.pv_domain.pcid = true;
+            d->arch.pv.pcid = true;
             break;
 
         case PCID_XPTI:
-            d->arch.pv_domain.pcid = d->arch.pv_domain.xpti;
+            d->arch.pv.pcid = d->arch.pv.xpti;
             break;
 
         case PCID_NOXPTI:
-            d->arch.pv_domain.pcid = !d->arch.pv_domain.xpti;
+            d->arch.pv.pcid = !d->arch.pv.xpti;
             break;
 
         default:
@@ -295,14 +293,13 @@ static void _toggle_guest_pt(struct vcpu *v)
 
     v->arch.flags ^= TF_kernel_mode;
     update_cr3(v);
-    if ( d->arch.pv_domain.xpti )
+    if ( d->arch.pv.xpti )
     {
         struct cpu_info *cpu_info = get_cpu_info();
 
         cpu_info->root_pgt_changed = true;
         cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
-                           (d->arch.pv_domain.pcid
-                            ? get_pcid_bits(v, true) : 0);
+                           (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
     }
 
     /* Don't flush user global mappings from the TLB. Don't tick TLB clock. */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 09f6b3d..0c75c02 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -305,7 +305,7 @@ struct arch_domain
     struct list_head pdev_list;
 
     union {
-        struct pv_domain pv_domain;
+        struct pv_domain pv;
         struct hvm_domain hvm_domain;
     };
 
@@ -458,7 +458,7 @@ struct arch_domain
 #define gdt_ldt_pt_idx(v) \
       ((v)->vcpu_id >> (PAGETABLE_ORDER - GDT_LDT_VCPU_SHIFT))
 #define pv_gdt_ptes(v) \
-    ((v)->domain->arch.pv_domain.gdt_ldt_l1tab[gdt_ldt_pt_idx(v)] + \
+    ((v)->domain->arch.pv.gdt_ldt_l1tab[gdt_ldt_pt_idx(v)] + \
      (((v)->vcpu_id << GDT_LDT_VCPU_SHIFT) & (L1_PAGETABLE_ENTRIES - 1)))
 #define pv_ldt_ptes(v) (pv_gdt_ptes(v) + 16)
 
diff --git a/xen/include/asm-x86/flushtlb.h b/xen/include/asm-x86/flushtlb.h
index ed5f45e..434821a 100644
--- a/xen/include/asm-x86/flushtlb.h
+++ b/xen/include/asm-x86/flushtlb.h
@@ -138,7 +138,7 @@ void flush_area_mask(const cpumask_t *, const void *va, unsigned int flags);
 
 #define flush_root_pgtbl_domain(d)                                       \
 {                                                                        \
-    if ( is_pv_domain(d) && (d)->arch.pv_domain.xpti )                   \
+    if ( is_pv_domain(d) && (d)->arch.pv.xpti )                          \
         flush_mask((d)->dirty_cpumask, FLUSH_ROOT_PGTBL);                \
 }
 
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index f40f411..b3ebe56 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -169,7 +169,7 @@ static inline bool pv_l1tf_check_pte(struct domain *d, unsigned int level,
     ASSERT(is_pv_domain(d));
     ASSERT(!(pte & _PAGE_PRESENT));
 
-    if ( d->arch.pv_domain.check_l1tf && !paging_mode_sh_forced(d) &&
+    if ( d->arch.pv.check_l1tf && !paging_mode_sh_forced(d) &&
          (((level > 1) && (pte & _PAGE_PSE)) || !is_l1tf_safe_maddr(pte)) )
     {
 #ifdef CONFIG_SHADOW_PAGING
@@ -224,7 +224,7 @@ void pv_l1tf_tasklet(unsigned long data);
 
 static inline void pv_l1tf_domain_init(struct domain *d)
 {
-    d->arch.pv_domain.check_l1tf =
+    d->arch.pv.check_l1tf =
         opt_pv_l1tf & (is_hardware_domain(d)
                        ? OPT_PV_L1TF_DOM0 : OPT_PV_L1TF_DOMU);
 
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
  2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-29  7:53   ` Wei Liu
  2018-08-29 16:01   ` Jan Beulich
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.

Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
where applicable.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/amd.c              |  2 +-
 xen/arch/x86/cpu/intel.c            |  2 +-
 xen/arch/x86/cpuid.c                |  6 +--
 xen/arch/x86/domain.c               | 88 ++++++++++++++++++-------------------
 xen/arch/x86/domain_page.c          |  6 +--
 xen/arch/x86/domctl.c               | 76 ++++++++++++++++----------------
 xen/arch/x86/i387.c                 |  2 +-
 xen/arch/x86/mm.c                   | 10 ++---
 xen/arch/x86/physdev.c              |  9 ++--
 xen/arch/x86/pv/callback.c          | 42 +++++++++---------
 xen/arch/x86/pv/descriptor-tables.c | 18 ++++----
 xen/arch/x86/pv/dom0_build.c        |  4 +-
 xen/arch/x86/pv/domain.c            | 30 ++++++-------
 xen/arch/x86/pv/emul-gate-op.c      |  4 +-
 xen/arch/x86/pv/emul-priv-op.c      | 42 +++++++++---------
 xen/arch/x86/pv/iret.c              | 10 ++---
 xen/arch/x86/pv/misc-hypercalls.c   |  4 +-
 xen/arch/x86/pv/mm.c                | 10 ++---
 xen/arch/x86/pv/traps.c             | 10 ++---
 xen/arch/x86/time.c                 |  2 +-
 xen/arch/x86/traps.c                | 24 +++++-----
 xen/arch/x86/x86_64/asm-offsets.c   | 28 ++++++------
 xen/arch/x86/x86_64/entry.S         |  2 +-
 xen/arch/x86/x86_64/mm.c            | 10 ++---
 xen/arch/x86/x86_64/traps.c         | 10 ++---
 xen/arch/x86/x86_emulate.c          |  2 +-
 xen/include/asm-x86/domain.h        |  2 +-
 xen/include/asm-x86/ldt.h           |  2 +-
 xen/include/asm-x86/pv/traps.h      |  2 +-
 29 files changed, 227 insertions(+), 232 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index e0ee114..c394c1c 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -220,7 +220,7 @@ static void amd_ctxt_switch_masking(const struct vcpu *next)
 		 * kernel.
 		 */
 		if (next && is_pv_vcpu(next) && !is_idle_vcpu(next) &&
-		    !(next->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE))
+		    !(next->arch.pv.ctrlreg[4] & X86_CR4_OSXSAVE))
 			val &= ~((uint64_t)cpufeat_mask(X86_FEATURE_OSXSAVE) << 32);
 
 		if (unlikely(these_masks->_1cd != val)) {
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 8c375c8..65fa3d6 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -131,7 +131,7 @@ static void intel_ctxt_switch_masking(const struct vcpu *next)
 		 * kernel.
 		 */
 		if (next && is_pv_vcpu(next) && !is_idle_vcpu(next) &&
-		    !(next->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE))
+		    !(next->arch.pv.ctrlreg[4] & X86_CR4_OSXSAVE))
 			val &= ~(uint64_t)cpufeat_mask(X86_FEATURE_OSXSAVE);
 
 		if (unlikely(these_masks->_1cd != val)) {
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 88694ed..24366ea 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -841,7 +841,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
              *
              * Architecturally, the correct code here is simply:
              *
-             *   if ( v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE )
+             *   if ( v->arch.pv.ctrlreg[4] & X86_CR4_OSXSAVE )
              *       c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
              *
              * However because of bugs in Xen (before c/s bd19080b, Nov 2010,
@@ -887,7 +887,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
              *    #UD or #GP is currently being serviced.
              */
             /* OSXSAVE clear in policy.  Fast-forward CR4 back in. */
-            if ( (v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE) ||
+            if ( (v->arch.pv.ctrlreg[4] & X86_CR4_OSXSAVE) ||
                  (regs->entry_vector == TRAP_invalid_op &&
                   guest_kernel_mode(v, regs) &&
                   (read_cr4() & X86_CR4_OSXSAVE)) )
@@ -959,7 +959,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         case 0:
             /* OSPKE clear in policy.  Fast-forward CR4 back in. */
             if ( (is_pv_domain(d)
-                  ? v->arch.pv_vcpu.ctrlreg[4]
+                  ? v->arch.pv.ctrlreg[4]
                   : v->arch.hvm_vcpu.guest_cr[4]) & X86_CR4_PKE )
                 res->c |= cpufeat_mask(X86_FEATURE_OSPKE);
             break;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 8c7ddf5..4cdcd5d 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -849,7 +849,7 @@ int arch_set_info_guest(
     {
         memcpy(&v->arch.user_regs, &c.nat->user_regs, sizeof(c.nat->user_regs));
         if ( is_pv_domain(d) )
-            memcpy(v->arch.pv_vcpu.trap_ctxt, c.nat->trap_ctxt,
+            memcpy(v->arch.pv.trap_ctxt, c.nat->trap_ctxt,
                    sizeof(c.nat->trap_ctxt));
     }
     else
@@ -858,7 +858,7 @@ int arch_set_info_guest(
         if ( is_pv_domain(d) )
         {
             for ( i = 0; i < ARRAY_SIZE(c.cmp->trap_ctxt); ++i )
-                XLAT_trap_info(v->arch.pv_vcpu.trap_ctxt + i,
+                XLAT_trap_info(v->arch.pv.trap_ctxt + i,
                                c.cmp->trap_ctxt + i);
         }
     }
@@ -873,7 +873,7 @@ int arch_set_info_guest(
     }
 
     /* IOPL privileges are virtualised. */
-    v->arch.pv_vcpu.iopl = v->arch.user_regs.eflags & X86_EFLAGS_IOPL;
+    v->arch.pv.iopl = v->arch.user_regs.eflags & X86_EFLAGS_IOPL;
     v->arch.user_regs.eflags &= ~X86_EFLAGS_IOPL;
 
     /* Ensure real hardware interrupts are enabled. */
@@ -884,8 +884,8 @@ int arch_set_info_guest(
         if ( !compat && !(flags & VGCF_in_kernel) && !c.nat->ctrlreg[1] )
             return -EINVAL;
 
-        v->arch.pv_vcpu.ldt_base = c(ldt_base);
-        v->arch.pv_vcpu.ldt_ents = c(ldt_ents);
+        v->arch.pv.ldt_base = c(ldt_base);
+        v->arch.pv.ldt_ents = c(ldt_ents);
     }
     else
     {
@@ -910,47 +910,47 @@ int arch_set_info_guest(
             fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
         }
 
-        for ( i = 0; i < ARRAY_SIZE(v->arch.pv_vcpu.gdt_frames); ++i )
-            fail |= v->arch.pv_vcpu.gdt_frames[i] != c(gdt_frames[i]);
-        fail |= v->arch.pv_vcpu.gdt_ents != c(gdt_ents);
+        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
+            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
+        fail |= v->arch.pv.gdt_ents != c(gdt_ents);
 
-        fail |= v->arch.pv_vcpu.ldt_base != c(ldt_base);
-        fail |= v->arch.pv_vcpu.ldt_ents != c(ldt_ents);
+        fail |= v->arch.pv.ldt_base != c(ldt_base);
+        fail |= v->arch.pv.ldt_ents != c(ldt_ents);
 
         if ( fail )
            return -EOPNOTSUPP;
     }
 
-    v->arch.pv_vcpu.kernel_ss = c(kernel_ss);
-    v->arch.pv_vcpu.kernel_sp = c(kernel_sp);
-    for ( i = 0; i < ARRAY_SIZE(v->arch.pv_vcpu.ctrlreg); ++i )
-        v->arch.pv_vcpu.ctrlreg[i] = c(ctrlreg[i]);
+    v->arch.pv.kernel_ss = c(kernel_ss);
+    v->arch.pv.kernel_sp = c(kernel_sp);
+    for ( i = 0; i < ARRAY_SIZE(v->arch.pv.ctrlreg); ++i )
+        v->arch.pv.ctrlreg[i] = c(ctrlreg[i]);
 
-    v->arch.pv_vcpu.event_callback_eip = c(event_callback_eip);
-    v->arch.pv_vcpu.failsafe_callback_eip = c(failsafe_callback_eip);
+    v->arch.pv.event_callback_eip = c(event_callback_eip);
+    v->arch.pv.failsafe_callback_eip = c(failsafe_callback_eip);
     if ( !compat )
     {
-        v->arch.pv_vcpu.syscall_callback_eip = c.nat->syscall_callback_eip;
+        v->arch.pv.syscall_callback_eip = c.nat->syscall_callback_eip;
         /* non-nul selector kills fs_base */
-        v->arch.pv_vcpu.fs_base =
+        v->arch.pv.fs_base =
             !(v->arch.user_regs.fs & ~3) ? c.nat->fs_base : 0;
-        v->arch.pv_vcpu.gs_base_kernel = c.nat->gs_base_kernel;
+        v->arch.pv.gs_base_kernel = c.nat->gs_base_kernel;
         /* non-nul selector kills gs_base_user */
-        v->arch.pv_vcpu.gs_base_user =
+        v->arch.pv.gs_base_user =
             !(v->arch.user_regs.gs & ~3) ? c.nat->gs_base_user : 0;
     }
     else
     {
-        v->arch.pv_vcpu.event_callback_cs = c(event_callback_cs);
-        v->arch.pv_vcpu.failsafe_callback_cs = c(failsafe_callback_cs);
+        v->arch.pv.event_callback_cs = c(event_callback_cs);
+        v->arch.pv.failsafe_callback_cs = c(failsafe_callback_cs);
     }
 
     /* Only CR0.TS is modifiable by guest or admin. */
-    v->arch.pv_vcpu.ctrlreg[0] &= X86_CR0_TS;
-    v->arch.pv_vcpu.ctrlreg[0] |= read_cr0() & ~X86_CR0_TS;
+    v->arch.pv.ctrlreg[0] &= X86_CR0_TS;
+    v->arch.pv.ctrlreg[0] |= read_cr0() & ~X86_CR0_TS;
 
-    cr4 = v->arch.pv_vcpu.ctrlreg[4];
-    v->arch.pv_vcpu.ctrlreg[4] = cr4 ? pv_guest_cr4_fixup(v, cr4) :
+    cr4 = v->arch.pv.ctrlreg[4];
+    v->arch.pv.ctrlreg[4] = cr4 ? pv_guest_cr4_fixup(v, cr4) :
         real_cr4_to_pv_guest_cr4(mmu_cr4_features);
 
     memset(v->arch.debugreg, 0, sizeof(v->arch.debugreg));
@@ -1012,10 +1012,10 @@ int arch_set_info_guest(
         rc = (int)pv_set_gdt(v, c.nat->gdt_frames, c.nat->gdt_ents);
     else
     {
-        unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv_vcpu.gdt_frames)];
+        unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv.gdt_frames)];
         unsigned int nr_frames = DIV_ROUND_UP(c.cmp->gdt_ents, 512);
 
-        if ( nr_frames > ARRAY_SIZE(v->arch.pv_vcpu.gdt_frames) )
+        if ( nr_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
             return -EINVAL;
 
         for ( i = 0; i < nr_frames; ++i )
@@ -1319,20 +1319,20 @@ static void load_segments(struct vcpu *n)
     if ( !is_pv_32bit_vcpu(n) )
     {
         /* This can only be non-zero if selector is NULL. */
-        if ( n->arch.pv_vcpu.fs_base | (dirty_segment_mask & DIRTY_FS_BASE) )
-            wrfsbase(n->arch.pv_vcpu.fs_base);
+        if ( n->arch.pv.fs_base | (dirty_segment_mask & DIRTY_FS_BASE) )
+            wrfsbase(n->arch.pv.fs_base);
 
         /*
          * Most kernels have non-zero GS base, so don't bother testing.
          * (For old AMD hardware this is also a serialising instruction,
          * avoiding erratum #88.)
          */
-        wrgsshadow(n->arch.pv_vcpu.gs_base_kernel);
+        wrgsshadow(n->arch.pv.gs_base_kernel);
 
         /* This can only be non-zero if selector is NULL. */
-        if ( n->arch.pv_vcpu.gs_base_user |
+        if ( n->arch.pv.gs_base_user |
              (dirty_segment_mask & DIRTY_GS_BASE) )
-            wrgsbase(n->arch.pv_vcpu.gs_base_user);
+            wrgsbase(n->arch.pv.gs_base_user);
 
         /* If in kernel mode then switch the GS bases around. */
         if ( (n->arch.flags & TF_kernel_mode) )
@@ -1341,7 +1341,7 @@ static void load_segments(struct vcpu *n)
 
     if ( unlikely(!all_segs_okay) )
     {
-        struct pv_vcpu *pv = &n->arch.pv_vcpu;
+        struct pv_vcpu *pv = &n->arch.pv;
         struct cpu_user_regs *regs = guest_cpu_user_regs();
         unsigned long *rsp =
             (unsigned long *)(((n->arch.flags & TF_kernel_mode)
@@ -1352,7 +1352,7 @@ static void load_segments(struct vcpu *n)
         rflags  = regs->rflags & ~(X86_EFLAGS_IF|X86_EFLAGS_IOPL);
         rflags |= !vcpu_info(n, evtchn_upcall_mask) << 9;
         if ( VM_ASSIST(n->domain, architectural_iopl) )
-            rflags |= n->arch.pv_vcpu.iopl;
+            rflags |= n->arch.pv.iopl;
 
         if ( is_pv_32bit_vcpu(n) )
         {
@@ -1450,11 +1450,11 @@ static void save_segments(struct vcpu *v)
 
     if ( cpu_has_fsgsbase && !is_pv_32bit_vcpu(v) )
     {
-        v->arch.pv_vcpu.fs_base = __rdfsbase();
+        v->arch.pv.fs_base = __rdfsbase();
         if ( v->arch.flags & TF_kernel_mode )
-            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
+            v->arch.pv.gs_base_kernel = __rdgsbase();
         else
-            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
+            v->arch.pv.gs_base_user = __rdgsbase();
     }
 
     if ( regs->ds )
@@ -1468,9 +1468,9 @@ static void save_segments(struct vcpu *v)
         dirty_segment_mask |= DIRTY_FS;
         /* non-nul selector kills fs_base */
         if ( regs->fs & ~3 )
-            v->arch.pv_vcpu.fs_base = 0;
+            v->arch.pv.fs_base = 0;
     }
-    if ( v->arch.pv_vcpu.fs_base )
+    if ( v->arch.pv.fs_base )
         dirty_segment_mask |= DIRTY_FS_BASE;
 
     if ( regs->gs || is_pv_32bit_vcpu(v) )
@@ -1478,10 +1478,10 @@ static void save_segments(struct vcpu *v)
         dirty_segment_mask |= DIRTY_GS;
         /* non-nul selector kills gs_base_user */
         if ( regs->gs & ~3 )
-            v->arch.pv_vcpu.gs_base_user = 0;
+            v->arch.pv.gs_base_user = 0;
     }
-    if ( v->arch.flags & TF_kernel_mode ? v->arch.pv_vcpu.gs_base_kernel
-                                        : v->arch.pv_vcpu.gs_base_user )
+    if ( v->arch.flags & TF_kernel_mode ? v->arch.pv.gs_base_kernel
+                                        : v->arch.pv.gs_base_user )
         dirty_segment_mask |= DIRTY_GS_BASE;
 
     this_cpu(dirty_segment_mask) = dirty_segment_mask;
@@ -1571,7 +1571,7 @@ static void _update_runstate_area(struct vcpu *v)
 {
     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
          !(v->arch.flags & TF_kernel_mode) )
-        v->arch.pv_vcpu.need_update_runstate_area = 1;
+        v->arch.pv.need_update_runstate_area = 1;
 }
 
 static inline bool need_full_gdt(const struct domain *d)
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index 15ba6f5..5c30c62 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -86,7 +86,7 @@ void *map_domain_page(mfn_t mfn)
         return mfn_to_virt(mfn_x(mfn));
 
     dcache = &v->domain->arch.pv.mapcache;
-    vcache = &v->arch.pv_vcpu.mapcache;
+    vcache = &v->arch.pv.mapcache;
     if ( !dcache->inuse )
         return mfn_to_virt(mfn_x(mfn));
 
@@ -194,7 +194,7 @@ void unmap_domain_page(const void *ptr)
 
     idx = PFN_DOWN(va - MAPCACHE_VIRT_START);
     mfn = l1e_get_pfn(MAPCACHE_L1ENT(idx));
-    hashent = &v->arch.pv_vcpu.mapcache.hash[MAPHASH_HASHFN(mfn)];
+    hashent = &v->arch.pv.mapcache.hash[MAPHASH_HASHFN(mfn)];
 
     local_irq_save(flags);
 
@@ -293,7 +293,7 @@ int mapcache_vcpu_init(struct vcpu *v)
     BUILD_BUG_ON(MAPHASHENT_NOTINUSE < MAPCACHE_ENTRIES);
     for ( i = 0; i < MAPHASH_ENTRIES; i++ )
     {
-        struct vcpu_maphash_entry *hashent = &v->arch.pv_vcpu.mapcache.hash[i];
+        struct vcpu_maphash_entry *hashent = &v->arch.pv.mapcache.hash[i];
 
         hashent->mfn = ~0UL; /* never valid to map */
         hashent->idx = MAPHASHENT_NOTINUSE;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index e27e971..fdbcce0 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -856,17 +856,17 @@ long arch_do_domctl(
             if ( is_pv_domain(d) )
             {
                 evc->sysenter_callback_cs      =
-                    v->arch.pv_vcpu.sysenter_callback_cs;
+                    v->arch.pv.sysenter_callback_cs;
                 evc->sysenter_callback_eip     =
-                    v->arch.pv_vcpu.sysenter_callback_eip;
+                    v->arch.pv.sysenter_callback_eip;
                 evc->sysenter_disables_events  =
-                    v->arch.pv_vcpu.sysenter_disables_events;
+                    v->arch.pv.sysenter_disables_events;
                 evc->syscall32_callback_cs     =
-                    v->arch.pv_vcpu.syscall32_callback_cs;
+                    v->arch.pv.syscall32_callback_cs;
                 evc->syscall32_callback_eip    =
-                    v->arch.pv_vcpu.syscall32_callback_eip;
+                    v->arch.pv.syscall32_callback_eip;
                 evc->syscall32_disables_events =
-                    v->arch.pv_vcpu.syscall32_disables_events;
+                    v->arch.pv.syscall32_disables_events;
             }
             else
             {
@@ -900,18 +900,18 @@ long arch_do_domctl(
                     break;
                 domain_pause(d);
                 fixup_guest_code_selector(d, evc->sysenter_callback_cs);
-                v->arch.pv_vcpu.sysenter_callback_cs      =
+                v->arch.pv.sysenter_callback_cs =
                     evc->sysenter_callback_cs;
-                v->arch.pv_vcpu.sysenter_callback_eip     =
+                v->arch.pv.sysenter_callback_eip =
                     evc->sysenter_callback_eip;
-                v->arch.pv_vcpu.sysenter_disables_events  =
+                v->arch.pv.sysenter_disables_events =
                     evc->sysenter_disables_events;
                 fixup_guest_code_selector(d, evc->syscall32_callback_cs);
-                v->arch.pv_vcpu.syscall32_callback_cs     =
+                v->arch.pv.syscall32_callback_cs =
                     evc->syscall32_callback_cs;
-                v->arch.pv_vcpu.syscall32_callback_eip    =
+                v->arch.pv.syscall32_callback_eip =
                     evc->syscall32_callback_eip;
-                v->arch.pv_vcpu.syscall32_disables_events =
+                v->arch.pv.syscall32_disables_events =
                     evc->syscall32_disables_events;
             }
             else if ( (evc->sysenter_callback_cs & ~3) ||
@@ -1330,12 +1330,12 @@ long arch_do_domctl(
 
                 if ( boot_cpu_has(X86_FEATURE_DBEXT) )
                 {
-                    if ( v->arch.pv_vcpu.dr_mask[0] )
+                    if ( v->arch.pv.dr_mask[0] )
                     {
                         if ( i < vmsrs->msr_count && !ret )
                         {
                             msr.index = MSR_AMD64_DR0_ADDRESS_MASK;
-                            msr.value = v->arch.pv_vcpu.dr_mask[0];
+                            msr.value = v->arch.pv.dr_mask[0];
                             if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
                                 ret = -EFAULT;
                         }
@@ -1344,12 +1344,12 @@ long arch_do_domctl(
 
                     for ( j = 0; j < 3; ++j )
                     {
-                        if ( !v->arch.pv_vcpu.dr_mask[1 + j] )
+                        if ( !v->arch.pv.dr_mask[1 + j] )
                             continue;
                         if ( i < vmsrs->msr_count && !ret )
                         {
                             msr.index = MSR_AMD64_DR1_ADDRESS_MASK + j;
-                            msr.value = v->arch.pv_vcpu.dr_mask[1 + j];
+                            msr.value = v->arch.pv.dr_mask[1 + j];
                             if ( copy_to_guest_offset(vmsrs->msrs, i, &msr, 1) )
                                 ret = -EFAULT;
                         }
@@ -1394,7 +1394,7 @@ long arch_do_domctl(
                     if ( !boot_cpu_has(X86_FEATURE_DBEXT) ||
                          (msr.value >> 32) )
                         break;
-                    v->arch.pv_vcpu.dr_mask[0] = msr.value;
+                    v->arch.pv.dr_mask[0] = msr.value;
                     continue;
 
                 case MSR_AMD64_DR1_ADDRESS_MASK ...
@@ -1403,7 +1403,7 @@ long arch_do_domctl(
                          (msr.value >> 32) )
                         break;
                     msr.index -= MSR_AMD64_DR1_ADDRESS_MASK - 1;
-                    v->arch.pv_vcpu.dr_mask[msr.index] = msr.value;
+                    v->arch.pv.dr_mask[msr.index] = msr.value;
                     continue;
                 }
                 break;
@@ -1564,7 +1564,7 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
     {
         memcpy(&c.nat->user_regs, &v->arch.user_regs, sizeof(c.nat->user_regs));
         if ( is_pv_domain(d) )
-            memcpy(c.nat->trap_ctxt, v->arch.pv_vcpu.trap_ctxt,
+            memcpy(c.nat->trap_ctxt, v->arch.pv.trap_ctxt,
                    sizeof(c.nat->trap_ctxt));
     }
     else
@@ -1574,7 +1574,7 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
         {
             for ( i = 0; i < ARRAY_SIZE(c.cmp->trap_ctxt); ++i )
                 XLAT_trap_info(c.cmp->trap_ctxt + i,
-                               v->arch.pv_vcpu.trap_ctxt + i);
+                               v->arch.pv.trap_ctxt + i);
         }
     }
 
@@ -1615,37 +1615,37 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
     }
     else
     {
-        c(ldt_base = v->arch.pv_vcpu.ldt_base);
-        c(ldt_ents = v->arch.pv_vcpu.ldt_ents);
-        for ( i = 0; i < ARRAY_SIZE(v->arch.pv_vcpu.gdt_frames); ++i )
-            c(gdt_frames[i] = v->arch.pv_vcpu.gdt_frames[i]);
+        c(ldt_base = v->arch.pv.ldt_base);
+        c(ldt_ents = v->arch.pv.ldt_ents);
+        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
+            c(gdt_frames[i] = v->arch.pv.gdt_frames[i]);
         BUILD_BUG_ON(ARRAY_SIZE(c.nat->gdt_frames) !=
                      ARRAY_SIZE(c.cmp->gdt_frames));
         for ( ; i < ARRAY_SIZE(c.nat->gdt_frames); ++i )
             c(gdt_frames[i] = 0);
-        c(gdt_ents = v->arch.pv_vcpu.gdt_ents);
-        c(kernel_ss = v->arch.pv_vcpu.kernel_ss);
-        c(kernel_sp = v->arch.pv_vcpu.kernel_sp);
-        for ( i = 0; i < ARRAY_SIZE(v->arch.pv_vcpu.ctrlreg); ++i )
-            c(ctrlreg[i] = v->arch.pv_vcpu.ctrlreg[i]);
-        c(event_callback_eip = v->arch.pv_vcpu.event_callback_eip);
-        c(failsafe_callback_eip = v->arch.pv_vcpu.failsafe_callback_eip);
+        c(gdt_ents = v->arch.pv.gdt_ents);
+        c(kernel_ss = v->arch.pv.kernel_ss);
+        c(kernel_sp = v->arch.pv.kernel_sp);
+        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.ctrlreg); ++i )
+            c(ctrlreg[i] = v->arch.pv.ctrlreg[i]);
+        c(event_callback_eip = v->arch.pv.event_callback_eip);
+        c(failsafe_callback_eip = v->arch.pv.failsafe_callback_eip);
         if ( !compat )
         {
-            c.nat->syscall_callback_eip = v->arch.pv_vcpu.syscall_callback_eip;
-            c.nat->fs_base = v->arch.pv_vcpu.fs_base;
-            c.nat->gs_base_kernel = v->arch.pv_vcpu.gs_base_kernel;
-            c.nat->gs_base_user = v->arch.pv_vcpu.gs_base_user;
+            c.nat->syscall_callback_eip = v->arch.pv.syscall_callback_eip;
+            c.nat->fs_base = v->arch.pv.fs_base;
+            c.nat->gs_base_kernel = v->arch.pv.gs_base_kernel;
+            c.nat->gs_base_user = v->arch.pv.gs_base_user;
         }
         else
         {
-            c(event_callback_cs = v->arch.pv_vcpu.event_callback_cs);
-            c(failsafe_callback_cs = v->arch.pv_vcpu.failsafe_callback_cs);
+            c(event_callback_cs = v->arch.pv.event_callback_cs);
+            c(failsafe_callback_cs = v->arch.pv.failsafe_callback_cs);
         }
 
         /* IOPL privileges are virtualised: merge back into returned eflags. */
         BUG_ON((c(user_regs.eflags) & X86_EFLAGS_IOPL) != 0);
-        c(user_regs.eflags |= v->arch.pv_vcpu.iopl);
+        c(user_regs.eflags |= v->arch.pv.iopl);
 
         if ( !compat )
         {
diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index 00cf6bd..8817848 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -233,7 +233,7 @@ void vcpu_restore_fpu_nonlazy(struct vcpu *v, bool need_stts)
         v->fpu_dirtied = 1;
 
         /* Xen doesn't need TS set, but the guest might. */
-        need_stts = is_pv_vcpu(v) && (v->arch.pv_vcpu.ctrlreg[0] & X86_CR0_TS);
+        need_stts = is_pv_vcpu(v) && (v->arch.pv.ctrlreg[0] & X86_CR0_TS);
     }
     else
     {
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 26c4551..6a882e7 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -510,7 +510,7 @@ unsigned long pv_guest_cr4_to_real_cr4(const struct vcpu *v)
     const struct domain *d = v->domain;
     unsigned long cr4;
 
-    cr4 = v->arch.pv_vcpu.ctrlreg[4] & ~X86_CR4_DE;
+    cr4 = v->arch.pv.ctrlreg[4] & ~X86_CR4_DE;
     cr4 |= mmu_cr4_features & (X86_CR4_PSE | X86_CR4_SMEP | X86_CR4_SMAP |
                                X86_CR4_OSXSAVE | X86_CR4_FSGSBASE);
 
@@ -3468,14 +3468,14 @@ long do_mmuext_op(
                          "Bad args to SET_LDT: ptr=%lx, ents=%x\n", ptr, ents);
                 rc = -EINVAL;
             }
-            else if ( (curr->arch.pv_vcpu.ldt_ents != ents) ||
-                      (curr->arch.pv_vcpu.ldt_base != ptr) )
+            else if ( (curr->arch.pv.ldt_ents != ents) ||
+                      (curr->arch.pv.ldt_base != ptr) )
             {
                 if ( pv_destroy_ldt(curr) )
                     flush_tlb_local();
 
-                curr->arch.pv_vcpu.ldt_base = ptr;
-                curr->arch.pv_vcpu.ldt_ents = ents;
+                curr->arch.pv.ldt_base = ptr;
+                curr->arch.pv.ldt_ents = ents;
                 load_LDT(curr);
             }
             break;
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index b87ec90..4524823 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -412,7 +412,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( set_iopl.iopl > 3 )
             break;
         ret = 0;
-        curr->arch.pv_vcpu.iopl = MASK_INSR(set_iopl.iopl, X86_EFLAGS_IOPL);
+        curr->arch.pv.iopl = MASK_INSR(set_iopl.iopl, X86_EFLAGS_IOPL);
         break;
     }
 
@@ -429,12 +429,11 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             break;
         ret = 0;
 #ifndef COMPAT
-        curr->arch.pv_vcpu.iobmp = set_iobitmap.bitmap;
+        curr->arch.pv.iobmp = set_iobitmap.bitmap;
 #else
-        guest_from_compat_handle(curr->arch.pv_vcpu.iobmp,
-                                 set_iobitmap.bitmap);
+        guest_from_compat_handle(curr->arch.pv.iobmp, set_iobitmap.bitmap);
 #endif
-        curr->arch.pv_vcpu.iobmp_limit = set_iobitmap.nr_ports;
+        curr->arch.pv.iobmp_limit = set_iobitmap.nr_ports;
         break;
     }
 
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 394726a..1433230 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -35,7 +35,7 @@ static int register_guest_nmi_callback(unsigned long address)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct trap_info *t = &curr->arch.pv_vcpu.trap_ctxt[TRAP_nmi];
+    struct trap_info *t = &curr->arch.pv.trap_ctxt[TRAP_nmi];
 
     if ( !is_canonical_address(address) )
         return -EINVAL;
@@ -60,7 +60,7 @@ static int register_guest_nmi_callback(unsigned long address)
 static void unregister_guest_nmi_callback(void)
 {
     struct vcpu *curr = current;
-    struct trap_info *t = &curr->arch.pv_vcpu.trap_ctxt[TRAP_nmi];
+    struct trap_info *t = &curr->arch.pv.trap_ctxt[TRAP_nmi];
 
     memset(t, 0, sizeof(*t));
 }
@@ -76,11 +76,11 @@ static long register_guest_callback(struct callback_register *reg)
     switch ( reg->type )
     {
     case CALLBACKTYPE_event:
-        curr->arch.pv_vcpu.event_callback_eip    = reg->address;
+        curr->arch.pv.event_callback_eip    = reg->address;
         break;
 
     case CALLBACKTYPE_failsafe:
-        curr->arch.pv_vcpu.failsafe_callback_eip = reg->address;
+        curr->arch.pv.failsafe_callback_eip = reg->address;
         if ( reg->flags & CALLBACKF_mask_events )
             curr->arch.vgc_flags |= VGCF_failsafe_disables_events;
         else
@@ -88,7 +88,7 @@ static long register_guest_callback(struct callback_register *reg)
         break;
 
     case CALLBACKTYPE_syscall:
-        curr->arch.pv_vcpu.syscall_callback_eip  = reg->address;
+        curr->arch.pv.syscall_callback_eip  = reg->address;
         if ( reg->flags & CALLBACKF_mask_events )
             curr->arch.vgc_flags |= VGCF_syscall_disables_events;
         else
@@ -96,14 +96,14 @@ static long register_guest_callback(struct callback_register *reg)
         break;
 
     case CALLBACKTYPE_syscall32:
-        curr->arch.pv_vcpu.syscall32_callback_eip = reg->address;
-        curr->arch.pv_vcpu.syscall32_disables_events =
+        curr->arch.pv.syscall32_callback_eip = reg->address;
+        curr->arch.pv.syscall32_disables_events =
             !!(reg->flags & CALLBACKF_mask_events);
         break;
 
     case CALLBACKTYPE_sysenter:
-        curr->arch.pv_vcpu.sysenter_callback_eip = reg->address;
-        curr->arch.pv_vcpu.sysenter_disables_events =
+        curr->arch.pv.sysenter_callback_eip = reg->address;
+        curr->arch.pv.sysenter_disables_events =
             !!(reg->flags & CALLBACKF_mask_events);
         break;
 
@@ -218,13 +218,13 @@ static long compat_register_guest_callback(struct compat_callback_register *reg)
     switch ( reg->type )
     {
     case CALLBACKTYPE_event:
-        curr->arch.pv_vcpu.event_callback_cs     = reg->address.cs;
-        curr->arch.pv_vcpu.event_callback_eip    = reg->address.eip;
+        curr->arch.pv.event_callback_cs     = reg->address.cs;
+        curr->arch.pv.event_callback_eip    = reg->address.eip;
         break;
 
     case CALLBACKTYPE_failsafe:
-        curr->arch.pv_vcpu.failsafe_callback_cs  = reg->address.cs;
-        curr->arch.pv_vcpu.failsafe_callback_eip = reg->address.eip;
+        curr->arch.pv.failsafe_callback_cs  = reg->address.cs;
+        curr->arch.pv.failsafe_callback_eip = reg->address.eip;
         if ( reg->flags & CALLBACKF_mask_events )
             curr->arch.vgc_flags |= VGCF_failsafe_disables_events;
         else
@@ -232,16 +232,16 @@ static long compat_register_guest_callback(struct compat_callback_register *reg)
         break;
 
     case CALLBACKTYPE_syscall32:
-        curr->arch.pv_vcpu.syscall32_callback_cs     = reg->address.cs;
-        curr->arch.pv_vcpu.syscall32_callback_eip    = reg->address.eip;
-        curr->arch.pv_vcpu.syscall32_disables_events =
+        curr->arch.pv.syscall32_callback_cs     = reg->address.cs;
+        curr->arch.pv.syscall32_callback_eip    = reg->address.eip;
+        curr->arch.pv.syscall32_disables_events =
             (reg->flags & CALLBACKF_mask_events) != 0;
         break;
 
     case CALLBACKTYPE_sysenter:
-        curr->arch.pv_vcpu.sysenter_callback_cs     = reg->address.cs;
-        curr->arch.pv_vcpu.sysenter_callback_eip    = reg->address.eip;
-        curr->arch.pv_vcpu.sysenter_disables_events =
+        curr->arch.pv.sysenter_callback_cs     = reg->address.cs;
+        curr->arch.pv.sysenter_callback_eip    = reg->address.eip;
+        curr->arch.pv.sysenter_disables_events =
             (reg->flags & CALLBACKF_mask_events) != 0;
         break;
 
@@ -352,7 +352,7 @@ long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 {
     struct trap_info cur;
     struct vcpu *curr = current;
-    struct trap_info *dst = curr->arch.pv_vcpu.trap_ctxt;
+    struct trap_info *dst = curr->arch.pv.trap_ctxt;
     long rc = 0;
 
     /* If no table is presented then clear the entire virtual IDT. */
@@ -397,7 +397,7 @@ int compat_set_trap_table(XEN_GUEST_HANDLE(trap_info_compat_t) traps)
 {
     struct vcpu *curr = current;
     struct compat_trap_info cur;
-    struct trap_info *dst = curr->arch.pv_vcpu.trap_ctxt;
+    struct trap_info *dst = curr->arch.pv.trap_ctxt;
     long rc = 0;
 
     /* If no table is presented then clear the entire virtual IDT. */
diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
index 71bf927..9b84cbe 100644
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -37,9 +37,9 @@ bool pv_destroy_ldt(struct vcpu *v)
 
     ASSERT(!in_irq());
 
-    spin_lock(&v->arch.pv_vcpu.shadow_ldt_lock);
+    spin_lock(&v->arch.pv.shadow_ldt_lock);
 
-    if ( v->arch.pv_vcpu.shadow_ldt_mapcnt == 0 )
+    if ( v->arch.pv.shadow_ldt_mapcnt == 0 )
         goto out;
 
     pl1e = pv_ldt_ptes(v);
@@ -58,11 +58,11 @@ bool pv_destroy_ldt(struct vcpu *v)
         put_page_and_type(page);
     }
 
-    ASSERT(v->arch.pv_vcpu.shadow_ldt_mapcnt == mappings_dropped);
-    v->arch.pv_vcpu.shadow_ldt_mapcnt = 0;
+    ASSERT(v->arch.pv.shadow_ldt_mapcnt == mappings_dropped);
+    v->arch.pv.shadow_ldt_mapcnt = 0;
 
  out:
-    spin_unlock(&v->arch.pv_vcpu.shadow_ldt_lock);
+    spin_unlock(&v->arch.pv.shadow_ldt_lock);
 
     return mappings_dropped;
 }
@@ -74,7 +74,7 @@ void pv_destroy_gdt(struct vcpu *v)
     l1_pgentry_t zero_l1e = l1e_from_mfn(zero_mfn, __PAGE_HYPERVISOR_RO);
     unsigned int i;
 
-    v->arch.pv_vcpu.gdt_ents = 0;
+    v->arch.pv.gdt_ents = 0;
     for ( i = 0; i < FIRST_RESERVED_GDT_PAGE; i++ )
     {
         mfn_t mfn = l1e_get_mfn(pl1e[i]);
@@ -84,7 +84,7 @@ void pv_destroy_gdt(struct vcpu *v)
             put_page_and_type(mfn_to_page(mfn));
 
         l1e_write(&pl1e[i], zero_l1e);
-        v->arch.pv_vcpu.gdt_frames[i] = 0;
+        v->arch.pv.gdt_frames[i] = 0;
     }
 }
 
@@ -117,11 +117,11 @@ long pv_set_gdt(struct vcpu *v, unsigned long *frames, unsigned int entries)
     pv_destroy_gdt(v);
 
     /* Install the new GDT. */
-    v->arch.pv_vcpu.gdt_ents = entries;
+    v->arch.pv.gdt_ents = entries;
     pl1e = pv_gdt_ptes(v);
     for ( i = 0; i < nr_frames; i++ )
     {
-        v->arch.pv_vcpu.gdt_frames[i] = frames[i];
+        v->arch.pv.gdt_frames[i] = frames[i];
         l1e_write(&pl1e[i], l1e_from_pfn(frames[i], __PAGE_HYPERVISOR_RW));
     }
 
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 078288b..96ff0ee 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -589,8 +589,8 @@ int __init dom0_construct_pv(struct domain *d,
 
     if ( is_pv_32bit_domain(d) )
     {
-        v->arch.pv_vcpu.failsafe_callback_cs = FLAT_COMPAT_KERNEL_CS;
-        v->arch.pv_vcpu.event_callback_cs    = FLAT_COMPAT_KERNEL_CS;
+        v->arch.pv.failsafe_callback_cs = FLAT_COMPAT_KERNEL_CS;
+        v->arch.pv.event_callback_cs    = FLAT_COMPAT_KERNEL_CS;
     }
 
     /* WARNING: The new domain must have its 'processor' field filled in! */
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 454d580..d52e640 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -161,8 +161,7 @@ void pv_vcpu_destroy(struct vcpu *v)
     }
 
     pv_destroy_gdt_ldt_l1tab(v);
-    xfree(v->arch.pv_vcpu.trap_ctxt);
-    v->arch.pv_vcpu.trap_ctxt = NULL;
+    XFREE(v->arch.pv.trap_ctxt);
 }
 
 int pv_vcpu_initialise(struct vcpu *v)
@@ -172,17 +171,16 @@ int pv_vcpu_initialise(struct vcpu *v)
 
     ASSERT(!is_idle_domain(d));
 
-    spin_lock_init(&v->arch.pv_vcpu.shadow_ldt_lock);
+    spin_lock_init(&v->arch.pv.shadow_ldt_lock);
 
     rc = pv_create_gdt_ldt_l1tab(v);
     if ( rc )
         return rc;
 
-    BUILD_BUG_ON(NR_VECTORS * sizeof(*v->arch.pv_vcpu.trap_ctxt) >
+    BUILD_BUG_ON(NR_VECTORS * sizeof(*v->arch.pv.trap_ctxt) >
                  PAGE_SIZE);
-    v->arch.pv_vcpu.trap_ctxt = xzalloc_array(struct trap_info,
-                                              NR_VECTORS);
-    if ( !v->arch.pv_vcpu.trap_ctxt )
+    v->arch.pv.trap_ctxt = xzalloc_array(struct trap_info, NR_VECTORS);
+    if ( !v->arch.pv.trap_ctxt )
     {
         rc = -ENOMEM;
         goto done;
@@ -191,7 +189,7 @@ int pv_vcpu_initialise(struct vcpu *v)
     /* PV guests by default have a 100Hz ticker. */
     v->periodic_period = MILLISECS(10);
 
-    v->arch.pv_vcpu.ctrlreg[4] = real_cr4_to_pv_guest_cr4(mmu_cr4_features);
+    v->arch.pv.ctrlreg[4] = real_cr4_to_pv_guest_cr4(mmu_cr4_features);
 
     if ( is_pv_32bit_domain(d) )
     {
@@ -308,14 +306,12 @@ static void _toggle_guest_pt(struct vcpu *v)
     if ( !(v->arch.flags & TF_kernel_mode) )
         return;
 
-    if ( v->arch.pv_vcpu.need_update_runstate_area &&
-         update_runstate_area(v) )
-        v->arch.pv_vcpu.need_update_runstate_area = 0;
+    if ( v->arch.pv.need_update_runstate_area && update_runstate_area(v) )
+        v->arch.pv.need_update_runstate_area = 0;
 
-    if ( v->arch.pv_vcpu.pending_system_time.version &&
-         update_secondary_system_time(v,
-                                      &v->arch.pv_vcpu.pending_system_time) )
-        v->arch.pv_vcpu.pending_system_time.version = 0;
+    if ( v->arch.pv.pending_system_time.version &&
+         update_secondary_system_time(v, &v->arch.pv.pending_system_time) )
+        v->arch.pv.pending_system_time.version = 0;
 }
 
 void toggle_guest_mode(struct vcpu *v)
@@ -325,9 +321,9 @@ void toggle_guest_mode(struct vcpu *v)
     if ( cpu_has_fsgsbase )
     {
         if ( v->arch.flags & TF_kernel_mode )
-            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
+            v->arch.pv.gs_base_kernel = __rdgsbase();
         else
-            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
+            v->arch.pv.gs_base_user = __rdgsbase();
     }
     asm volatile ( "swapgs" );
 
diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
index 810c4f7..d1c8aa6 100644
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -324,8 +324,8 @@ void pv_emulate_gate_op(struct cpu_user_regs *regs)
                 pv_inject_hw_exception(TRAP_gp_fault, regs->error_code);
                 return;
             }
-            esp = v->arch.pv_vcpu.kernel_sp;
-            ss = v->arch.pv_vcpu.kernel_ss;
+            esp = v->arch.pv.kernel_sp;
+            ss = v->arch.pv.kernel_ss;
             if ( (ss & 3) != (sel & 3) ||
                  !pv_emul_read_descriptor(ss, v, &base, &limit, &ar, 0) ||
                  ((ar >> 13) & 3) != (sel & 3) ||
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 84f22ae..64ad28e 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -111,9 +111,9 @@ static bool iopl_ok(const struct vcpu *v, const struct cpu_user_regs *regs)
     unsigned int cpl = guest_kernel_mode(v, regs) ?
         (VM_ASSIST(v->domain, architectural_iopl) ? 0 : 1) : 3;
 
-    ASSERT((v->arch.pv_vcpu.iopl & ~X86_EFLAGS_IOPL) == 0);
+    ASSERT((v->arch.pv.iopl & ~X86_EFLAGS_IOPL) == 0);
 
-    return IOPL(cpl) <= v->arch.pv_vcpu.iopl;
+    return IOPL(cpl) <= v->arch.pv.iopl;
 }
 
 /* Has the guest requested sufficient permission for this I/O access? */
@@ -126,7 +126,7 @@ static bool guest_io_okay(unsigned int port, unsigned int bytes,
     if ( iopl_ok(v, regs) )
         return true;
 
-    if ( (port + bytes) <= v->arch.pv_vcpu.iobmp_limit )
+    if ( (port + bytes) <= v->arch.pv.iobmp_limit )
     {
         union { uint8_t bytes[2]; uint16_t mask; } x;
 
@@ -137,7 +137,7 @@ static bool guest_io_okay(unsigned int port, unsigned int bytes,
         if ( user_mode )
             toggle_guest_pt(v);
 
-        switch ( __copy_from_guest_offset(x.bytes, v->arch.pv_vcpu.iobmp,
+        switch ( __copy_from_guest_offset(x.bytes, v->arch.pv.iobmp,
                                           port>>3, 2) )
         {
         default: x.bytes[0] = ~0;
@@ -287,7 +287,7 @@ static unsigned int check_guest_io_breakpoint(struct vcpu *v,
     unsigned long start;
 
     if ( !(v->arch.debugreg[5]) ||
-         !(v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_DE) )
+         !(v->arch.pv.ctrlreg[4] & X86_CR4_DE) )
         return 0;
 
     for ( i = 0; i < 4; i++ )
@@ -701,12 +701,12 @@ static int read_cr(unsigned int reg, unsigned long *val,
     switch ( reg )
     {
     case 0: /* Read CR0 */
-        *val = (read_cr0() & ~X86_CR0_TS) | curr->arch.pv_vcpu.ctrlreg[0];
+        *val = (read_cr0() & ~X86_CR0_TS) | curr->arch.pv.ctrlreg[0];
         return X86EMUL_OKAY;
 
     case 2: /* Read CR2 */
     case 4: /* Read CR4 */
-        *val = curr->arch.pv_vcpu.ctrlreg[reg];
+        *val = curr->arch.pv.ctrlreg[reg];
         return X86EMUL_OKAY;
 
     case 3: /* Read CR3 */
@@ -755,7 +755,7 @@ static int write_cr(unsigned int reg, unsigned long val,
         return X86EMUL_OKAY;
 
     case 2: /* Write CR2 */
-        curr->arch.pv_vcpu.ctrlreg[2] = val;
+        curr->arch.pv.ctrlreg[2] = val;
         arch_set_cr2(curr, val);
         return X86EMUL_OKAY;
 
@@ -785,7 +785,7 @@ static int write_cr(unsigned int reg, unsigned long val,
     }
 
     case 4: /* Write CR4 */
-        curr->arch.pv_vcpu.ctrlreg[4] = pv_guest_cr4_fixup(curr, val);
+        curr->arch.pv.ctrlreg[4] = pv_guest_cr4_fixup(curr, val);
         write_cr4(pv_guest_cr4_to_real_cr4(curr));
         ctxt_switch_levelling(curr);
         return X86EMUL_OKAY;
@@ -834,20 +834,20 @@ static int read_msr(unsigned int reg, uint64_t *val,
     case MSR_FS_BASE:
         if ( is_pv_32bit_domain(currd) )
             break;
-        *val = cpu_has_fsgsbase ? __rdfsbase() : curr->arch.pv_vcpu.fs_base;
+        *val = cpu_has_fsgsbase ? __rdfsbase() : curr->arch.pv.fs_base;
         return X86EMUL_OKAY;
 
     case MSR_GS_BASE:
         if ( is_pv_32bit_domain(currd) )
             break;
         *val = cpu_has_fsgsbase ? __rdgsbase()
-                                : curr->arch.pv_vcpu.gs_base_kernel;
+                                : curr->arch.pv.gs_base_kernel;
         return X86EMUL_OKAY;
 
     case MSR_SHADOW_GS_BASE:
         if ( is_pv_32bit_domain(currd) )
             break;
-        *val = curr->arch.pv_vcpu.gs_base_user;
+        *val = curr->arch.pv.gs_base_user;
         return X86EMUL_OKAY;
 
     /*
@@ -918,13 +918,13 @@ static int read_msr(unsigned int reg, uint64_t *val,
     case MSR_AMD64_DR0_ADDRESS_MASK:
         if ( !boot_cpu_has(X86_FEATURE_DBEXT) )
             break;
-        *val = curr->arch.pv_vcpu.dr_mask[0];
+        *val = curr->arch.pv.dr_mask[0];
         return X86EMUL_OKAY;
 
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !boot_cpu_has(X86_FEATURE_DBEXT) )
             break;
-        *val = curr->arch.pv_vcpu.dr_mask[reg - MSR_AMD64_DR1_ADDRESS_MASK + 1];
+        *val = curr->arch.pv.dr_mask[reg - MSR_AMD64_DR1_ADDRESS_MASK + 1];
         return X86EMUL_OKAY;
 
     case MSR_IA32_PERF_CAPABILITIES:
@@ -996,21 +996,21 @@ static int write_msr(unsigned int reg, uint64_t val,
         if ( is_pv_32bit_domain(currd) || !is_canonical_address(val) )
             break;
         wrfsbase(val);
-        curr->arch.pv_vcpu.fs_base = val;
+        curr->arch.pv.fs_base = val;
         return X86EMUL_OKAY;
 
     case MSR_GS_BASE:
         if ( is_pv_32bit_domain(currd) || !is_canonical_address(val) )
             break;
         wrgsbase(val);
-        curr->arch.pv_vcpu.gs_base_kernel = val;
+        curr->arch.pv.gs_base_kernel = val;
         return X86EMUL_OKAY;
 
     case MSR_SHADOW_GS_BASE:
         if ( is_pv_32bit_domain(currd) || !is_canonical_address(val) )
             break;
         wrgsshadow(val);
-        curr->arch.pv_vcpu.gs_base_user = val;
+        curr->arch.pv.gs_base_user = val;
         return X86EMUL_OKAY;
 
     case MSR_K7_FID_VID_STATUS:
@@ -1115,7 +1115,7 @@ static int write_msr(unsigned int reg, uint64_t val,
     case MSR_AMD64_DR0_ADDRESS_MASK:
         if ( !boot_cpu_has(X86_FEATURE_DBEXT) || (val >> 32) )
             break;
-        curr->arch.pv_vcpu.dr_mask[0] = val;
+        curr->arch.pv.dr_mask[0] = val;
         if ( curr->arch.debugreg[7] & DR7_ACTIVE_MASK )
             wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, val);
         return X86EMUL_OKAY;
@@ -1123,7 +1123,7 @@ static int write_msr(unsigned int reg, uint64_t val,
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !boot_cpu_has(X86_FEATURE_DBEXT) || (val >> 32) )
             break;
-        curr->arch.pv_vcpu.dr_mask[reg - MSR_AMD64_DR1_ADDRESS_MASK + 1] = val;
+        curr->arch.pv.dr_mask[reg - MSR_AMD64_DR1_ADDRESS_MASK + 1] = val;
         if ( curr->arch.debugreg[7] & DR7_ACTIVE_MASK )
             wrmsrl(reg, val);
         return X86EMUL_OKAY;
@@ -1327,7 +1327,7 @@ int pv_emulate_privileged_op(struct cpu_user_regs *regs)
     else
         regs->eflags |= X86_EFLAGS_IF;
     ASSERT(!(regs->eflags & X86_EFLAGS_IOPL));
-    regs->eflags |= curr->arch.pv_vcpu.iopl;
+    regs->eflags |= curr->arch.pv.iopl;
     eflags = regs->eflags;
 
     ctxt.ctxt.addr_size = ar & _SEGMENT_L ? 64 : ar & _SEGMENT_DB ? 32 : 16;
@@ -1369,7 +1369,7 @@ int pv_emulate_privileged_op(struct cpu_user_regs *regs)
         if ( ctxt.bpmatch )
         {
             curr->arch.debugreg[6] |= ctxt.bpmatch | DR_STATUS_RESERVED_ONE;
-            if ( !(curr->arch.pv_vcpu.trap_bounce.flags & TBF_EXCEPTION) )
+            if ( !(curr->arch.pv.trap_bounce.flags & TBF_EXCEPTION) )
                 pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
         }
         /* fall through */
diff --git a/xen/arch/x86/pv/iret.c b/xen/arch/x86/pv/iret.c
index ca433a6..c359a1d 100644
--- a/xen/arch/x86/pv/iret.c
+++ b/xen/arch/x86/pv/iret.c
@@ -51,7 +51,7 @@ unsigned long do_iret(void)
     }
 
     if ( VM_ASSIST(v->domain, architectural_iopl) )
-        v->arch.pv_vcpu.iopl = iret_saved.rflags & X86_EFLAGS_IOPL;
+        v->arch.pv.iopl = iret_saved.rflags & X86_EFLAGS_IOPL;
 
     regs->rip    = iret_saved.rip;
     regs->cs     = iret_saved.cs | 3; /* force guest privilege */
@@ -115,7 +115,7 @@ unsigned int compat_iret(void)
     }
 
     if ( VM_ASSIST(v->domain, architectural_iopl) )
-        v->arch.pv_vcpu.iopl = eflags & X86_EFLAGS_IOPL;
+        v->arch.pv.iopl = eflags & X86_EFLAGS_IOPL;
 
     regs->eflags = (eflags & ~X86_EFLAGS_IOPL) | X86_EFLAGS_IF;
 
@@ -130,7 +130,7 @@ unsigned int compat_iret(void)
          * mode frames).
          */
         const struct trap_info *ti;
-        u32 x, ksp = v->arch.pv_vcpu.kernel_sp - 40;
+        u32 x, ksp = v->arch.pv.kernel_sp - 40;
         unsigned int i;
         int rc = 0;
 
@@ -158,9 +158,9 @@ unsigned int compat_iret(void)
             return 0;
         }
         regs->esp = ksp;
-        regs->ss = v->arch.pv_vcpu.kernel_ss;
+        regs->ss = v->arch.pv.kernel_ss;
 
-        ti = &v->arch.pv_vcpu.trap_ctxt[TRAP_gp_fault];
+        ti = &v->arch.pv.trap_ctxt[TRAP_gp_fault];
         if ( TI_GET_IF(ti) )
             eflags &= ~X86_EFLAGS_IF;
         regs->eflags &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|
diff --git a/xen/arch/x86/pv/misc-hypercalls.c b/xen/arch/x86/pv/misc-hypercalls.c
index 1619be7..9f61f3d 100644
--- a/xen/arch/x86/pv/misc-hypercalls.c
+++ b/xen/arch/x86/pv/misc-hypercalls.c
@@ -42,12 +42,12 @@ long do_fpu_taskswitch(int set)
 
     if ( set )
     {
-        v->arch.pv_vcpu.ctrlreg[0] |= X86_CR0_TS;
+        v->arch.pv.ctrlreg[0] |= X86_CR0_TS;
         stts();
     }
     else
     {
-        v->arch.pv_vcpu.ctrlreg[0] &= ~X86_CR0_TS;
+        v->arch.pv.ctrlreg[0] &= ~X86_CR0_TS;
         if ( v->fpu_dirtied )
             clts();
     }
diff --git a/xen/arch/x86/pv/mm.c b/xen/arch/x86/pv/mm.c
index b46fd94..e9156ea 100644
--- a/xen/arch/x86/pv/mm.c
+++ b/xen/arch/x86/pv/mm.c
@@ -87,7 +87,7 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
     struct domain *currd = curr->domain;
     struct page_info *page;
     l1_pgentry_t gl1e, *pl1e;
-    unsigned long linear = curr->arch.pv_vcpu.ldt_base + offset;
+    unsigned long linear = curr->arch.pv.ldt_base + offset;
 
     BUG_ON(unlikely(in_irq()));
 
@@ -97,7 +97,7 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
      * current vcpu, and vcpu_reset() will block until this vcpu has been
      * descheduled before continuing.
      */
-    ASSERT((offset >> 3) <= curr->arch.pv_vcpu.ldt_ents);
+    ASSERT((offset >> 3) <= curr->arch.pv.ldt_ents);
 
     if ( is_pv_32bit_domain(currd) )
         linear = (uint32_t)linear;
@@ -119,10 +119,10 @@ bool pv_map_ldt_shadow_page(unsigned int offset)
     pl1e = &pv_ldt_ptes(curr)[offset >> PAGE_SHIFT];
     l1e_add_flags(gl1e, _PAGE_RW);
 
-    spin_lock(&curr->arch.pv_vcpu.shadow_ldt_lock);
+    spin_lock(&curr->arch.pv.shadow_ldt_lock);
     l1e_write(pl1e, gl1e);
-    curr->arch.pv_vcpu.shadow_ldt_mapcnt++;
-    spin_unlock(&curr->arch.pv_vcpu.shadow_ldt_lock);
+    curr->arch.pv.shadow_ldt_mapcnt++;
+    spin_unlock(&curr->arch.pv.shadow_ldt_lock);
 
     return true;
 }
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index f48db92..1740784 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -63,8 +63,8 @@ void pv_inject_event(const struct x86_event *event)
     else
         ASSERT(error_code == X86_EVENT_NO_EC);
 
-    tb = &curr->arch.pv_vcpu.trap_bounce;
-    ti = &curr->arch.pv_vcpu.trap_ctxt[vector];
+    tb = &curr->arch.pv.trap_bounce;
+    ti = &curr->arch.pv.trap_ctxt[vector];
 
     tb->flags = TBF_EXCEPTION;
     tb->cs    = ti->cs;
@@ -73,7 +73,7 @@ void pv_inject_event(const struct x86_event *event)
     if ( event->type == X86_EVENTTYPE_HW_EXCEPTION &&
          vector == TRAP_page_fault )
     {
-        curr->arch.pv_vcpu.ctrlreg[2] = event->cr2;
+        curr->arch.pv.ctrlreg[2] = event->cr2;
         arch_set_cr2(curr, event->cr2);
 
         /* Re-set error_code.user flag appropriately for the guest. */
@@ -113,7 +113,7 @@ void pv_inject_event(const struct x86_event *event)
 bool set_guest_machinecheck_trapbounce(void)
 {
     struct vcpu *curr = current;
-    struct trap_bounce *tb = &curr->arch.pv_vcpu.trap_bounce;
+    struct trap_bounce *tb = &curr->arch.pv.trap_bounce;
 
     pv_inject_hw_exception(TRAP_machine_check, X86_EVENT_NO_EC);
     tb->flags &= ~TBF_EXCEPTION; /* not needed for MCE delivery path */
@@ -128,7 +128,7 @@ bool set_guest_machinecheck_trapbounce(void)
 bool set_guest_nmi_trapbounce(void)
 {
     struct vcpu *curr = current;
-    struct trap_bounce *tb = &curr->arch.pv_vcpu.trap_bounce;
+    struct trap_bounce *tb = &curr->arch.pv.trap_bounce;
 
     pv_inject_hw_exception(TRAP_nmi, X86_EVENT_NO_EC);
     tb->flags &= ~TBF_EXCEPTION; /* not needed for NMI delivery path */
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 536449b..69e9aaf 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1099,7 +1099,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
 
     if ( !update_secondary_system_time(v, &_u) && is_pv_domain(d) &&
          !is_pv_32bit_domain(d) && !(v->arch.flags & TF_kernel_mode) )
-        v->arch.pv_vcpu.pending_system_time = _u;
+        v->arch.pv.pending_system_time = _u;
 }
 
 bool update_secondary_system_time(struct vcpu *v,
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ddff346..7ca695c 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1145,7 +1145,7 @@ static int handle_ldt_mapping_fault(unsigned int offset,
 
         /* Access would have become non-canonical? Pass #GP[sel] back. */
         if ( unlikely(!is_canonical_address(
-                          curr->arch.pv_vcpu.ldt_base + offset)) )
+                          curr->arch.pv.ldt_base + offset)) )
         {
             uint16_t ec = (offset & ~(X86_XEC_EXT | X86_XEC_IDT)) | X86_XEC_TI;
 
@@ -1154,7 +1154,7 @@ static int handle_ldt_mapping_fault(unsigned int offset,
         else
             /* else pass the #PF back, with adjusted %cr2. */
             pv_inject_page_fault(regs->error_code,
-                                 curr->arch.pv_vcpu.ldt_base + offset);
+                                 curr->arch.pv.ldt_base + offset);
     }
 
     return EXCRET_fault_fixed;
@@ -1536,7 +1536,7 @@ void do_general_protection(struct cpu_user_regs *regs)
         /* This fault must be due to <INT n> instruction. */
         const struct trap_info *ti;
         unsigned char vector = regs->error_code >> 3;
-        ti = &v->arch.pv_vcpu.trap_ctxt[vector];
+        ti = &v->arch.pv.trap_ctxt[vector];
         if ( permit_softint(TI_GET_DPL(ti), v, regs) )
         {
             regs->rip += 2;
@@ -1768,10 +1768,10 @@ void do_device_not_available(struct cpu_user_regs *regs)
 
     vcpu_restore_fpu_lazy(curr);
 
-    if ( curr->arch.pv_vcpu.ctrlreg[0] & X86_CR0_TS )
+    if ( curr->arch.pv.ctrlreg[0] & X86_CR0_TS )
     {
         pv_inject_hw_exception(TRAP_no_device, X86_EVENT_NO_EC);
-        curr->arch.pv_vcpu.ctrlreg[0] &= ~X86_CR0_TS;
+        curr->arch.pv.ctrlreg[0] &= ~X86_CR0_TS;
     }
     else
         TRACE_0D(TRC_PV_MATH_STATE_RESTORE);
@@ -2073,10 +2073,10 @@ void activate_debugregs(const struct vcpu *curr)
 
     if ( boot_cpu_has(X86_FEATURE_DBEXT) )
     {
-        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[0]);
-        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[1]);
-        wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[2]);
-        wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, curr->arch.pv_vcpu.dr_mask[3]);
+        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, curr->arch.pv.dr_mask[0]);
+        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, curr->arch.pv.dr_mask[1]);
+        wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, curr->arch.pv.dr_mask[2]);
+        wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, curr->arch.pv.dr_mask[3]);
     }
 }
 
@@ -2109,7 +2109,7 @@ long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
         break;
 
     case 4:
-        if ( v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_DE )
+        if ( v->arch.pv.ctrlreg[4] & X86_CR4_DE )
             return -ENODEV;
 
         /* Fallthrough */
@@ -2129,7 +2129,7 @@ long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
         break;
 
     case 5:
-        if ( v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_DE )
+        if ( v->arch.pv.ctrlreg[4] & X86_CR4_DE )
             return -ENODEV;
 
         /* Fallthrough */
@@ -2160,7 +2160,7 @@ long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
             {
                 if ( ((value >> i) & 3) == DR_IO )
                 {
-                    if ( !(v->arch.pv_vcpu.ctrlreg[4] & X86_CR4_DE) )
+                    if ( !(v->arch.pv.ctrlreg[4] & X86_CR4_DE) )
                         return -EPERM;
                     io_enable |= value & (3 << ((i - 16) >> 1));
                 }
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 18077af..26524c4 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -52,28 +52,28 @@ void __dummy__(void)
     OFFSET(VCPU_processor, struct vcpu, processor);
     OFFSET(VCPU_domain, struct vcpu, domain);
     OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
-    OFFSET(VCPU_trap_bounce, struct vcpu, arch.pv_vcpu.trap_bounce);
+    OFFSET(VCPU_trap_bounce, struct vcpu, arch.pv.trap_bounce);
     OFFSET(VCPU_thread_flags, struct vcpu, arch.flags);
-    OFFSET(VCPU_event_addr, struct vcpu, arch.pv_vcpu.event_callback_eip);
-    OFFSET(VCPU_event_sel, struct vcpu, arch.pv_vcpu.event_callback_cs);
+    OFFSET(VCPU_event_addr, struct vcpu, arch.pv.event_callback_eip);
+    OFFSET(VCPU_event_sel, struct vcpu, arch.pv.event_callback_cs);
     OFFSET(VCPU_syscall_addr, struct vcpu,
-           arch.pv_vcpu.syscall_callback_eip);
+           arch.pv.syscall_callback_eip);
     OFFSET(VCPU_syscall32_addr, struct vcpu,
-           arch.pv_vcpu.syscall32_callback_eip);
+           arch.pv.syscall32_callback_eip);
     OFFSET(VCPU_syscall32_sel, struct vcpu,
-           arch.pv_vcpu.syscall32_callback_cs);
+           arch.pv.syscall32_callback_cs);
     OFFSET(VCPU_syscall32_disables_events, struct vcpu,
-           arch.pv_vcpu.syscall32_disables_events);
+           arch.pv.syscall32_disables_events);
     OFFSET(VCPU_sysenter_addr, struct vcpu,
-           arch.pv_vcpu.sysenter_callback_eip);
+           arch.pv.sysenter_callback_eip);
     OFFSET(VCPU_sysenter_sel, struct vcpu,
-           arch.pv_vcpu.sysenter_callback_cs);
+           arch.pv.sysenter_callback_cs);
     OFFSET(VCPU_sysenter_disables_events, struct vcpu,
-           arch.pv_vcpu.sysenter_disables_events);
-    OFFSET(VCPU_trap_ctxt, struct vcpu, arch.pv_vcpu.trap_ctxt);
-    OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv_vcpu.kernel_sp);
-    OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv_vcpu.kernel_ss);
-    OFFSET(VCPU_iopl, struct vcpu, arch.pv_vcpu.iopl);
+           arch.pv.sysenter_disables_events);
+    OFFSET(VCPU_trap_ctxt, struct vcpu, arch.pv.trap_ctxt);
+    OFFSET(VCPU_kernel_sp, struct vcpu, arch.pv.kernel_sp);
+    OFFSET(VCPU_kernel_ss, struct vcpu, arch.pv.kernel_ss);
+    OFFSET(VCPU_iopl, struct vcpu, arch.pv.iopl);
     OFFSET(VCPU_guest_context_flags, struct vcpu, arch.vgc_flags);
     OFFSET(VCPU_cr3, struct vcpu, arch.cr3);
     OFFSET(VCPU_arch_msrs, struct vcpu, arch.msrs);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index dab8c4f..48cb96c 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -371,7 +371,7 @@ UNLIKELY_END(msi_check)
         mov   VCPU_domain(%rbx), %rax
 
         /*
-         * if ( null_trap_info(v, &v->arch.pv_vcpu.trap_ctxt[0x80]) )
+         * if ( null_trap_info(v, &v->arch.pv.trap_ctxt[0x80]) )
          *    goto int80_slow_path;
          */
         mov    0x80 * TRAPINFO_sizeof + TRAPINFO_eip(%rsi), %rdi
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cca4ae9..989a534 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1007,8 +1007,8 @@ long subarch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 long do_stack_switch(unsigned long ss, unsigned long esp)
 {
     fixup_guest_stack_selector(current->domain, ss);
-    current->arch.pv_vcpu.kernel_ss = ss;
-    current->arch.pv_vcpu.kernel_sp = esp;
+    current->arch.pv.kernel_ss = ss;
+    current->arch.pv.kernel_sp = esp;
     return 0;
 }
 
@@ -1026,7 +1026,7 @@ long do_set_segment_base(unsigned int which, unsigned long base)
         if ( is_canonical_address(base) )
         {
             wrfsbase(base);
-            v->arch.pv_vcpu.fs_base = base;
+            v->arch.pv.fs_base = base;
         }
         else
             ret = -EINVAL;
@@ -1036,7 +1036,7 @@ long do_set_segment_base(unsigned int which, unsigned long base)
         if ( is_canonical_address(base) )
         {
             wrgsshadow(base);
-            v->arch.pv_vcpu.gs_base_user = base;
+            v->arch.pv.gs_base_user = base;
         }
         else
             ret = -EINVAL;
@@ -1046,7 +1046,7 @@ long do_set_segment_base(unsigned int which, unsigned long base)
         if ( is_canonical_address(base) )
         {
             wrgsbase(base);
-            v->arch.pv_vcpu.gs_base_kernel = base;
+            v->arch.pv.gs_base_kernel = base;
         }
         else
             ret = -EINVAL;
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index ed02b78..606b1b0 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -169,15 +169,15 @@ void vcpu_show_registers(const struct vcpu *v)
     if ( !is_pv_vcpu(v) )
         return;
 
-    crs[0] = v->arch.pv_vcpu.ctrlreg[0];
+    crs[0] = v->arch.pv.ctrlreg[0];
     crs[2] = arch_get_cr2(v);
     crs[3] = pagetable_get_paddr(kernel ?
                                  v->arch.guest_table :
                                  v->arch.guest_table_user);
-    crs[4] = v->arch.pv_vcpu.ctrlreg[4];
-    crs[5] = v->arch.pv_vcpu.fs_base;
-    crs[6 + !kernel] = v->arch.pv_vcpu.gs_base_kernel;
-    crs[7 - !kernel] = v->arch.pv_vcpu.gs_base_user;
+    crs[4] = v->arch.pv.ctrlreg[4];
+    crs[5] = v->arch.pv.fs_base;
+    crs[6 + !kernel] = v->arch.pv.gs_base_kernel;
+    crs[7 - !kernel] = v->arch.pv.gs_base_user;
 
     _show_registers(regs, crs, CTXT_pv_guest, v);
 }
diff --git a/xen/arch/x86/x86_emulate.c b/xen/arch/x86/x86_emulate.c
index 30f89ad..532b7e0 100644
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -111,7 +111,7 @@ int x86emul_read_dr(unsigned int reg, unsigned long *val,
         break;
 
     case 4 ... 5:
-        if ( !(curr->arch.pv_vcpu.ctrlreg[4] & X86_CR4_DE) )
+        if ( !(curr->arch.pv.ctrlreg[4] & X86_CR4_DE) )
         {
             *val = curr->arch.debugreg[reg + 2];
             break;
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 0c75c02..fdd6856 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -540,7 +540,7 @@ struct arch_vcpu
 
     /* Virtual Machine Extensions */
     union {
-        struct pv_vcpu pv_vcpu;
+        struct pv_vcpu pv;
         struct hvm_vcpu hvm_vcpu;
     };
 
diff --git a/xen/include/asm-x86/ldt.h b/xen/include/asm-x86/ldt.h
index 589daf8..a6236b2 100644
--- a/xen/include/asm-x86/ldt.h
+++ b/xen/include/asm-x86/ldt.h
@@ -9,7 +9,7 @@ static inline void load_LDT(struct vcpu *v)
     struct desc_struct *desc;
     unsigned long ents;
 
-    if ( (ents = v->arch.pv_vcpu.ldt_ents) == 0 )
+    if ( (ents = v->arch.pv.ldt_ents) == 0 )
         lldt(0);
     else
     {
diff --git a/xen/include/asm-x86/pv/traps.h b/xen/include/asm-x86/pv/traps.h
index 89985d1..fcc75f5 100644
--- a/xen/include/asm-x86/pv/traps.h
+++ b/xen/include/asm-x86/pv/traps.h
@@ -37,7 +37,7 @@ bool pv_emulate_invalid_op(struct cpu_user_regs *regs);
 static inline bool pv_trap_callback_registered(const struct vcpu *v,
                                                uint8_t vector)
 {
-    return v->arch.pv_vcpu.trap_ctxt[vector].address;
+    return v->arch.pv.trap_ctxt[vector].address;
 }
 
 #else  /* !CONFIG_PV */
-- 
2.1.4



* [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
  2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
  2018-08-28 17:39 ` [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-28 18:56   ` Razvan Cojocaru
                     ` (5 more replies)
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
                   ` (3 subsequent siblings)
  6 siblings, 6 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Andrew Cooper, Tim Deegan,
	Julien Grall, Paul Durrant, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monné

The trailing _domain suffix is redundant and merely adds to code volume.  Drop it.

Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
where applicable.

No functional change.
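
For reference, the XFREE/etc wrappers mentioned above collapse the open-coded
free-then-clear idiom into a single statement.  A minimal sketch of the idea
(an illustration only, not the exact definition in the Xen tree) is:

    /*
     * Sketch of the wrapper: free the allocation and clear the stored
     * pointer, so "xfree(p); p = NULL;" becomes one statement.
     */
    #define XFREE(p) do { xfree(p); (p) = NULL; } while ( 0 )

    /* Before: */
    xfree(d->arch.hvm.params);
    d->arch.hvm.params = NULL;

    /* After: */
    XFREE(d->arch.hvm.params);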

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tim Deegan <tim@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Brian Woods <brian.woods@amd.com>
---
 xen/arch/arm/domain_build.c         |   2 +-
 xen/arch/arm/hvm.c                  |   4 +-
 xen/arch/x86/domain.c               |   2 +-
 xen/arch/x86/domctl.c               |  10 +--
 xen/arch/x86/hvm/dom0_build.c       |   4 +-
 xen/arch/x86/hvm/domain.c           |   2 +-
 xen/arch/x86/hvm/hpet.c             |   8 +-
 xen/arch/x86/hvm/hvm.c              | 145 +++++++++++++++++-------------------
 xen/arch/x86/hvm/hypercall.c        |   6 +-
 xen/arch/x86/hvm/intercept.c        |  14 ++--
 xen/arch/x86/hvm/io.c               |  48 ++++++------
 xen/arch/x86/hvm/ioreq.c            |  80 ++++++++++----------
 xen/arch/x86/hvm/irq.c              |  50 ++++++-------
 xen/arch/x86/hvm/mtrr.c             |  14 ++--
 xen/arch/x86/hvm/pmtimer.c          |  40 +++++-----
 xen/arch/x86/hvm/rtc.c              |   4 +-
 xen/arch/x86/hvm/save.c             |   6 +-
 xen/arch/x86/hvm/stdvga.c           |  18 ++---
 xen/arch/x86/hvm/svm/svm.c          |   5 +-
 xen/arch/x86/hvm/svm/vmcb.c         |   2 +-
 xen/arch/x86/hvm/vioapic.c          |  44 +++++------
 xen/arch/x86/hvm/viridian.c         |  56 +++++++-------
 xen/arch/x86/hvm/vlapic.c           |   8 +-
 xen/arch/x86/hvm/vmsi.c             |  14 ++--
 xen/arch/x86/hvm/vmx/vmcs.c         |  12 +--
 xen/arch/x86/hvm/vmx/vmx.c          |  46 ++++++------
 xen/arch/x86/hvm/vpic.c             |  20 ++---
 xen/arch/x86/hvm/vpt.c              |  20 ++---
 xen/arch/x86/irq.c                  |  10 +--
 xen/arch/x86/mm/hap/hap.c           |  11 ++-
 xen/arch/x86/mm/mem_sharing.c       |   6 +-
 xen/arch/x86/mm/shadow/common.c     |  18 ++---
 xen/arch/x86/mm/shadow/multi.c      |   8 +-
 xen/arch/x86/physdev.c              |   2 +-
 xen/arch/x86/setup.c                |  10 +--
 xen/arch/x86/time.c                 |   8 +-
 xen/common/vm_event.c               |   2 +-
 xen/drivers/passthrough/pci.c       |   2 +-
 xen/drivers/vpci/msix.c             |   6 +-
 xen/include/asm-arm/domain.h        |   2 +-
 xen/include/asm-x86/domain.h        |   4 +-
 xen/include/asm-x86/hvm/domain.h    |   2 +-
 xen/include/asm-x86/hvm/hvm.h       |  11 ++-
 xen/include/asm-x86/hvm/irq.h       |   2 +-
 xen/include/asm-x86/hvm/nestedhvm.h |   4 +-
 xen/include/asm-x86/hvm/vioapic.h   |   2 +-
 xen/include/asm-x86/hvm/vpt.h       |   4 +-
 xen/include/asm-x86/irq.h           |   3 +-
 48 files changed, 395 insertions(+), 406 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index e1c79b2..72fd2ae 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2075,7 +2075,7 @@ static void __init evtchn_allocate(struct domain *d)
     val |= MASK_INSR(HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL,
                      HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_MASK);
     val |= d->arch.evtchn_irq;
-    d->arch.hvm_domain.params[HVM_PARAM_CALLBACK_IRQ] = val;
+    d->arch.hvm.params[HVM_PARAM_CALLBACK_IRQ] = val;
 }
 
 static void __init find_gnttab_region(struct domain *d,
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index a56b3fe..76b27c9 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -59,11 +59,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         if ( op == HVMOP_set_param )
         {
-            d->arch.hvm_domain.params[a.index] = a.value;
+            d->arch.hvm.params[a.index] = a.value;
         }
         else
         {
-            a.value = d->arch.hvm_domain.params[a.index];
+            a.value = d->arch.hvm.params[a.index];
             rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
         }
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 4cdcd5d..3dcd7f9 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -505,7 +505,7 @@ int arch_domain_create(struct domain *d,
 
     /* Need to determine if HAP is enabled before initialising paging */
     if ( is_hvm_domain(d) )
-        d->arch.hvm_domain.hap_enabled =
+        d->arch.hvm.hap_enabled =
             hvm_hap_supported() && (config->flags & XEN_DOMCTL_CDF_hap);
 
     if ( (rc = paging_domain_init(d, config->flags)) != 0 )
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index fdbcce0..f306614 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -745,7 +745,7 @@ long arch_do_domctl(
         unsigned int fmp = domctl->u.ioport_mapping.first_mport;
         unsigned int np = domctl->u.ioport_mapping.nr_ports;
         unsigned int add = domctl->u.ioport_mapping.add_mapping;
-        struct hvm_domain *hvm_domain;
+        struct hvm_domain *hvm;
         struct g2m_ioport *g2m_ioport;
         int found = 0;
 
@@ -774,14 +774,14 @@ long arch_do_domctl(
         if ( ret )
             break;
 
-        hvm_domain = &d->arch.hvm_domain;
+        hvm = &d->arch.hvm;
         if ( add )
         {
             printk(XENLOG_G_INFO
                    "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
 
-            list_for_each_entry(g2m_ioport, &hvm_domain->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hvm->g2m_ioport_list, list)
                 if (g2m_ioport->mport == fmp )
                 {
                     g2m_ioport->gport = fgp;
@@ -800,7 +800,7 @@ long arch_do_domctl(
                 g2m_ioport->gport = fgp;
                 g2m_ioport->mport = fmp;
                 g2m_ioport->np = np;
-                list_add_tail(&g2m_ioport->list, &hvm_domain->g2m_ioport_list);
+                list_add_tail(&g2m_ioport->list, &hvm->g2m_ioport_list);
             }
             if ( !ret )
                 ret = ioports_permit_access(d, fmp, fmp + np - 1);
@@ -815,7 +815,7 @@ long arch_do_domctl(
             printk(XENLOG_G_INFO
                    "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
-            list_for_each_entry(g2m_ioport, &hvm_domain->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hvm->g2m_ioport_list, list)
                 if ( g2m_ioport->mport == fmp )
                 {
                     list_del(&g2m_ioport->list);
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 5065729..22e335f 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -240,7 +240,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
         if ( hvm_copy_to_guest_phys(gaddr, NULL, HVM_VM86_TSS_SIZE, v) !=
              HVMTRANS_okay )
             printk("Unable to zero VM86 TSS area\n");
-        d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED] =
+        d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] =
             VM86_TSS_UPDATED | ((uint64_t)HVM_VM86_TSS_SIZE << 32) | gaddr;
         if ( pvh_add_mem_range(d, gaddr, gaddr + HVM_VM86_TSS_SIZE,
                                E820_RESERVED) )
@@ -271,7 +271,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
     write_32bit_pse_identmap(ident_pt);
     unmap_domain_page(ident_pt);
     put_page(mfn_to_page(mfn));
-    d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
+    d->arch.hvm.params[HVM_PARAM_IDENT_PT] = gaddr;
     if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
             printk("Unable to set identity page tables as reserved in the memory map\n");
 
diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
index ae70aaf..8a2c83e 100644
--- a/xen/arch/x86/hvm/domain.c
+++ b/xen/arch/x86/hvm/domain.c
@@ -319,7 +319,7 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
     v->arch.hvm_vcpu.cache_tsc_offset =
         d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
     hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
-                       d->arch.hvm_domain.sync_tsc);
+                       d->arch.hvm.sync_tsc);
 
     paging_update_paging_modes(v);
 
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index a594254..8090699 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -26,7 +26,7 @@
 #include <xen/event.h>
 #include <xen/trace.h>
 
-#define domain_vhpet(x) (&(x)->arch.hvm_domain.pl_time->vhpet)
+#define domain_vhpet(x) (&(x)->arch.hvm.pl_time->vhpet)
 #define vcpu_vhpet(x)   (domain_vhpet((x)->domain))
 #define vhpet_domain(x) (container_of(x, struct pl_time, vhpet)->domain)
 #define vhpet_vcpu(x)   (pt_global_vcpu_target(vhpet_domain(x)))
@@ -164,7 +164,7 @@ static int hpet_read(
     unsigned long result;
     uint64_t val;
 
-    if ( !v->domain->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] )
+    if ( !v->domain->arch.hvm.params[HVM_PARAM_HPET_ENABLED] )
     {
         result = ~0ul;
         goto out;
@@ -354,7 +354,7 @@ static int hpet_write(
 #define set_start_timer(n)   (__set_bit((n), &start_timers))
 #define set_restart_timer(n) (set_stop_timer(n),set_start_timer(n))
 
-    if ( !v->domain->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] )
+    if ( !v->domain->arch.hvm.params[HVM_PARAM_HPET_ENABLED] )
         goto out;
 
     addr &= HPET_MMAP_SIZE-1;
@@ -735,7 +735,7 @@ void hpet_init(struct domain *d)
 
     hpet_set(domain_vhpet(d));
     register_mmio_handler(d, &hpet_mmio_ops);
-    d->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] = 1;
+    d->arch.hvm.params[HVM_PARAM_HPET_ENABLED] = 1;
 }
 
 void hpet_deinit(struct domain *d)
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 72c51fa..f895339 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -382,7 +382,7 @@ u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
 
 u64 hvm_scale_tsc(const struct domain *d, u64 tsc)
 {
-    u64 ratio = d->arch.hvm_domain.tsc_scaling_ratio;
+    u64 ratio = d->arch.hvm.tsc_scaling_ratio;
     u64 dummy;
 
     if ( ratio == hvm_default_tsc_scaling_ratio )
@@ -583,14 +583,14 @@ int hvm_domain_initialise(struct domain *d)
         return -EINVAL;
     }
 
-    spin_lock_init(&d->arch.hvm_domain.irq_lock);
-    spin_lock_init(&d->arch.hvm_domain.uc_lock);
-    spin_lock_init(&d->arch.hvm_domain.write_map.lock);
-    rwlock_init(&d->arch.hvm_domain.mmcfg_lock);
-    INIT_LIST_HEAD(&d->arch.hvm_domain.write_map.list);
-    INIT_LIST_HEAD(&d->arch.hvm_domain.g2m_ioport_list);
-    INIT_LIST_HEAD(&d->arch.hvm_domain.mmcfg_regions);
-    INIT_LIST_HEAD(&d->arch.hvm_domain.msix_tables);
+    spin_lock_init(&d->arch.hvm.irq_lock);
+    spin_lock_init(&d->arch.hvm.uc_lock);
+    spin_lock_init(&d->arch.hvm.write_map.lock);
+    rwlock_init(&d->arch.hvm.mmcfg_lock);
+    INIT_LIST_HEAD(&d->arch.hvm.write_map.list);
+    INIT_LIST_HEAD(&d->arch.hvm.g2m_ioport_list);
+    INIT_LIST_HEAD(&d->arch.hvm.mmcfg_regions);
+    INIT_LIST_HEAD(&d->arch.hvm.msix_tables);
 
     rc = create_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0, NULL, NULL);
     if ( rc )
@@ -603,15 +603,15 @@ int hvm_domain_initialise(struct domain *d)
         goto fail0;
 
     nr_gsis = is_hardware_domain(d) ? nr_irqs_gsi : NR_HVM_DOMU_IRQS;
-    d->arch.hvm_domain.pl_time = xzalloc(struct pl_time);
-    d->arch.hvm_domain.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
-    d->arch.hvm_domain.io_handler = xzalloc_array(struct hvm_io_handler,
-                                                  NR_IO_HANDLERS);
-    d->arch.hvm_domain.irq = xzalloc_bytes(hvm_irq_size(nr_gsis));
+    d->arch.hvm.pl_time = xzalloc(struct pl_time);
+    d->arch.hvm.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
+    d->arch.hvm.io_handler = xzalloc_array(struct hvm_io_handler,
+                                           NR_IO_HANDLERS);
+    d->arch.hvm.irq = xzalloc_bytes(hvm_irq_size(nr_gsis));
 
     rc = -ENOMEM;
-    if ( !d->arch.hvm_domain.pl_time || !d->arch.hvm_domain.irq ||
-         !d->arch.hvm_domain.params  || !d->arch.hvm_domain.io_handler )
+    if ( !d->arch.hvm.pl_time || !d->arch.hvm.irq ||
+         !d->arch.hvm.params  || !d->arch.hvm.io_handler )
         goto fail1;
 
     /* Set the number of GSIs */
@@ -621,21 +621,21 @@ int hvm_domain_initialise(struct domain *d)
     ASSERT(hvm_domain_irq(d)->nr_gsis >= NR_ISAIRQS);
 
     /* need link to containing domain */
-    d->arch.hvm_domain.pl_time->domain = d;
+    d->arch.hvm.pl_time->domain = d;
 
     /* Set the default IO Bitmap. */
     if ( is_hardware_domain(d) )
     {
-        d->arch.hvm_domain.io_bitmap = _xmalloc(HVM_IOBITMAP_SIZE, PAGE_SIZE);
-        if ( d->arch.hvm_domain.io_bitmap == NULL )
+        d->arch.hvm.io_bitmap = _xmalloc(HVM_IOBITMAP_SIZE, PAGE_SIZE);
+        if ( d->arch.hvm.io_bitmap == NULL )
         {
             rc = -ENOMEM;
             goto fail1;
         }
-        memset(d->arch.hvm_domain.io_bitmap, ~0, HVM_IOBITMAP_SIZE);
+        memset(d->arch.hvm.io_bitmap, ~0, HVM_IOBITMAP_SIZE);
     }
     else
-        d->arch.hvm_domain.io_bitmap = hvm_io_bitmap;
+        d->arch.hvm.io_bitmap = hvm_io_bitmap;
 
     register_g2m_portio_handler(d);
     register_vpci_portio_handler(d);
@@ -644,7 +644,7 @@ int hvm_domain_initialise(struct domain *d)
 
     hvm_init_guest_time(d);
 
-    d->arch.hvm_domain.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
+    d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
 
     vpic_init(d);
 
@@ -659,7 +659,7 @@ int hvm_domain_initialise(struct domain *d)
     register_portio_handler(d, 0xe9, 1, hvm_print_line);
 
     if ( hvm_tsc_scaling_supported )
-        d->arch.hvm_domain.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
+        d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
     rc = hvm_funcs.domain_initialise(d);
     if ( rc != 0 )
@@ -673,11 +673,11 @@ int hvm_domain_initialise(struct domain *d)
     vioapic_deinit(d);
  fail1:
     if ( is_hardware_domain(d) )
-        xfree(d->arch.hvm_domain.io_bitmap);
-    xfree(d->arch.hvm_domain.io_handler);
-    xfree(d->arch.hvm_domain.params);
-    xfree(d->arch.hvm_domain.pl_time);
-    xfree(d->arch.hvm_domain.irq);
+        xfree(d->arch.hvm.io_bitmap);
+    xfree(d->arch.hvm.io_handler);
+    xfree(d->arch.hvm.params);
+    xfree(d->arch.hvm.pl_time);
+    xfree(d->arch.hvm.irq);
  fail0:
     hvm_destroy_cacheattr_region_list(d);
     destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
@@ -710,11 +710,8 @@ void hvm_domain_destroy(struct domain *d)
     struct list_head *ioport_list, *tmp;
     struct g2m_ioport *ioport;
 
-    xfree(d->arch.hvm_domain.io_handler);
-    d->arch.hvm_domain.io_handler = NULL;
-
-    xfree(d->arch.hvm_domain.params);
-    d->arch.hvm_domain.params = NULL;
+    XFREE(d->arch.hvm.io_handler);
+    XFREE(d->arch.hvm.params);
 
     hvm_destroy_cacheattr_region_list(d);
 
@@ -723,14 +720,10 @@ void hvm_domain_destroy(struct domain *d)
     stdvga_deinit(d);
     vioapic_deinit(d);
 
-    xfree(d->arch.hvm_domain.pl_time);
-    d->arch.hvm_domain.pl_time = NULL;
-
-    xfree(d->arch.hvm_domain.irq);
-    d->arch.hvm_domain.irq = NULL;
+    XFREE(d->arch.hvm.pl_time);
+    XFREE(d->arch.hvm.irq);
 
-    list_for_each_safe ( ioport_list, tmp,
-                         &d->arch.hvm_domain.g2m_ioport_list )
+    list_for_each_safe ( ioport_list, tmp, &d->arch.hvm.g2m_ioport_list )
     {
         ioport = list_entry(ioport_list, struct g2m_ioport, list);
         list_del(&ioport->list);
@@ -798,7 +791,7 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
         /* Architecture-specific vmcs/vmcb bits */
         hvm_funcs.save_cpu_ctxt(v, &ctxt);
 
-        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
+        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
 
         ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
 
@@ -1053,7 +1046,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     v->arch.hvm_vcpu.msr_tsc_aux = ctxt.msr_tsc_aux;
 
-    hvm_set_guest_tsc_fixed(v, ctxt.tsc, d->arch.hvm_domain.sync_tsc);
+    hvm_set_guest_tsc_fixed(v, ctxt.tsc, d->arch.hvm.sync_tsc);
 
     seg.limit = ctxt.idtr_limit;
     seg.base = ctxt.idtr_base;
@@ -1637,7 +1630,7 @@ void hvm_triple_fault(void)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    u8 reason = d->arch.hvm_domain.params[HVM_PARAM_TRIPLE_FAULT_REASON];
+    u8 reason = d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON];
 
     gprintk(XENLOG_INFO,
             "Triple fault - invoking HVM shutdown action %d\n",
@@ -2046,7 +2039,7 @@ static bool_t domain_exit_uc_mode(struct vcpu *v)
 
 static void hvm_set_uc_mode(struct vcpu *v, bool_t is_in_uc_mode)
 {
-    v->domain->arch.hvm_domain.is_in_uc_mode = is_in_uc_mode;
+    v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
     shadow_blow_tables_per_domain(v->domain);
 }
 
@@ -2130,10 +2123,10 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
     if ( value & X86_CR0_CD )
     {
         /* Entering no fill cache mode. */
-        spin_lock(&v->domain->arch.hvm_domain.uc_lock);
+        spin_lock(&v->domain->arch.hvm.uc_lock);
         v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
 
-        if ( !v->domain->arch.hvm_domain.is_in_uc_mode )
+        if ( !v->domain->arch.hvm.is_in_uc_mode )
         {
             domain_pause_nosync(v->domain);
 
@@ -2143,19 +2136,19 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
 
             domain_unpause(v->domain);
         }
-        spin_unlock(&v->domain->arch.hvm_domain.uc_lock);
+        spin_unlock(&v->domain->arch.hvm.uc_lock);
     }
     else if ( !(value & X86_CR0_CD) &&
               (v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
     {
         /* Exit from no fill cache mode. */
-        spin_lock(&v->domain->arch.hvm_domain.uc_lock);
+        spin_lock(&v->domain->arch.hvm.uc_lock);
         v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
 
         if ( domain_exit_uc_mode(v) )
             hvm_set_uc_mode(v, 0);
 
-        spin_unlock(&v->domain->arch.hvm_domain.uc_lock);
+        spin_unlock(&v->domain->arch.hvm.uc_lock);
     }
 }
 
@@ -2597,9 +2590,9 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
             return NULL;
         }
         track->page = page;
-        spin_lock(&d->arch.hvm_domain.write_map.lock);
-        list_add_tail(&track->list, &d->arch.hvm_domain.write_map.list);
-        spin_unlock(&d->arch.hvm_domain.write_map.lock);
+        spin_lock(&d->arch.hvm.write_map.lock);
+        list_add_tail(&track->list, &d->arch.hvm.write_map.list);
+        spin_unlock(&d->arch.hvm.write_map.lock);
     }
 
     map = __map_domain_page_global(page);
@@ -2640,8 +2633,8 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
         struct hvm_write_map *track;
 
         unmap_domain_page_global(p);
-        spin_lock(&d->arch.hvm_domain.write_map.lock);
-        list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
+        spin_lock(&d->arch.hvm.write_map.lock);
+        list_for_each_entry(track, &d->arch.hvm.write_map.list, list)
             if ( track->page == page )
             {
                 paging_mark_dirty(d, mfn);
@@ -2649,7 +2642,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
                 xfree(track);
                 break;
             }
-        spin_unlock(&d->arch.hvm_domain.write_map.lock);
+        spin_unlock(&d->arch.hvm.write_map.lock);
     }
 
     put_page(page);
@@ -2659,10 +2652,10 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
 {
     struct hvm_write_map *track;
 
-    spin_lock(&d->arch.hvm_domain.write_map.lock);
-    list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
+    spin_lock(&d->arch.hvm.write_map.lock);
+    list_for_each_entry(track, &d->arch.hvm.write_map.list, list)
         paging_mark_dirty(d, page_to_mfn(track->page));
-    spin_unlock(&d->arch.hvm_domain.write_map.lock);
+    spin_unlock(&d->arch.hvm.write_map.lock);
 }
 
 static void *hvm_map_entry(unsigned long va, bool_t *writable)
@@ -3942,7 +3935,7 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
     v->arch.hvm_vcpu.cache_tsc_offset =
         v->domain->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
     hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
-                       d->arch.hvm_domain.sync_tsc);
+                       d->arch.hvm.sync_tsc);
 
     v->arch.hvm_vcpu.msr_tsc_adjust = 0;
 
@@ -3964,7 +3957,7 @@ static void hvm_s3_suspend(struct domain *d)
     domain_lock(d);
 
     if ( d->is_dying || (d->vcpu == NULL) || (d->vcpu[0] == NULL) ||
-         test_and_set_bool(d->arch.hvm_domain.is_s3_suspended) )
+         test_and_set_bool(d->arch.hvm.is_s3_suspended) )
     {
         domain_unlock(d);
         domain_unpause(d);
@@ -3994,7 +3987,7 @@ static void hvm_s3_suspend(struct domain *d)
 
 static void hvm_s3_resume(struct domain *d)
 {
-    if ( test_and_clear_bool(d->arch.hvm_domain.is_s3_suspended) )
+    if ( test_and_clear_bool(d->arch.hvm.is_s3_suspended) )
     {
         struct vcpu *v;
 
@@ -4074,7 +4067,7 @@ static int hvmop_set_evtchn_upcall_vector(
 static int hvm_allow_set_param(struct domain *d,
                                const struct xen_hvm_param *a)
 {
-    uint64_t value = d->arch.hvm_domain.params[a->index];
+    uint64_t value = d->arch.hvm.params[a->index];
     int rc;
 
     rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
@@ -4177,7 +4170,7 @@ static int hvmop_set_param(
          */
         if ( !paging_mode_hap(d) || !cpu_has_vmx )
         {
-            d->arch.hvm_domain.params[a.index] = a.value;
+            d->arch.hvm.params[a.index] = a.value;
             break;
         }
 
@@ -4192,7 +4185,7 @@ static int hvmop_set_param(
 
         rc = 0;
         domain_pause(d);
-        d->arch.hvm_domain.params[a.index] = a.value;
+        d->arch.hvm.params[a.index] = a.value;
         for_each_vcpu ( d, v )
             paging_update_cr3(v, false);
         domain_unpause(d);
@@ -4241,11 +4234,11 @@ static int hvmop_set_param(
         if ( !paging_mode_hap(d) && a.value )
             rc = -EINVAL;
         if ( a.value &&
-             d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
+             d->arch.hvm.params[HVM_PARAM_ALTP2M] )
             rc = -EINVAL;
         /* Set up NHVM state for any vcpus that are already up. */
         if ( a.value &&
-             !d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM] )
+             !d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
             for_each_vcpu(d, v)
                 if ( rc == 0 )
                     rc = nestedhvm_vcpu_initialise(v);
@@ -4260,7 +4253,7 @@ static int hvmop_set_param(
         if ( a.value > XEN_ALTP2M_limited )
             rc = -EINVAL;
         if ( a.value &&
-             d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM] )
+             d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
             rc = -EINVAL;
         break;
     case HVM_PARAM_BUFIOREQ_EVTCHN:
@@ -4271,20 +4264,20 @@ static int hvmop_set_param(
             rc = -EINVAL;
         break;
     case HVM_PARAM_IOREQ_SERVER_PFN:
-        d->arch.hvm_domain.ioreq_gfn.base = a.value;
+        d->arch.hvm.ioreq_gfn.base = a.value;
         break;
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     {
         unsigned int i;
 
         if ( a.value == 0 ||
-             a.value > sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8 )
+             a.value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 )
         {
             rc = -EINVAL;
             break;
         }
         for ( i = 0; i < a.value; i++ )
-            set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
+            set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
 
         break;
     }
@@ -4339,7 +4332,7 @@ static int hvmop_set_param(
     if ( rc != 0 )
         goto out;
 
-    d->arch.hvm_domain.params[a.index] = a.value;
+    d->arch.hvm.params[a.index] = a.value;
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "set param %u = %"PRIx64,
                 a.index, a.value);
@@ -4418,15 +4411,15 @@ static int hvmop_get_param(
     switch ( a.index )
     {
     case HVM_PARAM_ACPI_S_STATE:
-        a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
+        a.value = d->arch.hvm.is_s3_suspended ? 3 : 0;
         break;
 
     case HVM_PARAM_VM86_TSS:
-        a.value = (uint32_t)d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED];
+        a.value = (uint32_t)d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
         break;
 
     case HVM_PARAM_VM86_TSS_SIZED:
-        a.value = d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED] &
+        a.value = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] &
                   ~VM86_TSS_UPDATED;
         break;
 
@@ -4453,7 +4446,7 @@ static int hvmop_get_param(
 
     /*FALLTHRU*/
     default:
-        a.value = d->arch.hvm_domain.params[a.index];
+        a.value = d->arch.hvm.params[a.index];
         break;
     }
 
@@ -4553,7 +4546,7 @@ static int do_altp2m_op(
         goto out;
     }
 
-    mode = d->arch.hvm_domain.params[HVM_PARAM_ALTP2M];
+    mode = d->arch.hvm.params[HVM_PARAM_ALTP2M];
 
     if ( XEN_ALTP2M_disabled == mode )
     {
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 85eacd7..3d7ac49 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -41,7 +41,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm_domain.qemu_mapcache_invalidate = true;
+        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
 
     return rc;
 }
@@ -286,8 +286,8 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     if ( curr->hcall_preempted )
         return HVM_HCALL_preempted;
 
-    if ( unlikely(currd->arch.hvm_domain.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm_domain.qemu_mapcache_invalidate) )
+    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
+         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
         send_invalidate_req();
 
     return HVM_HCALL_completed;
diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index 2bc156d..aac22c5 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -219,10 +219,10 @@ static const struct hvm_io_handler *hvm_find_io_handler(const ioreq_t *p)
     BUG_ON((p->type != IOREQ_TYPE_PIO) &&
            (p->type != IOREQ_TYPE_COPY));
 
-    for ( i = 0; i < curr_d->arch.hvm_domain.io_handler_count; i++ )
+    for ( i = 0; i < curr_d->arch.hvm.io_handler_count; i++ )
     {
         const struct hvm_io_handler *handler =
-            &curr_d->arch.hvm_domain.io_handler[i];
+            &curr_d->arch.hvm.io_handler[i];
         const struct hvm_io_ops *ops = handler->ops;
 
         if ( handler->type != p->type )
@@ -257,9 +257,9 @@ int hvm_io_intercept(ioreq_t *p)
 
 struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
 {
-    unsigned int i = d->arch.hvm_domain.io_handler_count++;
+    unsigned int i = d->arch.hvm.io_handler_count++;
 
-    ASSERT(d->arch.hvm_domain.io_handler);
+    ASSERT(d->arch.hvm.io_handler);
 
     if ( i == NR_IO_HANDLERS )
     {
@@ -267,7 +267,7 @@ struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
         return NULL;
     }
 
-    return &d->arch.hvm_domain.io_handler[i];
+    return &d->arch.hvm.io_handler[i];
 }
 
 void register_mmio_handler(struct domain *d,
@@ -303,10 +303,10 @@ void relocate_portio_handler(struct domain *d, unsigned int old_port,
 {
     unsigned int i;
 
-    for ( i = 0; i < d->arch.hvm_domain.io_handler_count; i++ )
+    for ( i = 0; i < d->arch.hvm.io_handler_count; i++ )
     {
         struct hvm_io_handler *handler =
-            &d->arch.hvm_domain.io_handler[i];
+            &d->arch.hvm.io_handler[i];
 
         if ( handler->type != IOREQ_TYPE_PIO )
             continue;
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf4d874..f1ea7d7 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -179,12 +179,12 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler,
                                 const ioreq_t *p)
 {
     struct vcpu *curr = current;
-    const struct hvm_domain *hvm_domain = &curr->domain->arch.hvm_domain;
+    const struct hvm_domain *hvm = &curr->domain->arch.hvm;
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     struct g2m_ioport *g2m_ioport;
     unsigned int start, end;
 
-    list_for_each_entry( g2m_ioport, &hvm_domain->g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &hvm->g2m_ioport_list, list )
     {
         start = g2m_ioport->gport;
         end = start + g2m_ioport->np;
@@ -313,12 +313,12 @@ static int vpci_portio_read(const struct hvm_io_handler *handler,
     if ( addr == 0xcf8 )
     {
         ASSERT(size == 4);
-        *data = d->arch.hvm_domain.pci_cf8;
+        *data = d->arch.hvm.pci_cf8;
         return X86EMUL_OKAY;
     }
 
     ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm_domain.pci_cf8);
+    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
     if ( !CF8_ENABLED(cf8) )
         return X86EMUL_UNHANDLEABLE;
 
@@ -343,12 +343,12 @@ static int vpci_portio_write(const struct hvm_io_handler *handler,
     if ( addr == 0xcf8 )
     {
         ASSERT(size == 4);
-        d->arch.hvm_domain.pci_cf8 = data;
+        d->arch.hvm.pci_cf8 = data;
         return X86EMUL_OKAY;
     }
 
     ASSERT((addr & ~3) == 0xcfc);
-    cf8 = ACCESS_ONCE(d->arch.hvm_domain.pci_cf8);
+    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
     if ( !CF8_ENABLED(cf8) )
         return X86EMUL_UNHANDLEABLE;
 
@@ -397,7 +397,7 @@ static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
 {
     const struct hvm_mmcfg *mmcfg;
 
-    list_for_each_entry ( mmcfg, &d->arch.hvm_domain.mmcfg_regions, next )
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
         if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
             return mmcfg;
 
@@ -420,9 +420,9 @@ static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
     struct domain *d = v->domain;
     bool found;
 
-    read_lock(&d->arch.hvm_domain.mmcfg_lock);
+    read_lock(&d->arch.hvm.mmcfg_lock);
     found = vpci_mmcfg_find(d, addr);
-    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     return found;
 }
@@ -437,16 +437,16 @@ static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
 
     *data = ~0ul;
 
-    read_lock(&d->arch.hvm_domain.mmcfg_lock);
+    read_lock(&d->arch.hvm.mmcfg_lock);
     mmcfg = vpci_mmcfg_find(d, addr);
     if ( !mmcfg )
     {
-        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
+        read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
     reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
          (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
@@ -479,16 +479,16 @@ static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
     unsigned int reg;
     pci_sbdf_t sbdf;
 
-    read_lock(&d->arch.hvm_domain.mmcfg_lock);
+    read_lock(&d->arch.hvm.mmcfg_lock);
     mmcfg = vpci_mmcfg_find(d, addr);
     if ( !mmcfg )
     {
-        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
+        read_unlock(&d->arch.hvm.mmcfg_lock);
         return X86EMUL_RETRY;
     }
 
     reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
-    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
+    read_unlock(&d->arch.hvm.mmcfg_lock);
 
     if ( !vpci_access_allowed(reg, len) ||
          (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
@@ -527,8 +527,8 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
     new->segment = seg;
     new->size = (end_bus - start_bus + 1) << 20;
 
-    write_lock(&d->arch.hvm_domain.mmcfg_lock);
-    list_for_each_entry ( mmcfg, &d->arch.hvm_domain.mmcfg_regions, next )
+    write_lock(&d->arch.hvm.mmcfg_lock);
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
         if ( new->addr < mmcfg->addr + mmcfg->size &&
              mmcfg->addr < new->addr + new->size )
         {
@@ -539,25 +539,25 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
                  new->segment == mmcfg->segment &&
                  new->size == mmcfg->size )
                 ret = 0;
-            write_unlock(&d->arch.hvm_domain.mmcfg_lock);
+            write_unlock(&d->arch.hvm.mmcfg_lock);
             xfree(new);
             return ret;
         }
 
-    if ( list_empty(&d->arch.hvm_domain.mmcfg_regions) )
+    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
         register_mmio_handler(d, &vpci_mmcfg_ops);
 
-    list_add(&new->next, &d->arch.hvm_domain.mmcfg_regions);
-    write_unlock(&d->arch.hvm_domain.mmcfg_lock);
+    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
 
     return 0;
 }
 
 void destroy_vpci_mmcfg(struct domain *d)
 {
-    struct list_head *mmcfg_regions = &d->arch.hvm_domain.mmcfg_regions;
+    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
 
-    write_lock(&d->arch.hvm_domain.mmcfg_lock);
+    write_lock(&d->arch.hvm.mmcfg_lock);
     while ( !list_empty(mmcfg_regions) )
     {
         struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
@@ -566,7 +566,7 @@ void destroy_vpci_mmcfg(struct domain *d)
         list_del(&mmcfg->next);
         xfree(mmcfg);
     }
-    write_unlock(&d->arch.hvm_domain.mmcfg_lock);
+    write_unlock(&d->arch.hvm.mmcfg_lock);
 }
 
 /*
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 940a2c9..8d60b02 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct hvm_ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm_domain.ioreq_server.server[id]);
+    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
 
-    d->arch.hvm_domain.ioreq_server.server[id] = s;
+    d->arch.hvm.ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm_domain.ioreq_server.server[id]
+    (d)->arch.hvm.ioreq_server.server[id]
 
 static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
                                                  unsigned int id)
@@ -247,10 +247,10 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 
     ASSERT(!IS_DEFAULT(s));
 
-    for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
+    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
     {
-        if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
-            return _gfn(d->arch.hvm_domain.ioreq_gfn.base + i);
+        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
+            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
     }
 
     return INVALID_GFN;
@@ -259,12 +259,12 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
 static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
-    unsigned int i = gfn_x(gfn) - d->arch.hvm_domain.ioreq_gfn.base;
+    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
 
     ASSERT(!IS_DEFAULT(s));
     ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
-    set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
+    set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
 }
 
 static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
@@ -307,8 +307,8 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
 
     if ( IS_DEFAULT(s) )
         iorp->gfn = _gfn(buf ?
-                         d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] :
-                         d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN]);
+                         d->arch.hvm.params[HVM_PARAM_BUFIOREQ_PFN] :
+                         d->arch.hvm.params[HVM_PARAM_IOREQ_PFN]);
     else
         iorp->gfn = hvm_alloc_ioreq_gfn(s);
 
@@ -394,7 +394,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -405,7 +405,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return found;
 }
@@ -492,7 +492,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
 
         s->bufioreq_evtchn = rc;
         if ( IS_DEFAULT(s) )
-            d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
+            d->arch.hvm.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
                 s->bufioreq_evtchn;
     }
 
@@ -797,7 +797,7 @@ int hvm_create_ioreq_server(struct domain *d, bool is_default,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     if ( is_default )
     {
@@ -841,13 +841,13 @@ int hvm_create_ioreq_server(struct domain *d, bool is_default,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -862,7 +862,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     if ( id == DEFAULT_IOSERVID )
         return -EPERM;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -898,7 +898,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -914,7 +914,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     if ( id == DEFAULT_IOSERVID )
         return -EOPNOTSUPP;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -950,7 +950,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -967,7 +967,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1007,7 +1007,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -1026,7 +1026,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( id == DEFAULT_IOSERVID )
         return -EOPNOTSUPP;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1064,7 +1064,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -1083,7 +1083,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( id == DEFAULT_IOSERVID )
         return -EOPNOTSUPP;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1121,7 +1121,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -1149,7 +1149,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1166,7 +1166,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = p2m_set_ioreq_server(d, flags, s);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     if ( rc == 0 && flags == 0 )
     {
@@ -1188,7 +1188,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     if ( id == DEFAULT_IOSERVID )
         return -EOPNOTSUPP;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1214,7 +1214,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
     return rc;
 }
 
@@ -1224,7 +1224,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -1233,7 +1233,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return 0;
 
@@ -1248,7 +1248,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     return rc;
 }
@@ -1258,12 +1258,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1271,7 +1271,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1291,7 +1291,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
 struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1306,7 +1306,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
         return GET_IOREQ_SERVER(d, DEFAULT_IOSERVID);
 
-    cf8 = d->arch.hvm_domain.pci_cf8;
+    cf8 = d->arch.hvm.pci_cf8;
 
     if ( p->type == IOREQ_TYPE_PIO &&
          (p->addr & ~3) == 0xcfc &&
@@ -1564,7 +1564,7 @@ static int hvm_access_cf8(
     struct domain *d = current->domain;
 
     if ( dir == IOREQ_WRITE && bytes == 4 )
-        d->arch.hvm_domain.pci_cf8 = *val;
+        d->arch.hvm.pci_cf8 = *val;
 
     /* We always need to fall through to the catch all emulator */
     return X86EMUL_UNHANDLEABLE;
@@ -1572,7 +1572,7 @@ static int hvm_access_cf8(
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
+    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index dfe8ed6..1ded2c2 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -52,11 +52,11 @@ int hvm_ioapic_assert(struct domain *d, unsigned int gsi, bool level)
         return -1;
     }
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     if ( !level || hvm_irq->gsi_assert_count[gsi]++ == 0 )
         assert_gsi(d, gsi);
     vector = vioapic_get_vector(d, gsi);
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     return vector;
 }
@@ -71,9 +71,9 @@ void hvm_ioapic_deassert(struct domain *d, unsigned int gsi)
         return;
     }
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     hvm_irq->gsi_assert_count[gsi]--;
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 static void assert_irq(struct domain *d, unsigned ioapic_gsi, unsigned pic_irq)
@@ -122,9 +122,9 @@ static void __hvm_pci_intx_assert(
 void hvm_pci_intx_assert(
     struct domain *d, unsigned int device, unsigned int intx)
 {
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     __hvm_pci_intx_assert(d, device, intx);
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 static void __hvm_pci_intx_deassert(
@@ -156,9 +156,9 @@ static void __hvm_pci_intx_deassert(
 void hvm_pci_intx_deassert(
     struct domain *d, unsigned int device, unsigned int intx)
 {
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     __hvm_pci_intx_deassert(d, device, intx);
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 void hvm_gsi_assert(struct domain *d, unsigned int gsi)
@@ -179,13 +179,13 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
      * for the hardware domain, Xen needs to rely on gsi_assert_count in order
      * to know if the GSI is pending or not.
      */
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     if ( !hvm_irq->gsi_assert_count[gsi] )
     {
         hvm_irq->gsi_assert_count[gsi] = 1;
         assert_gsi(d, gsi);
     }
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 void hvm_gsi_deassert(struct domain *d, unsigned int gsi)
@@ -198,9 +198,9 @@ void hvm_gsi_deassert(struct domain *d, unsigned int gsi)
         return;
     }
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
     hvm_irq->gsi_assert_count[gsi] = 0;
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
@@ -213,7 +213,7 @@ int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
 
     ASSERT(isa_irq <= 15);
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     if ( !__test_and_set_bit(isa_irq, &hvm_irq->isa_irq.i) &&
          (hvm_irq->gsi_assert_count[gsi]++ == 0) )
@@ -222,7 +222,7 @@ int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
     if ( get_vector )
         vector = get_vector(d, gsi);
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     return vector;
 }
@@ -235,13 +235,13 @@ void hvm_isa_irq_deassert(
 
     ASSERT(isa_irq <= 15);
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     if ( __test_and_clear_bit(isa_irq, &hvm_irq->isa_irq.i) &&
          (--hvm_irq->gsi_assert_count[gsi] == 0) )
         deassert_irq(d, isa_irq);
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 static void hvm_set_callback_irq_level(struct vcpu *v)
@@ -252,7 +252,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
 
     ASSERT(v->vcpu_id == 0);
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     /* NB. Do not check the evtchn_upcall_mask. It is not used in HVM mode. */
     asserted = !!vcpu_info(v, evtchn_upcall_pending);
@@ -289,7 +289,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
     }
 
  out:
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 void hvm_maybe_deassert_evtchn_irq(void)
@@ -331,7 +331,7 @@ int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
     if ( (link > 3) || (isa_irq > 15) )
         return -EINVAL;
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     old_isa_irq = hvm_irq->pci_link.route[link];
     if ( old_isa_irq == isa_irq )
@@ -363,7 +363,7 @@ int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
     }
 
  out:
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     dprintk(XENLOG_G_INFO, "Dom%u PCI link %u changed %u -> %u\n",
             d->domain_id, link, old_isa_irq, isa_irq);
@@ -431,7 +431,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
          (!has_vlapic(d) || !has_vioapic(d) || !has_vpic(d)) )
         return;
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     /* Tear down old callback via. */
     if ( hvm_irq->callback_via_asserted )
@@ -481,7 +481,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
         break;
     }
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     for_each_vcpu ( d, v )
         if ( is_vcpu_online(v) )
@@ -509,7 +509,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
 
 struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
 {
-    struct hvm_domain *plat = &v->domain->arch.hvm_domain;
+    struct hvm_domain *plat = &v->domain->arch.hvm;
     int vector;
 
     if ( unlikely(v->nmi_pending) )
@@ -645,7 +645,7 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
     unsigned int asserted, pdev, pintx;
     int rc;
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     pdev  = hvm_irq->callback_via.pci.dev;
     pintx = hvm_irq->callback_via.pci.intx;
@@ -666,7 +666,7 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
     if ( asserted )
         __hvm_pci_intx_assert(d, pdev, pintx);    
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     return rc;
 }
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index edfe5cd..8a772bc 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -539,12 +539,12 @@ static DEFINE_RCU_READ_LOCK(pinned_cacheattr_rcu_lock);
 
 void hvm_init_cacheattr_region_list(struct domain *d)
 {
-    INIT_LIST_HEAD(&d->arch.hvm_domain.pinned_cacheattr_ranges);
+    INIT_LIST_HEAD(&d->arch.hvm.pinned_cacheattr_ranges);
 }
 
 void hvm_destroy_cacheattr_region_list(struct domain *d)
 {
-    struct list_head *head = &d->arch.hvm_domain.pinned_cacheattr_ranges;
+    struct list_head *head = &d->arch.hvm.pinned_cacheattr_ranges;
     struct hvm_mem_pinned_cacheattr_range *range;
 
     while ( !list_empty(head) )
@@ -568,7 +568,7 @@ int hvm_get_mem_pinned_cacheattr(struct domain *d, gfn_t gfn,
 
     rcu_read_lock(&pinned_cacheattr_rcu_lock);
     list_for_each_entry_rcu ( range,
-                              &d->arch.hvm_domain.pinned_cacheattr_ranges,
+                              &d->arch.hvm.pinned_cacheattr_ranges,
                               list )
     {
         if ( ((gfn_x(gfn) & mask) >= range->start) &&
@@ -612,7 +612,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
         /* Remove the requested range. */
         rcu_read_lock(&pinned_cacheattr_rcu_lock);
         list_for_each_entry_rcu ( range,
-                                  &d->arch.hvm_domain.pinned_cacheattr_ranges,
+                                  &d->arch.hvm.pinned_cacheattr_ranges,
                                   list )
             if ( range->start == gfn_start && range->end == gfn_end )
             {
@@ -655,7 +655,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
 
     rcu_read_lock(&pinned_cacheattr_rcu_lock);
     list_for_each_entry_rcu ( range,
-                              &d->arch.hvm_domain.pinned_cacheattr_ranges,
+                              &d->arch.hvm.pinned_cacheattr_ranges,
                               list )
     {
         if ( range->start == gfn_start && range->end == gfn_end )
@@ -682,7 +682,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
     range->end = gfn_end;
     range->type = type;
 
-    list_add_rcu(&range->list, &d->arch.hvm_domain.pinned_cacheattr_ranges);
+    list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
     p2m_memory_type_changed(d);
     if ( type != PAT_TYPE_WRBACK )
         flush_all(FLUSH_CACHE);
@@ -827,7 +827,7 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
 
     if ( direct_mmio )
     {
-        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
+        if ( (mfn_x(mfn) ^ d->arch.hvm.vmx.apic_access_mfn) >> order )
             return MTRR_TYPE_UNCACHABLE;
         if ( order )
             return -1;
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 435647f..75b9408 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -56,7 +56,7 @@
 /* Dispatch SCIs based on the PM1a_STS and PM1a_EN registers */
 static void pmt_update_sci(PMTState *s)
 {
-    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm_domain.acpi;
+    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm.acpi;
 
     ASSERT(spin_is_locked(&s->lock));
 
@@ -68,26 +68,26 @@ static void pmt_update_sci(PMTState *s)
 
 void hvm_acpi_power_button(struct domain *d)
 {
-    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
+    PMTState *s = &d->arch.hvm.pl_time->vpmt;
 
     if ( !has_vpm(d) )
         return;
 
     spin_lock(&s->lock);
-    d->arch.hvm_domain.acpi.pm1a_sts |= PWRBTN_STS;
+    d->arch.hvm.acpi.pm1a_sts |= PWRBTN_STS;
     pmt_update_sci(s);
     spin_unlock(&s->lock);
 }
 
 void hvm_acpi_sleep_button(struct domain *d)
 {
-    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
+    PMTState *s = &d->arch.hvm.pl_time->vpmt;
 
     if ( !has_vpm(d) )
         return;
 
     spin_lock(&s->lock);
-    d->arch.hvm_domain.acpi.pm1a_sts |= PWRBTN_STS;
+    d->arch.hvm.acpi.pm1a_sts |= PWRBTN_STS;
     pmt_update_sci(s);
     spin_unlock(&s->lock);
 }
@@ -97,7 +97,7 @@ void hvm_acpi_sleep_button(struct domain *d)
 static void pmt_update_time(PMTState *s)
 {
     uint64_t curr_gtime, tmp;
-    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm_domain.acpi;
+    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm.acpi;
     uint32_t tmr_val = acpi->tmr_val, msb = tmr_val & TMR_VAL_MSB;
     
     ASSERT(spin_is_locked(&s->lock));
@@ -137,7 +137,7 @@ static void pmt_timer_callback(void *opaque)
 
     /* How close are we to the next MSB flip? */
     pmt_cycles_until_flip = TMR_VAL_MSB -
-        (s->vcpu->domain->arch.hvm_domain.acpi.tmr_val & (TMR_VAL_MSB - 1));
+        (s->vcpu->domain->arch.hvm.acpi.tmr_val & (TMR_VAL_MSB - 1));
 
     /* Overall time between MSB flips */
     time_until_flip = (1000000000ULL << 23) / FREQUENCE_PMTIMER;
@@ -156,13 +156,13 @@ static int handle_evt_io(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
     struct vcpu *v = current;
-    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm_domain.acpi;
-    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
+    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm.acpi;
+    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
     uint32_t addr, data, byte;
     int i;
 
     addr = port -
-        ((v->domain->arch.hvm_domain.params[
+        ((v->domain->arch.hvm.params[
             HVM_PARAM_ACPI_IOPORTS_LOCATION] == 0) ?
          PM1a_STS_ADDR_V0 : PM1a_STS_ADDR_V1);
 
@@ -220,8 +220,8 @@ static int handle_pmt_io(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
     struct vcpu *v = current;
-    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm_domain.acpi;
-    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
+    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm.acpi;
+    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
 
     if ( bytes != 4 || dir != IOREQ_READ )
     {
@@ -251,8 +251,8 @@ static int handle_pmt_io(
 
 static int acpi_save(struct domain *d, hvm_domain_context_t *h)
 {
-    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
-    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
+    struct hvm_hw_acpi *acpi = &d->arch.hvm.acpi;
+    PMTState *s = &d->arch.hvm.pl_time->vpmt;
     uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
     int rc;
 
@@ -282,8 +282,8 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
 
 static int acpi_load(struct domain *d, hvm_domain_context_t *h)
 {
-    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
-    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
+    struct hvm_hw_acpi *acpi = &d->arch.hvm.acpi;
+    PMTState *s = &d->arch.hvm.pl_time->vpmt;
 
     if ( !has_vpm(d) )
         return -ENODEV;
@@ -320,7 +320,7 @@ int pmtimer_change_ioport(struct domain *d, unsigned int version)
         return -ENODEV;
 
     /* Check that version is changing. */
-    old_version = d->arch.hvm_domain.params[HVM_PARAM_ACPI_IOPORTS_LOCATION];
+    old_version = d->arch.hvm.params[HVM_PARAM_ACPI_IOPORTS_LOCATION];
     if ( version == old_version )
         return 0;
 
@@ -346,7 +346,7 @@ int pmtimer_change_ioport(struct domain *d, unsigned int version)
 
 void pmtimer_init(struct vcpu *v)
 {
-    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
+    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
 
     if ( !has_vpm(v->domain) )
         return;
@@ -370,7 +370,7 @@ void pmtimer_init(struct vcpu *v)
 
 void pmtimer_deinit(struct domain *d)
 {
-    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
+    PMTState *s = &d->arch.hvm.pl_time->vpmt;
 
     if ( !has_vpm(d) )
         return;
@@ -384,7 +384,7 @@ void pmtimer_reset(struct domain *d)
         return;
 
     /* Reset the counter. */
-    d->arch.hvm_domain.acpi.tmr_val = 0;
+    d->arch.hvm.acpi.tmr_val = 0;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 96921bb..1828587 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -38,7 +38,7 @@
 #define MIN_PER_HOUR    60
 #define HOUR_PER_DAY    24
 
-#define domain_vrtc(x) (&(x)->arch.hvm_domain.pl_time->vrtc)
+#define domain_vrtc(x) (&(x)->arch.hvm.pl_time->vrtc)
 #define vcpu_vrtc(x)   (domain_vrtc((x)->domain))
 #define vrtc_domain(x) (container_of(x, struct pl_time, vrtc)->domain)
 #define vrtc_vcpu(x)   (pt_global_vcpu_target(vrtc_domain(x)))
@@ -148,7 +148,7 @@ static void rtc_timer_update(RTCState *s)
                 s_time_t now = NOW();
 
                 s->period = period;
-                if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
+                if ( v->domain->arch.hvm.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
                     delta = period - ((now - s->start_time) % period);
diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index d2dc430..0ace160 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -39,7 +39,7 @@ void arch_hvm_save(struct domain *d, struct hvm_save_header *hdr)
     hdr->gtsc_khz = d->arch.tsc_khz;
 
     /* Time when saving started */
-    d->arch.hvm_domain.sync_tsc = rdtsc();
+    d->arch.hvm.sync_tsc = rdtsc();
 }
 
 int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
@@ -74,10 +74,10 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
         hvm_set_rdtsc_exiting(d, 1);
 
     /* Time when restore started  */
-    d->arch.hvm_domain.sync_tsc = rdtsc();
+    d->arch.hvm.sync_tsc = rdtsc();
 
     /* VGA state is not saved/restored, so we nobble the cache. */
-    d->arch.hvm_domain.stdvga.cache = STDVGA_CACHE_DISABLED;
+    d->arch.hvm.stdvga.cache = STDVGA_CACHE_DISABLED;
 
     return 0;
 }
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index 925bab2..bd398db 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -134,7 +134,7 @@ static bool_t stdvga_cache_is_enabled(const struct hvm_hw_stdvga *s)
 
 static int stdvga_outb(uint64_t addr, uint8_t val)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
     int rc = 1, prev_stdvga = s->stdvga;
 
     switch ( addr )
@@ -202,7 +202,7 @@ static void stdvga_out(uint32_t port, uint32_t bytes, uint32_t val)
 static int stdvga_intercept_pio(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
 
     if ( dir == IOREQ_WRITE )
     {
@@ -252,7 +252,7 @@ static unsigned int stdvga_mem_offset(
 
 static uint8_t stdvga_mem_readb(uint64_t addr)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
     int plane;
     uint32_t ret, *vram_l;
     uint8_t *vram_b;
@@ -347,7 +347,7 @@ static int stdvga_mem_read(const struct hvm_io_handler *handler,
 
 static void stdvga_mem_writeb(uint64_t addr, uint32_t val)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
     int plane, write_mode, b, func_select, mask;
     uint32_t write_mask, bit_mask, set_mask, *vram_l;
     uint8_t *vram_b;
@@ -457,7 +457,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
                             uint64_t addr, uint32_t size,
                             uint64_t data)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
     ioreq_t p = {
         .type = IOREQ_TYPE_COPY,
         .addr = addr,
@@ -517,7 +517,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
                                 const ioreq_t *p)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
 
     /*
      * The range check must be done without taking the lock, to avoid
@@ -560,7 +560,7 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
 
 static void stdvga_mem_complete(const struct hvm_io_handler *handler)
 {
-    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
 
     spin_unlock(&s->lock);
 }
@@ -574,7 +574,7 @@ static const struct hvm_io_ops stdvga_mem_ops = {
 
 void stdvga_init(struct domain *d)
 {
-    struct hvm_hw_stdvga *s = &d->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga;
     struct page_info *pg;
     unsigned int i;
 
@@ -615,7 +615,7 @@ void stdvga_init(struct domain *d)
 
 void stdvga_deinit(struct domain *d)
 {
-    struct hvm_hw_stdvga *s = &d->arch.hvm_domain.stdvga;
+    struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga;
     int i;
 
     if ( !has_vvga(d) )
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index a16f372..2d52247 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1197,7 +1197,7 @@ void svm_vmenter_helper(const struct cpu_user_regs *regs)
 
 static void svm_guest_osvw_init(struct domain *d)
 {
-    struct svm_domain *svm = &d->arch.hvm_domain.svm;
+    struct svm_domain *svm = &d->arch.hvm.svm;
 
     spin_lock(&osvw_lock);
 
@@ -2006,8 +2006,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_AMD_OSVW_STATUS:
         if ( !d->arch.cpuid->extd.osvw )
             goto gpf;
-        *msr_content =
-            d->arch.hvm_domain.svm.osvw.raw[msr - MSR_AMD_OSVW_ID_LENGTH];
+        *msr_content = d->arch.hvm.svm.osvw.raw[msr - MSR_AMD_OSVW_ID_LENGTH];
         break;
 
     default:
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 04518fd..d31fcfa 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -106,7 +106,7 @@ static int construct_vmcb(struct vcpu *v)
         svm_disable_intercept_for_msr(v, MSR_AMD64_LWP_CBADDR);
 
     vmcb->_msrpm_base_pa = (u64)virt_to_maddr(arch_svm->msrpm);
-    vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm_domain.io_bitmap);
+    vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm.io_bitmap);
 
     /* Virtualise EFLAGS.IF and LAPIC TPR (CR8). */
     vmcb->_vintr.fields.intr_masking = 1;
diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 97b419f..9675424 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -49,7 +49,7 @@ static struct hvm_vioapic *addr_vioapic(const struct domain *d,
 {
     unsigned int i;
 
-    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
     {
         struct hvm_vioapic *vioapic = domain_vioapic(d, i);
 
@@ -66,7 +66,7 @@ static struct hvm_vioapic *gsi_vioapic(const struct domain *d,
 {
     unsigned int i;
 
-    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
     {
         struct hvm_vioapic *vioapic = domain_vioapic(d, i);
 
@@ -214,7 +214,7 @@ static void vioapic_write_redirent(
     int unmasked = 0;
     unsigned int gsi = vioapic->base_gsi + idx;
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
     pent = &vioapic->redirtbl[idx];
     ent  = *pent;
@@ -264,7 +264,7 @@ static void vioapic_write_redirent(
         vioapic_deliver(vioapic, idx);
     }
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 
     if ( gsi == 0 || unmasked )
         pt_may_unmask_irq(d, NULL);
@@ -388,7 +388,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     struct vcpu *v;
     unsigned int irq = vioapic->base_gsi + pin;
 
-    ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
+    ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
                 "dest=%x dest_mode=%x delivery_mode=%x "
@@ -476,7 +476,7 @@ void vioapic_irq_positive_edge(struct domain *d, unsigned int irq)
     HVM_DBG_LOG(DBG_LEVEL_IOAPIC, "irq %x", irq);
 
     ASSERT(pin < vioapic->nr_pins);
-    ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
+    ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
 
     ent = &vioapic->redirtbl[pin];
     if ( ent->fields.mask )
@@ -501,9 +501,9 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
 
     ASSERT(has_vioapic(d));
 
-    spin_lock(&d->arch.hvm_domain.irq_lock);
+    spin_lock(&d->arch.hvm.irq_lock);
 
-    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
     {
         struct hvm_vioapic *vioapic = domain_vioapic(d, i);
         unsigned int pin;
@@ -518,9 +518,9 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
 
             if ( iommu_enabled )
             {
-                spin_unlock(&d->arch.hvm_domain.irq_lock);
+                spin_unlock(&d->arch.hvm.irq_lock);
                 hvm_dpci_eoi(d, vioapic->base_gsi + pin, ent);
-                spin_lock(&d->arch.hvm_domain.irq_lock);
+                spin_lock(&d->arch.hvm.irq_lock);
             }
 
             if ( (ent->fields.trig_mode == VIOAPIC_LEVEL_TRIG) &&
@@ -533,7 +533,7 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
         }
     }
 
-    spin_unlock(&d->arch.hvm_domain.irq_lock);
+    spin_unlock(&d->arch.hvm.irq_lock);
 }
 
 int vioapic_get_mask(const struct domain *d, unsigned int gsi)
@@ -579,7 +579,7 @@ static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
     s = domain_vioapic(d, 0);
 
     if ( s->nr_pins != ARRAY_SIZE(s->domU.redirtbl) ||
-         d->arch.hvm_domain.nr_vioapics != 1 )
+         d->arch.hvm.nr_vioapics != 1 )
         return -EOPNOTSUPP;
 
     return hvm_save_entry(IOAPIC, 0, h, &s->domU);
@@ -595,7 +595,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
     s = domain_vioapic(d, 0);
 
     if ( s->nr_pins != ARRAY_SIZE(s->domU.redirtbl) ||
-         d->arch.hvm_domain.nr_vioapics != 1 )
+         d->arch.hvm.nr_vioapics != 1 )
         return -EOPNOTSUPP;
 
     return hvm_load_entry(IOAPIC, h, &s->domU);
@@ -609,11 +609,11 @@ void vioapic_reset(struct domain *d)
 
     if ( !has_vioapic(d) )
     {
-        ASSERT(!d->arch.hvm_domain.nr_vioapics);
+        ASSERT(!d->arch.hvm.nr_vioapics);
         return;
     }
 
-    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
+    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
     {
         struct hvm_vioapic *vioapic = domain_vioapic(d, i);
         unsigned int nr_pins = vioapic->nr_pins, base_gsi = vioapic->base_gsi;
@@ -646,7 +646,7 @@ static void vioapic_free(const struct domain *d, unsigned int nr_vioapics)
 
     for ( i = 0; i < nr_vioapics; i++)
         xfree(domain_vioapic(d, i));
-    xfree(d->arch.hvm_domain.vioapic);
+    xfree(d->arch.hvm.vioapic);
 }
 
 int vioapic_init(struct domain *d)
@@ -655,14 +655,14 @@ int vioapic_init(struct domain *d)
 
     if ( !has_vioapic(d) )
     {
-        ASSERT(!d->arch.hvm_domain.nr_vioapics);
+        ASSERT(!d->arch.hvm.nr_vioapics);
         return 0;
     }
 
     nr_vioapics = is_hardware_domain(d) ? nr_ioapics : 1;
 
-    if ( (d->arch.hvm_domain.vioapic == NULL) &&
-         ((d->arch.hvm_domain.vioapic =
+    if ( (d->arch.hvm.vioapic == NULL) &&
+         ((d->arch.hvm.vioapic =
            xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
         return -ENOMEM;
 
@@ -699,7 +699,7 @@ int vioapic_init(struct domain *d)
      */
     ASSERT(hvm_domain_irq(d)->nr_gsis >= nr_gsis);
 
-    d->arch.hvm_domain.nr_vioapics = nr_vioapics;
+    d->arch.hvm.nr_vioapics = nr_vioapics;
     vioapic_reset(d);
 
     register_mmio_handler(d, &vioapic_mmio_ops);
@@ -711,9 +711,9 @@ void vioapic_deinit(struct domain *d)
 {
     if ( !has_vioapic(d) )
     {
-        ASSERT(!d->arch.hvm_domain.nr_vioapics);
+        ASSERT(!d->arch.hvm.nr_vioapics);
         return;
     }
 
-    vioapic_free(d, d->arch.hvm_domain.nr_vioapics);
+    vioapic_free(d, d->arch.hvm.nr_vioapics);
 }
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 4860651..5ddb41b 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -223,7 +223,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
     case 2:
         /* Hypervisor information, but only if the guest has set its
            own version number. */
-        if ( d->arch.hvm_domain.viridian.guest_os_id.raw == 0 )
+        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
             break;
         res->a = viridian_build;
         res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
@@ -268,8 +268,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
 
     case 4:
         /* Recommended hypercall usage. */
-        if ( (d->arch.hvm_domain.viridian.guest_os_id.raw == 0) ||
-             (d->arch.hvm_domain.viridian.guest_os_id.fields.os < 4) )
+        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
+             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
             break;
         res->a = CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
@@ -301,7 +301,7 @@ static void dump_guest_os_id(const struct domain *d)
 {
     const union viridian_guest_os_id *goi;
 
-    goi = &d->arch.hvm_domain.viridian.guest_os_id;
+    goi = &d->arch.hvm.viridian.guest_os_id;
 
     printk(XENLOG_G_INFO
            "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
@@ -315,7 +315,7 @@ static void dump_hypercall(const struct domain *d)
 {
     const union viridian_hypercall_gpa *hg;
 
-    hg = &d->arch.hvm_domain.viridian.hypercall_gpa;
+    hg = &d->arch.hvm.viridian.hypercall_gpa;
 
     printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -336,7 +336,7 @@ static void dump_reference_tsc(const struct domain *d)
 {
     const union viridian_reference_tsc *rt;
 
-    rt = &d->arch.hvm_domain.viridian.reference_tsc;
+    rt = &d->arch.hvm.viridian.reference_tsc;
     
     printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: enabled: %x pfn: %lx\n",
            d->domain_id,
@@ -345,7 +345,7 @@ static void dump_reference_tsc(const struct domain *d)
 
 static void enable_hypercall_page(struct domain *d)
 {
-    unsigned long gmfn = d->arch.hvm_domain.viridian.hypercall_gpa.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     uint8_t *p;
 
@@ -483,7 +483,7 @@ void viridian_apic_assist_clear(struct vcpu *v)
 
 static void update_reference_tsc(struct domain *d, bool_t initialize)
 {
-    unsigned long gmfn = d->arch.hvm_domain.viridian.reference_tsc.fields.pfn;
+    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     HV_REFERENCE_TSC_PAGE *p;
 
@@ -566,15 +566,15 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
     {
     case HV_X64_MSR_GUEST_OS_ID:
         perfc_incr(mshv_wrmsr_osid);
-        d->arch.hvm_domain.viridian.guest_os_id.raw = val;
+        d->arch.hvm.viridian.guest_os_id.raw = val;
         dump_guest_os_id(d);
         break;
 
     case HV_X64_MSR_HYPERCALL:
         perfc_incr(mshv_wrmsr_hc_page);
-        d->arch.hvm_domain.viridian.hypercall_gpa.raw = val;
+        d->arch.hvm.viridian.hypercall_gpa.raw = val;
         dump_hypercall(d);
-        if ( d->arch.hvm_domain.viridian.hypercall_gpa.fields.enabled )
+        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
             enable_hypercall_page(d);
         break;
 
@@ -618,9 +618,9 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
             return 0;
 
         perfc_incr(mshv_wrmsr_tsc_msr);
-        d->arch.hvm_domain.viridian.reference_tsc.raw = val;
+        d->arch.hvm.viridian.reference_tsc.raw = val;
         dump_reference_tsc(d);
-        if ( d->arch.hvm_domain.viridian.reference_tsc.fields.enabled )
+        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
             update_reference_tsc(d, 1);
         break;
 
@@ -681,7 +681,7 @@ void viridian_time_ref_count_freeze(struct domain *d)
 {
     struct viridian_time_ref_count *trc;
 
-    trc = &d->arch.hvm_domain.viridian.time_ref_count;
+    trc = &d->arch.hvm.viridian.time_ref_count;
 
     if ( test_and_clear_bit(_TRC_running, &trc->flags) )
         trc->val = raw_trc_val(d) + trc->off;
@@ -691,7 +691,7 @@ void viridian_time_ref_count_thaw(struct domain *d)
 {
     struct viridian_time_ref_count *trc;
 
-    trc = &d->arch.hvm_domain.viridian.time_ref_count;
+    trc = &d->arch.hvm.viridian.time_ref_count;
 
     if ( !d->is_shutting_down &&
          !test_and_set_bit(_TRC_running, &trc->flags) )
@@ -710,12 +710,12 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
     {
     case HV_X64_MSR_GUEST_OS_ID:
         perfc_incr(mshv_rdmsr_osid);
-        *val = d->arch.hvm_domain.viridian.guest_os_id.raw;
+        *val = d->arch.hvm.viridian.guest_os_id.raw;
         break;
 
     case HV_X64_MSR_HYPERCALL:
         perfc_incr(mshv_rdmsr_hc_page);
-        *val = d->arch.hvm_domain.viridian.hypercall_gpa.raw;
+        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
         break;
 
     case HV_X64_MSR_VP_INDEX:
@@ -760,14 +760,14 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
             return 0;
 
         perfc_incr(mshv_rdmsr_tsc_msr);
-        *val = d->arch.hvm_domain.viridian.reference_tsc.raw;
+        *val = d->arch.hvm.viridian.reference_tsc.raw;
         break;
 
     case HV_X64_MSR_TIME_REF_COUNT:
     {
         struct viridian_time_ref_count *trc;
 
-        trc = &d->arch.hvm_domain.viridian.time_ref_count;
+        trc = &d->arch.hvm.viridian.time_ref_count;
 
         if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
             return 0;
@@ -993,10 +993,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
 static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
 {
     struct hvm_viridian_domain_context ctxt = {
-        .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
-        .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
-        .guest_os_id    = d->arch.hvm_domain.viridian.guest_os_id.raw,
-        .reference_tsc  = d->arch.hvm_domain.viridian.reference_tsc.raw,
+        .time_ref_count = d->arch.hvm.viridian.time_ref_count.val,
+        .hypercall_gpa  = d->arch.hvm.viridian.hypercall_gpa.raw,
+        .guest_os_id    = d->arch.hvm.viridian.guest_os_id.raw,
+        .reference_tsc  = d->arch.hvm.viridian.reference_tsc.raw,
     };
 
     if ( !is_viridian_domain(d) )
@@ -1012,12 +1012,12 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
     if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
         return -EINVAL;
 
-    d->arch.hvm_domain.viridian.time_ref_count.val = ctxt.time_ref_count;
-    d->arch.hvm_domain.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
-    d->arch.hvm_domain.viridian.guest_os_id.raw    = ctxt.guest_os_id;
-    d->arch.hvm_domain.viridian.reference_tsc.raw  = ctxt.reference_tsc;
+    d->arch.hvm.viridian.time_ref_count.val = ctxt.time_ref_count;
+    d->arch.hvm.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
+    d->arch.hvm.viridian.guest_os_id.raw    = ctxt.guest_os_id;
+    d->arch.hvm.viridian.reference_tsc.raw  = ctxt.reference_tsc;
 
-    if ( d->arch.hvm_domain.viridian.reference_tsc.fields.enabled )
+    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
         update_reference_tsc(d, 0);
 
     return 0;
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index ec089cc..04702e9 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1203,10 +1203,10 @@ int vlapic_accept_pic_intr(struct vcpu *v)
         return 0;
 
     TRACE_2D(TRC_HVM_EMUL_LAPIC_PIC_INTR,
-             (v == v->domain->arch.hvm_domain.i8259_target),
+             (v == v->domain->arch.hvm.i8259_target),
              v ? __vlapic_accept_pic_intr(v) : -1);
 
-    return ((v == v->domain->arch.hvm_domain.i8259_target) &&
+    return ((v == v->domain->arch.hvm.i8259_target) &&
             __vlapic_accept_pic_intr(v));
 }
 
@@ -1224,9 +1224,9 @@ void vlapic_adjust_i8259_target(struct domain *d)
     v = d->vcpu ? d->vcpu[0] : NULL;
 
  found:
-    if ( d->arch.hvm_domain.i8259_target == v )
+    if ( d->arch.hvm.i8259_target == v )
         return;
-    d->arch.hvm_domain.i8259_target = v;
+    d->arch.hvm.i8259_target = v;
     pt_adjust_global_vcpu_target(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 3001d5c..ccbf181 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -173,7 +173,7 @@ static DEFINE_RCU_READ_LOCK(msixtbl_rcu_lock);
  */
 static bool msixtbl_initialised(const struct domain *d)
 {
-    return !!d->arch.hvm_domain.msixtbl_list.next;
+    return !!d->arch.hvm.msixtbl_list.next;
 }
 
 static struct msixtbl_entry *msixtbl_find_entry(
@@ -182,7 +182,7 @@ static struct msixtbl_entry *msixtbl_find_entry(
     struct msixtbl_entry *entry;
     struct domain *d = v->domain;
 
-    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
+    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
         if ( addr >= entry->gtable &&
              addr < entry->gtable + entry->table_len )
             return entry;
@@ -430,7 +430,7 @@ static void add_msixtbl_entry(struct domain *d,
     entry->pdev = pdev;
     entry->gtable = (unsigned long) gtable;
 
-    list_add_rcu(&entry->list, &d->arch.hvm_domain.msixtbl_list);
+    list_add_rcu(&entry->list, &d->arch.hvm.msixtbl_list);
 }
 
 static void free_msixtbl_entry(struct rcu_head *rcu)
@@ -483,7 +483,7 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
 
     pdev = msi_desc->dev;
 
-    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
+    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
         if ( pdev == entry->pdev )
             goto found;
 
@@ -542,7 +542,7 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
 
     pdev = msi_desc->dev;
 
-    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
+    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
         if ( pdev == entry->pdev )
             goto found;
 
@@ -564,7 +564,7 @@ void msixtbl_init(struct domain *d)
     if ( !is_hvm_domain(d) || !has_vlapic(d) || msixtbl_initialised(d) )
         return;
 
-    INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
+    INIT_LIST_HEAD(&d->arch.hvm.msixtbl_list);
 
     handler = hvm_next_io_handler(d);
     if ( handler )
@@ -584,7 +584,7 @@ void msixtbl_pt_cleanup(struct domain *d)
     spin_lock(&d->event_lock);
 
     list_for_each_entry_safe( entry, temp,
-                              &d->arch.hvm_domain.msixtbl_list, list )
+                              &d->arch.hvm.msixtbl_list, list )
         del_msixtbl_entry(entry);
 
     spin_unlock(&d->event_lock);
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 6681032..f30850c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1108,8 +1108,8 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     /* I/O access bitmap. */
-    __vmwrite(IO_BITMAP_A, __pa(d->arch.hvm_domain.io_bitmap));
-    __vmwrite(IO_BITMAP_B, __pa(d->arch.hvm_domain.io_bitmap) + PAGE_SIZE);
+    __vmwrite(IO_BITMAP_A, __pa(d->arch.hvm.io_bitmap));
+    __vmwrite(IO_BITMAP_B, __pa(d->arch.hvm.io_bitmap) + PAGE_SIZE);
 
     if ( cpu_has_vmx_virtual_intr_delivery )
     {
@@ -1263,7 +1263,7 @@ static int construct_vmcs(struct vcpu *v)
         __vmwrite(XSS_EXIT_BITMAP, 0);
 
     if ( cpu_has_vmx_tsc_scaling )
-        __vmwrite(TSC_MULTIPLIER, d->arch.hvm_domain.tsc_scaling_ratio);
+        __vmwrite(TSC_MULTIPLIER, d->arch.hvm.tsc_scaling_ratio);
 
     /* will update HOST & GUEST_CR3 as reqd */
     paging_update_paging_modes(v);
@@ -1643,7 +1643,7 @@ void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
 
 bool_t vmx_domain_pml_enabled(const struct domain *d)
 {
-    return !!(d->arch.hvm_domain.vmx.status & VMX_DOMAIN_PML_ENABLED);
+    return !!(d->arch.hvm.vmx.status & VMX_DOMAIN_PML_ENABLED);
 }
 
 /*
@@ -1668,7 +1668,7 @@ int vmx_domain_enable_pml(struct domain *d)
         if ( (rc = vmx_vcpu_enable_pml(v)) != 0 )
             goto error;
 
-    d->arch.hvm_domain.vmx.status |= VMX_DOMAIN_PML_ENABLED;
+    d->arch.hvm.vmx.status |= VMX_DOMAIN_PML_ENABLED;
 
     return 0;
 
@@ -1697,7 +1697,7 @@ void vmx_domain_disable_pml(struct domain *d)
     for_each_vcpu ( d, v )
         vmx_vcpu_disable_pml(v);
 
-    d->arch.hvm_domain.vmx.status &= ~VMX_DOMAIN_PML_ENABLED;
+    d->arch.hvm.vmx.status &= ~VMX_DOMAIN_PML_ENABLED;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 73f0d52..ccfbacb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -318,7 +318,7 @@ void vmx_pi_hooks_assign(struct domain *d)
     if ( !iommu_intpost || !is_hvm_domain(d) )
         return;
 
-    ASSERT(!d->arch.hvm_domain.pi_ops.vcpu_block);
+    ASSERT(!d->arch.hvm.pi_ops.vcpu_block);
 
     /*
      * We carefully handle the timing here:
@@ -329,8 +329,8 @@ void vmx_pi_hooks_assign(struct domain *d)
      * This can make sure the PI (especially the NDST feild) is
      * in proper state when we call vmx_vcpu_block().
      */
-    d->arch.hvm_domain.pi_ops.switch_from = vmx_pi_switch_from;
-    d->arch.hvm_domain.pi_ops.switch_to = vmx_pi_switch_to;
+    d->arch.hvm.pi_ops.switch_from = vmx_pi_switch_from;
+    d->arch.hvm.pi_ops.switch_to = vmx_pi_switch_to;
 
     for_each_vcpu ( d, v )
     {
@@ -345,8 +345,8 @@ void vmx_pi_hooks_assign(struct domain *d)
                 x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
     }
 
-    d->arch.hvm_domain.pi_ops.vcpu_block = vmx_vcpu_block;
-    d->arch.hvm_domain.pi_ops.do_resume = vmx_pi_do_resume;
+    d->arch.hvm.pi_ops.vcpu_block = vmx_vcpu_block;
+    d->arch.hvm.pi_ops.do_resume = vmx_pi_do_resume;
 }
 
 /* This function is called when pcidevs_lock is held */
@@ -357,7 +357,7 @@ void vmx_pi_hooks_deassign(struct domain *d)
     if ( !iommu_intpost || !is_hvm_domain(d) )
         return;
 
-    ASSERT(d->arch.hvm_domain.pi_ops.vcpu_block);
+    ASSERT(d->arch.hvm.pi_ops.vcpu_block);
 
     /*
      * Pausing the domain can make sure the vCPUs are not
@@ -369,7 +369,7 @@ void vmx_pi_hooks_deassign(struct domain *d)
     domain_pause(d);
 
     /*
-     * Note that we don't set 'd->arch.hvm_domain.pi_ops.switch_to' to NULL
+     * Note that we don't set 'd->arch.hvm.pi_ops.switch_to' to NULL
      * here. If we deassign the hooks while the vCPU is runnable in the
      * runqueue with 'SN' set, all the future notification event will be
      * suppressed since vmx_deliver_posted_intr() also use 'SN' bit
@@ -382,9 +382,9 @@ void vmx_pi_hooks_deassign(struct domain *d)
      * system, leave it here until we find a clean solution to deassign the
      * 'switch_to' hook function.
      */
-    d->arch.hvm_domain.pi_ops.vcpu_block = NULL;
-    d->arch.hvm_domain.pi_ops.switch_from = NULL;
-    d->arch.hvm_domain.pi_ops.do_resume = NULL;
+    d->arch.hvm.pi_ops.vcpu_block = NULL;
+    d->arch.hvm.pi_ops.switch_from = NULL;
+    d->arch.hvm.pi_ops.do_resume = NULL;
 
     for_each_vcpu ( d, v )
         vmx_pi_unblock_vcpu(v);
@@ -934,8 +934,8 @@ static void vmx_ctxt_switch_from(struct vcpu *v)
     vmx_restore_host_msrs();
     vmx_save_dr(v);
 
-    if ( v->domain->arch.hvm_domain.pi_ops.switch_from )
-        v->domain->arch.hvm_domain.pi_ops.switch_from(v);
+    if ( v->domain->arch.hvm.pi_ops.switch_from )
+        v->domain->arch.hvm.pi_ops.switch_from(v);
 }
 
 static void vmx_ctxt_switch_to(struct vcpu *v)
@@ -943,8 +943,8 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
     vmx_restore_guest_msrs(v);
     vmx_restore_dr(v);
 
-    if ( v->domain->arch.hvm_domain.pi_ops.switch_to )
-        v->domain->arch.hvm_domain.pi_ops.switch_to(v);
+    if ( v->domain->arch.hvm.pi_ops.switch_to )
+        v->domain->arch.hvm.pi_ops.switch_to(v);
 }
 
 
@@ -1104,7 +1104,7 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
         if ( seg == x86_seg_tr ) 
         {
             const struct domain *d = v->domain;
-            uint64_t val = d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED];
+            uint64_t val = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
 
             if ( val )
             {
@@ -1115,7 +1115,7 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
                 if ( val & VM86_TSS_UPDATED )
                 {
                     hvm_prepare_vm86_tss(v, base, limit);
-                    cmpxchg(&d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED],
+                    cmpxchg(&d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED],
                             val, val & ~VM86_TSS_UPDATED);
                 }
                 v->arch.hvm_vmx.vm86_segment_mask &= ~(1u << seg);
@@ -1626,7 +1626,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
         {
             if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
                 v->arch.hvm_vcpu.hw_cr[3] =
-                    v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT];
+                    v->domain->arch.hvm.params[HVM_PARAM_IDENT_PT];
             vmx_load_pdptrs(v);
         }
 
@@ -2997,7 +2997,7 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
     mfn = page_to_mfn(pg);
     clear_domain_page(mfn);
     share_xen_page_with_guest(pg, d, SHARE_rw);
-    d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
+    d->arch.hvm.vmx.apic_access_mfn = mfn_x(mfn);
     set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
                        PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
 
@@ -3006,7 +3006,7 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
 
 static void vmx_free_vlapic_mapping(struct domain *d)
 {
-    unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
+    unsigned long mfn = d->arch.hvm.vmx.apic_access_mfn;
 
     if ( mfn != 0 )
         free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
@@ -3016,13 +3016,13 @@ static void vmx_install_vlapic_mapping(struct vcpu *v)
 {
     paddr_t virt_page_ma, apic_page_ma;
 
-    if ( v->domain->arch.hvm_domain.vmx.apic_access_mfn == 0 )
+    if ( v->domain->arch.hvm.vmx.apic_access_mfn == 0 )
         return;
 
     ASSERT(cpu_has_vmx_virtualize_apic_accesses);
 
     virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
-    apic_page_ma = v->domain->arch.hvm_domain.vmx.apic_access_mfn;
+    apic_page_ma = v->domain->arch.hvm.vmx.apic_access_mfn;
     apic_page_ma <<= PAGE_SHIFT;
 
     vmx_vmcs_enter(v);
@@ -4330,8 +4330,8 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
      if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
          return false;
 
-    if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
-        curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
+    if ( curr->domain->arch.hvm.pi_ops.do_resume )
+        curr->domain->arch.hvm.pi_ops.do_resume(curr);
 
     if ( !cpu_has_vmx_vpid )
         goto out;
diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index cfc9544..e0500c5 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -35,7 +35,7 @@
 #include <asm/hvm/support.h>
 
 #define vpic_domain(v) (container_of((v), struct domain, \
-                        arch.hvm_domain.vpic[!vpic->is_master]))
+                        arch.hvm.vpic[!vpic->is_master]))
 #define __vpic_lock(v) &container_of((v), struct hvm_domain, \
                                         vpic[!(v)->is_master])->irq_lock
 #define vpic_lock(v)   spin_lock(__vpic_lock(v))
@@ -112,7 +112,7 @@ static void vpic_update_int_output(struct hvm_hw_vpic *vpic)
         if ( vpic->is_master )
         {
             /* Master INT line is connected in Virtual Wire Mode. */
-            struct vcpu *v = vpic_domain(vpic)->arch.hvm_domain.i8259_target;
+            struct vcpu *v = vpic_domain(vpic)->arch.hvm.i8259_target;
             if ( v != NULL )
             {
                 TRACE_1D(TRC_HVM_EMUL_PIC_KICK, irq);
@@ -334,7 +334,7 @@ static int vpic_intercept_pic_io(
         return X86EMUL_OKAY;
     }
 
-    vpic = &current->domain->arch.hvm_domain.vpic[port >> 7];
+    vpic = &current->domain->arch.hvm.vpic[port >> 7];
 
     if ( dir == IOREQ_WRITE )
         vpic_ioport_write(vpic, port, (uint8_t)*val);
@@ -352,7 +352,7 @@ static int vpic_intercept_elcr_io(
 
     BUG_ON(bytes != 1);
 
-    vpic = &current->domain->arch.hvm_domain.vpic[port & 1];
+    vpic = &current->domain->arch.hvm.vpic[port & 1];
 
     if ( dir == IOREQ_WRITE )
     {
@@ -382,7 +382,7 @@ static int vpic_save(struct domain *d, hvm_domain_context_t *h)
     /* Save the state of both PICs */
     for ( i = 0; i < 2 ; i++ )
     {
-        s = &d->arch.hvm_domain.vpic[i];
+        s = &d->arch.hvm.vpic[i];
         if ( hvm_save_entry(PIC, i, h, s) )
             return 1;
     }
@@ -401,7 +401,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
     /* Which PIC is this? */
     if ( inst > 1 )
         return -EINVAL;
-    s = &d->arch.hvm_domain.vpic[inst];
+    s = &d->arch.hvm.vpic[inst];
 
     /* Load the state */
     if ( hvm_load_entry(PIC, h, s) != 0 )
@@ -420,7 +420,7 @@ void vpic_reset(struct domain *d)
         return;
 
     /* Master PIC. */
-    vpic = &d->arch.hvm_domain.vpic[0];
+    vpic = &d->arch.hvm.vpic[0];
     memset(vpic, 0, sizeof(*vpic));
     vpic->is_master = 1;
     vpic->elcr      = 1 << 2;
@@ -446,7 +446,7 @@ void vpic_init(struct domain *d)
 
 void vpic_irq_positive_edge(struct domain *d, int irq)
 {
-    struct hvm_hw_vpic *vpic = &d->arch.hvm_domain.vpic[irq >> 3];
+    struct hvm_hw_vpic *vpic = &d->arch.hvm.vpic[irq >> 3];
     uint8_t mask = 1 << (irq & 7);
 
     ASSERT(has_vpic(d));
@@ -464,7 +464,7 @@ void vpic_irq_positive_edge(struct domain *d, int irq)
 
 void vpic_irq_negative_edge(struct domain *d, int irq)
 {
-    struct hvm_hw_vpic *vpic = &d->arch.hvm_domain.vpic[irq >> 3];
+    struct hvm_hw_vpic *vpic = &d->arch.hvm.vpic[irq >> 3];
     uint8_t mask = 1 << (irq & 7);
 
     ASSERT(has_vpic(d));
@@ -483,7 +483,7 @@ void vpic_irq_negative_edge(struct domain *d, int irq)
 int vpic_ack_pending_irq(struct vcpu *v)
 {
     int irq, vector;
-    struct hvm_hw_vpic *vpic = &v->domain->arch.hvm_domain.vpic[0];
+    struct hvm_hw_vpic *vpic = &v->domain->arch.hvm.vpic[0];
 
     ASSERT(has_vpic(v->domain));
 
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 6ac4c91..7b57017 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -24,11 +24,11 @@
 #include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
-    ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
+    ((d)->arch.hvm.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
 
 void hvm_init_guest_time(struct domain *d)
 {
-    struct pl_time *pl = d->arch.hvm_domain.pl_time;
+    struct pl_time *pl = d->arch.hvm.pl_time;
 
     spin_lock_init(&pl->pl_time_lock);
     pl->stime_offset = -(u64)get_s_time();
@@ -37,7 +37,7 @@ void hvm_init_guest_time(struct domain *d)
 
 uint64_t hvm_get_guest_time_fixed(const struct vcpu *v, uint64_t at_tsc)
 {
-    struct pl_time *pl = v->domain->arch.hvm_domain.pl_time;
+    struct pl_time *pl = v->domain->arch.hvm.pl_time;
     u64 now;
 
     /* Called from device models shared with PV guests. Be careful. */
@@ -88,7 +88,7 @@ static int pt_irq_vector(struct periodic_time *pt, enum hvm_intsrc src)
     gsi = hvm_isa_irq_to_gsi(isa_irq);
 
     if ( src == hvm_intsrc_pic )
-        return (v->domain->arch.hvm_domain.vpic[isa_irq >> 3].irq_base
+        return (v->domain->arch.hvm.vpic[isa_irq >> 3].irq_base
                 + (isa_irq & 7));
 
     ASSERT(src == hvm_intsrc_lapic);
@@ -121,7 +121,7 @@ static int pt_irq_masked(struct periodic_time *pt)
 
     case PTSRC_isa:
     {
-        uint8_t pic_imr = v->domain->arch.hvm_domain.vpic[pt->irq >> 3].imr;
+        uint8_t pic_imr = v->domain->arch.hvm.vpic[pt->irq >> 3].imr;
 
         /* Check if the interrupt is unmasked in the PIC. */
         if ( !(pic_imr & (1 << (pt->irq & 7))) && vlapic_accept_pic_intr(v) )
@@ -363,7 +363,7 @@ int pt_update_irq(struct vcpu *v)
     case PTSRC_isa:
         hvm_isa_irq_deassert(v->domain, irq);
         if ( platform_legacy_irq(irq) && vlapic_accept_pic_intr(v) &&
-             v->domain->arch.hvm_domain.vpic[irq >> 3].int_output )
+             v->domain->arch.hvm.vpic[irq >> 3].int_output )
             hvm_isa_irq_assert(v->domain, irq, NULL);
         else
         {
@@ -514,7 +514,7 @@ void create_periodic_time(
 
     if ( !pt->one_shot )
     {
-        if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
+        if ( v->domain->arch.hvm.params[HVM_PARAM_VPT_ALIGN] )
         {
             pt->scheduled = align_timer(pt->scheduled, pt->period);
         }
@@ -605,7 +605,7 @@ void pt_adjust_global_vcpu_target(struct vcpu *v)
     pt_adjust_vcpu(&vpit->pt0, v);
     spin_unlock(&vpit->lock);
 
-    pl_time = v->domain->arch.hvm_domain.pl_time;
+    pl_time = v->domain->arch.hvm.pl_time;
 
     spin_lock(&pl_time->vrtc.lock);
     pt_adjust_vcpu(&pl_time->vrtc.pt, v);
@@ -640,9 +640,9 @@ void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt)
     if ( d )
     {
         pt_resume(&d->arch.vpit.pt0);
-        pt_resume(&d->arch.hvm_domain.pl_time->vrtc.pt);
+        pt_resume(&d->arch.hvm.pl_time->vrtc.pt);
         for ( i = 0; i < HPET_TIMER_NUM; i++ )
-            pt_resume(&d->arch.hvm_domain.pl_time->vhpet.pt[i]);
+            pt_resume(&d->arch.hvm.pl_time->vhpet.pt[i]);
     }
 
     if ( vlapic_pt )
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 6865c79..ec93ab6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1293,7 +1293,7 @@ int init_domain_irq_mapping(struct domain *d)
 
     radix_tree_init(&d->arch.irq_pirq);
     if ( is_hvm_domain(d) )
-        radix_tree_init(&d->arch.hvm_domain.emuirq_pirq);
+        radix_tree_init(&d->arch.hvm.emuirq_pirq);
 
     for ( i = 1; platform_legacy_irq(i); ++i )
     {
@@ -1319,7 +1319,7 @@ void cleanup_domain_irq_mapping(struct domain *d)
 {
     radix_tree_destroy(&d->arch.irq_pirq, NULL);
     if ( is_hvm_domain(d) )
-        radix_tree_destroy(&d->arch.hvm_domain.emuirq_pirq, NULL);
+        radix_tree_destroy(&d->arch.hvm.emuirq_pirq, NULL);
 }
 
 struct pirq *alloc_pirq_struct(struct domain *d)
@@ -2490,7 +2490,7 @@ int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq)
     /* do not store emuirq mappings for pt devices */
     if ( emuirq != IRQ_PT )
     {
-        int err = radix_tree_insert(&d->arch.hvm_domain.emuirq_pirq, emuirq,
+        int err = radix_tree_insert(&d->arch.hvm.emuirq_pirq, emuirq,
                                     radix_tree_int_to_ptr(pirq));
 
         switch ( err )
@@ -2500,7 +2500,7 @@ int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq)
         case -EEXIST:
             radix_tree_replace_slot(
                 radix_tree_lookup_slot(
-                    &d->arch.hvm_domain.emuirq_pirq, emuirq),
+                    &d->arch.hvm.emuirq_pirq, emuirq),
                 radix_tree_int_to_ptr(pirq));
             break;
         default:
@@ -2542,7 +2542,7 @@ int unmap_domain_pirq_emuirq(struct domain *d, int pirq)
         pirq_cleanup_check(info, d);
     }
     if ( emuirq != IRQ_PT )
-        radix_tree_delete(&d->arch.hvm_domain.emuirq_pirq, emuirq);
+        radix_tree_delete(&d->arch.hvm.emuirq_pirq, emuirq);
 
  done:
     return ret;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 812a840..fe10e9d 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -83,7 +83,7 @@ int hap_track_dirty_vram(struct domain *d,
 
         paging_lock(d);
 
-        dirty_vram = d->arch.hvm_domain.dirty_vram;
+        dirty_vram = d->arch.hvm.dirty_vram;
         if ( !dirty_vram )
         {
             rc = -ENOMEM;
@@ -93,7 +93,7 @@ int hap_track_dirty_vram(struct domain *d,
                 goto out;
             }
 
-            d->arch.hvm_domain.dirty_vram = dirty_vram;
+            d->arch.hvm.dirty_vram = dirty_vram;
         }
 
         if ( begin_pfn != dirty_vram->begin_pfn ||
@@ -145,7 +145,7 @@ int hap_track_dirty_vram(struct domain *d,
     {
         paging_lock(d);
 
-        dirty_vram = d->arch.hvm_domain.dirty_vram;
+        dirty_vram = d->arch.hvm.dirty_vram;
         if ( dirty_vram )
         {
             /*
@@ -155,7 +155,7 @@ int hap_track_dirty_vram(struct domain *d,
             begin_pfn = dirty_vram->begin_pfn;
             nr = dirty_vram->end_pfn - dirty_vram->begin_pfn;
             xfree(dirty_vram);
-            d->arch.hvm_domain.dirty_vram = NULL;
+            d->arch.hvm.dirty_vram = NULL;
         }
 
         paging_unlock(d);
@@ -579,8 +579,7 @@ void hap_teardown(struct domain *d, bool *preempted)
 
     d->arch.paging.mode &= ~PG_log_dirty;
 
-    xfree(d->arch.hvm_domain.dirty_vram);
-    d->arch.hvm_domain.dirty_vram = NULL;
+    XFREE(d->arch.hvm.dirty_vram);
 
 out:
     paging_unlock(d);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index fad8a9d..fe1df83 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -150,7 +150,7 @@ static inline shr_handle_t get_next_handle(void)
 }
 
 #define mem_sharing_enabled(d) \
-    (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
+    (is_hvm_domain(d) && (d)->arch.hvm.mem_sharing_enabled)
 
 static atomic_t nr_saved_mfns   = ATOMIC_INIT(0); 
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
@@ -1333,7 +1333,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
     /* Only HAP is supported */
     rc = -ENODEV;
-    if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
+    if ( !hap_enabled(d) || !d->arch.hvm.mem_sharing_enabled )
         goto out;
 
     switch ( mso.op )
@@ -1613,7 +1613,7 @@ int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
             if ( unlikely(need_iommu(d) && mec->u.enable) )
                 rc = -EXDEV;
             else
-                d->arch.hvm_domain.mem_sharing_enabled = mec->u.enable;
+                d->arch.hvm.mem_sharing_enabled = mec->u.enable;
         }
         break;
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 1930a1d..afdc27d 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2881,11 +2881,11 @@ void shadow_teardown(struct domain *d, bool *preempted)
      * calls now that we've torn down the bitmap */
     d->arch.paging.mode &= ~PG_log_dirty;
 
-    if (d->arch.hvm_domain.dirty_vram) {
-        xfree(d->arch.hvm_domain.dirty_vram->sl1ma);
-        xfree(d->arch.hvm_domain.dirty_vram->dirty_bitmap);
-        xfree(d->arch.hvm_domain.dirty_vram);
-        d->arch.hvm_domain.dirty_vram = NULL;
+    if ( d->arch.hvm.dirty_vram )
+    {
+        xfree(d->arch.hvm.dirty_vram->sl1ma);
+        xfree(d->arch.hvm.dirty_vram->dirty_bitmap);
+        XFREE(d->arch.hvm.dirty_vram);
     }
 
 out:
@@ -3261,7 +3261,7 @@ int shadow_track_dirty_vram(struct domain *d,
     p2m_lock(p2m_get_hostp2m(d));
     paging_lock(d);
 
-    dirty_vram = d->arch.hvm_domain.dirty_vram;
+    dirty_vram = d->arch.hvm.dirty_vram;
 
     if ( dirty_vram && (!nr ||
              ( begin_pfn != dirty_vram->begin_pfn
@@ -3272,7 +3272,7 @@ int shadow_track_dirty_vram(struct domain *d,
         xfree(dirty_vram->sl1ma);
         xfree(dirty_vram->dirty_bitmap);
         xfree(dirty_vram);
-        dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
+        dirty_vram = d->arch.hvm.dirty_vram = NULL;
     }
 
     if ( !nr )
@@ -3299,7 +3299,7 @@ int shadow_track_dirty_vram(struct domain *d,
             goto out;
         dirty_vram->begin_pfn = begin_pfn;
         dirty_vram->end_pfn = end_pfn;
-        d->arch.hvm_domain.dirty_vram = dirty_vram;
+        d->arch.hvm.dirty_vram = dirty_vram;
 
         if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr)) == NULL )
             goto out_dirty_vram;
@@ -3418,7 +3418,7 @@ int shadow_track_dirty_vram(struct domain *d,
     xfree(dirty_vram->sl1ma);
 out_dirty_vram:
     xfree(dirty_vram);
-    dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
+    dirty_vram = d->arch.hvm.dirty_vram = NULL;
 
 out:
     paging_unlock(d);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 9e43533..62819eb 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -527,7 +527,7 @@ _sh_propagate(struct vcpu *v,
     guest_l1e_t guest_entry = { guest_intpte };
     shadow_l1e_t *sp = shadow_entry_ptr;
     struct domain *d = v->domain;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
     gfn_t target_gfn = guest_l1e_get_gfn(guest_entry);
     u32 pass_thru_flags;
     u32 gflags, sflags;
@@ -619,7 +619,7 @@ _sh_propagate(struct vcpu *v,
         if ( !mmio_mfn &&
              (type = hvm_get_mem_pinned_cacheattr(d, target_gfn, 0)) >= 0 )
             sflags |= pat_type_2_pte_flags(type);
-        else if ( d->arch.hvm_domain.is_in_uc_mode )
+        else if ( d->arch.hvm.is_in_uc_mode )
             sflags |= pat_type_2_pte_flags(PAT_TYPE_UNCACHABLE);
         else
             if ( iomem_access_permitted(d, mfn_x(target_mfn), mfn_x(target_mfn)) )
@@ -1110,7 +1110,7 @@ static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
     mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
     int flags = shadow_l1e_get_flags(new_sl1e);
     unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
 
     if ( !dirty_vram         /* tracking disabled? */
          || !(flags & _PAGE_RW) /* read-only mapping? */
@@ -1141,7 +1141,7 @@ static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
     mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
     int flags = shadow_l1e_get_flags(old_sl1e);
     unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
 
     if ( !dirty_vram         /* tracking disabled? */
          || !(flags & _PAGE_RW) /* read-only mapping? */
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 4524823..3a3c158 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -98,7 +98,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
     {
         /*
          * Only makes sense for vector-based callback, else HVM-IRQ logic
-         * calls back into itself and deadlocks on hvm_domain.irq_lock.
+         * calls back into itself and deadlocks on hvm.irq_lock.
          */
         if ( !is_hvm_pv_evtchn_domain(d) )
             return -EINVAL;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index dd11815..ed133fc 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1870,7 +1870,7 @@ static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
 
     ASSERT(e <= INT_MAX);
     for ( i = s; i <= e; i++ )
-        __clear_bit(i, d->arch.hvm_domain.io_bitmap);
+        __clear_bit(i, d->arch.hvm.io_bitmap);
 
     return 0;
 }
@@ -1881,7 +1881,7 @@ void __hwdom_init setup_io_bitmap(struct domain *d)
 
     if ( is_hvm_domain(d) )
     {
-        bitmap_fill(d->arch.hvm_domain.io_bitmap, 0x10000);
+        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
         rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
                                     io_bitmap_cb, d);
         BUG_ON(rc);
@@ -1892,9 +1892,9 @@ void __hwdom_init setup_io_bitmap(struct domain *d)
          * Access to 1 byte RTC ports also needs to be trapped in order
          * to keep consistency with PV.
          */
-        __set_bit(0xcf8, d->arch.hvm_domain.io_bitmap);
-        __set_bit(RTC_PORT(0), d->arch.hvm_domain.io_bitmap);
-        __set_bit(RTC_PORT(1), d->arch.hvm_domain.io_bitmap);
+        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
+        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
+        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
     }
 }
 
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 69e9aaf..5922fbf 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1039,7 +1039,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
 
         if ( is_hvm_domain(d) )
         {
-            struct pl_time *pl = v->domain->arch.hvm_domain.pl_time;
+            struct pl_time *pl = v->domain->arch.hvm.pl_time;
 
             stime += pl->stime_offset + v->arch.hvm_vcpu.stime_offset;
             if ( stime >= 0 )
@@ -2183,7 +2183,7 @@ void tsc_set_info(struct domain *d,
     if ( is_hvm_domain(d) )
     {
         if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
-            d->arch.hvm_domain.tsc_scaling_ratio =
+            d->arch.hvm.tsc_scaling_ratio =
                 hvm_get_tsc_scaling_ratio(d->arch.tsc_khz);
 
         hvm_set_rdtsc_exiting(d, d->arch.vtsc);
@@ -2197,10 +2197,10 @@ void tsc_set_info(struct domain *d,
              * call set_tsc_offset() later from hvm_vcpu_reset_state() and they
              * will sync their TSC to BSP's sync_tsc.
              */
-            d->arch.hvm_domain.sync_tsc = rdtsc();
+            d->arch.hvm.sync_tsc = rdtsc();
             hvm_set_tsc_offset(d->vcpu[0],
                                d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset,
-                               d->arch.hvm_domain.sync_tsc);
+                               d->arch.hvm.sync_tsc);
         }
     }
 
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 144ab81..4793aac 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -48,7 +48,7 @@ static int vm_event_enable(
     xen_event_channel_notification_t notification_fn)
 {
     int rc;
-    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
+    unsigned long ring_gfn = d->arch.hvm.params[param];
 
     if ( !*ved )
         *ved = xzalloc(struct vm_event_domain);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index d1adffa..2644048 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1417,7 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     /* Prevent device assign if mem paging or mem sharing have been 
      * enabled for this domain */
     if ( unlikely(!need_iommu(d) &&
-            (d->arch.hvm_domain.mem_sharing_enabled ||
+            (d->arch.hvm.mem_sharing_enabled ||
              vm_event_check_ring(d->vm_event_paging) ||
              p2m_get_hostp2m(d)->global_logdirty)) )
         return -EXDEV;
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index bcf6325..1960dae 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -152,7 +152,7 @@ static struct vpci_msix *msix_find(const struct domain *d, unsigned long addr)
 {
     struct vpci_msix *msix;
 
-    list_for_each_entry ( msix, &d->arch.hvm_domain.msix_tables, next )
+    list_for_each_entry ( msix, &d->arch.hvm.msix_tables, next )
     {
         const struct vpci_bar *bars = msix->pdev->vpci->header.bars;
         unsigned int i;
@@ -438,10 +438,10 @@ static int init_msix(struct pci_dev *pdev)
     if ( rc )
         return rc;
 
-    if ( list_empty(&d->arch.hvm_domain.msix_tables) )
+    if ( list_empty(&d->arch.hvm.msix_tables) )
         register_mmio_handler(d, &vpci_msix_table_ops);
 
-    list_add(&pdev->vpci->msix->next, &d->arch.hvm_domain.msix_tables);
+    list_add(&pdev->vpci->msix->next, &d->arch.hvm.msix_tables);
 
     return 0;
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 280c395..d682307 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -51,7 +51,7 @@ struct arch_domain
     /* Virtual MMU */
     struct p2m_domain p2m;
 
-    struct hvm_domain hvm_domain;
+    struct hvm_domain hvm;
 
     struct vmmio vmmio;
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index fdd6856..4722c2d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -17,7 +17,7 @@
 #define is_pv_32bit_vcpu(v)    (is_pv_32bit_domain((v)->domain))
 
 #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
-        (d)->arch.hvm_domain.irq->callback_via_type == HVMIRQ_callback_vector)
+        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
 
@@ -306,7 +306,7 @@ struct arch_domain
 
     union {
         struct pv_domain pv;
-        struct hvm_domain hvm_domain;
+        struct hvm_domain hvm;
     };
 
     struct paging_domain paging;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 5885950..acf8e03 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -202,7 +202,7 @@ struct hvm_domain {
     };
 };
 
-#define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
+#define hap_enabled(d)  ((d)->arch.hvm.hap_enabled)
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5ea507b..ac0f035 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -266,7 +266,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, u64 at_tsc);
     (1ULL << hvm_funcs.tsc_scaling.ratio_frac_bits)
 
 #define hvm_tsc_scaling_ratio(d) \
-    ((d)->arch.hvm_domain.tsc_scaling_ratio)
+    ((d)->arch.hvm.tsc_scaling_ratio)
 
 u64 hvm_scale_tsc(const struct domain *d, u64 tsc);
 u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz);
@@ -391,10 +391,10 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
 bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
 
 #define has_hvm_params(d) \
-    ((d)->arch.hvm_domain.params != NULL)
+    ((d)->arch.hvm.params != NULL)
 
 #define viridian_feature_mask(d) \
-    (has_hvm_params(d) ? (d)->arch.hvm_domain.params[HVM_PARAM_VIRIDIAN] : 0)
+    (has_hvm_params(d) ? (d)->arch.hvm.params[HVM_PARAM_VIRIDIAN] : 0)
 
 #define is_viridian_domain(d) \
     (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
@@ -670,9 +670,8 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
 #define arch_vcpu_block(v) ({                                   \
     struct vcpu *v_ = (v);                                      \
     struct domain *d_ = v_->domain;                             \
-    if ( is_hvm_domain(d_) &&                               \
-         (d_->arch.hvm_domain.pi_ops.vcpu_block) )          \
-        d_->arch.hvm_domain.pi_ops.vcpu_block(v_);          \
+    if ( is_hvm_domain(d_) && d_->arch.hvm.pi_ops.vcpu_block )  \
+        d_->arch.hvm.pi_ops.vcpu_block(v_);                     \
 })
 
 #endif /* __ASM_X86_HVM_HVM_H__ */
diff --git a/xen/include/asm-x86/hvm/irq.h b/xen/include/asm-x86/hvm/irq.h
index 8a43cb9..2e6fa70 100644
--- a/xen/include/asm-x86/hvm/irq.h
+++ b/xen/include/asm-x86/hvm/irq.h
@@ -97,7 +97,7 @@ struct hvm_irq {
     (((((dev)<<2) + ((dev)>>3) + (intx)) & 31) + 16)
 #define hvm_pci_intx_link(dev, intx) \
     (((dev) + (intx)) & 3)
-#define hvm_domain_irq(d) ((d)->arch.hvm_domain.irq)
+#define hvm_domain_irq(d) ((d)->arch.hvm.irq)
 #define hvm_irq_size(cnt) offsetof(struct hvm_irq, gsi_assert_count[cnt])
 
 #define hvm_isa_irq_to_gsi(isa_irq) ((isa_irq) ? : 2)
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 3c810b7..4a041e2 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -35,8 +35,8 @@ enum nestedhvm_vmexits {
 /* Nested HVM on/off per domain */
 static inline bool nestedhvm_enabled(const struct domain *d)
 {
-    return is_hvm_domain(d) && d->arch.hvm_domain.params &&
-        d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM];
+    return is_hvm_domain(d) && d->arch.hvm.params &&
+        d->arch.hvm.params[HVM_PARAM_NESTEDHVM];
 }
 
 /* Nested VCPU */
diff --git a/xen/include/asm-x86/hvm/vioapic.h b/xen/include/asm-x86/hvm/vioapic.h
index 138d2c0..a72cd17 100644
--- a/xen/include/asm-x86/hvm/vioapic.h
+++ b/xen/include/asm-x86/hvm/vioapic.h
@@ -58,7 +58,7 @@ struct hvm_vioapic {
 };
 
 #define hvm_vioapic_size(cnt) offsetof(struct hvm_vioapic, redirtbl[cnt])
-#define domain_vioapic(d, i) ((d)->arch.hvm_domain.vioapic[i])
+#define domain_vioapic(d, i) ((d)->arch.hvm.vioapic[i])
 #define vioapic_domain(v) ((v)->domain)
 
 int vioapic_init(struct domain *d);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 61c26ed..99169dd 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -150,8 +150,8 @@ void pt_migrate(struct vcpu *v);
 
 void pt_adjust_global_vcpu_target(struct vcpu *v);
 #define pt_global_vcpu_target(d) \
-    (is_hvm_domain(d) && (d)->arch.hvm_domain.i8259_target ? \
-     (d)->arch.hvm_domain.i8259_target : \
+    (is_hvm_domain(d) && (d)->arch.hvm.i8259_target ? \
+     (d)->arch.hvm.i8259_target : \
      (d)->vcpu ? (d)->vcpu[0] : NULL)
 
 void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt);
diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
index d9dad39..054c3ab 100644
--- a/xen/include/asm-x86/irq.h
+++ b/xen/include/asm-x86/irq.h
@@ -188,8 +188,7 @@ void cleanup_domain_irq_mapping(struct domain *);
 #define domain_pirq_to_emuirq(d, pirq) pirq_field(d, pirq,              \
     arch.hvm.emuirq, IRQ_UNBOUND)
 #define domain_emuirq_to_pirq(d, emuirq) ({                             \
-    void *__ret = radix_tree_lookup(&(d)->arch.hvm_domain.emuirq_pirq,  \
-                                    emuirq);                            \
+    void *__ret = radix_tree_lookup(&(d)->arch.hvm.emuirq_pirq, emuirq);\
     __ret ? radix_tree_ptr_to_int(__ret) : IRQ_UNBOUND;                 \
 })
 #define IRQ_UNBOUND -1
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-28 18:59   ` Razvan Cojocaru
                     ` (4 more replies)
  2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
                   ` (2 subsequent siblings)
  6 siblings, 5 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Andrew Cooper, Tim Deegan,
	Julien Grall, Paul Durrant, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monné

The trailing _vcpu suffix is redundant, yet adds to code volume.  Drop it.

Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
where applicable.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tim Deegan <tim@xen.org>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Brian Woods <brian.woods@amd.com>
---
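Note for anyone less familiar with the XFREE() wrapper mentioned above: as the
hap_teardown() and shadow_teardown() hunks in patch 3 show, XFREE(p) amounts to
"xfree(p); p = NULL;" folded into a single statement.  A minimal sketch of the
idiom, using plain free() as a stand-in for xfree() and a made-up structure
name so it builds outside the Xen tree:

    #include <stdlib.h>

    /*
     * Free-and-NULL wrapper in the style of Xen's XFREE().  free() stands in
     * for xfree() so the sketch compiles as a standalone program.
     */
    #define XFREE(p) do {   \
        free(p);            \
        (p) = NULL;         \
    } while ( 0 )

    /* Hypothetical stand-in for the real dirty_vram structure. */
    struct sh_dirty_vram_stub {
        unsigned long begin_pfn, end_pfn;
    };

    int main(void)
    {
        struct sh_dirty_vram_stub *dirty_vram =
            calloc(1, sizeof(*dirty_vram));

        /* One statement replaces the old xfree(); ... = NULL; pair. */
        XFREE(dirty_vram);

        /* The pointer is left NULL afterwards, never dangling. */
        return dirty_vram == NULL ? 0 : 1;
    }

The wrapper exists so callers cannot forget to clear the pointer, which is
what makes folding the two-statement xfree()/NULL pairs into XFREE() a purely
mechanical, no-functional-change conversion.
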
 xen/arch/x86/cpu/vpmu.c                 |   2 +-
 xen/arch/x86/cpuid.c                    |   4 +-
 xen/arch/x86/domain.c                   |   4 +-
 xen/arch/x86/domctl.c                   |   8 +-
 xen/arch/x86/hvm/asid.c                 |   2 +-
 xen/arch/x86/hvm/dm.c                   |  12 +--
 xen/arch/x86/hvm/domain.c               |  38 ++++----
 xen/arch/x86/hvm/emulate.c              |  28 +++---
 xen/arch/x86/hvm/hpet.c                 |   2 +-
 xen/arch/x86/hvm/hvm.c                  | 162 ++++++++++++++++----------------
 xen/arch/x86/hvm/io.c                   |  12 +--
 xen/arch/x86/hvm/ioreq.c                |   4 +-
 xen/arch/x86/hvm/irq.c                  |   6 +-
 xen/arch/x86/hvm/mtrr.c                 |  22 ++---
 xen/arch/x86/hvm/pmtimer.c              |   2 +-
 xen/arch/x86/hvm/svm/asid.c             |   2 +-
 xen/arch/x86/hvm/svm/nestedsvm.c        |  44 ++++-----
 xen/arch/x86/hvm/svm/svm.c              |  67 +++++++------
 xen/arch/x86/hvm/svm/vmcb.c             |   6 +-
 xen/arch/x86/hvm/viridian.c             |  64 ++++++-------
 xen/arch/x86/hvm/vmsi.c                 |  30 +++---
 xen/arch/x86/hvm/vmx/intr.c             |   2 +-
 xen/arch/x86/hvm/vmx/realmode.c         |  10 +-
 xen/arch/x86/hvm/vmx/vmcs.c             |   8 +-
 xen/arch/x86/hvm/vmx/vmx.c              | 130 ++++++++++++-------------
 xen/arch/x86/hvm/vmx/vvmx.c             |  32 +++----
 xen/arch/x86/hvm/vpt.c                  |  74 +++++++--------
 xen/arch/x86/mm/hap/guest_walk.c        |   2 +-
 xen/arch/x86/mm/hap/hap.c               |   4 +-
 xen/arch/x86/mm/shadow/multi.c          |  12 +--
 xen/arch/x86/time.c                     |   6 +-
 xen/arch/x86/x86_64/asm-offsets.c       |   8 +-
 xen/arch/x86/x86_64/traps.c             |   8 +-
 xen/include/asm-x86/domain.h            |   6 +-
 xen/include/asm-x86/guest_pt.h          |   2 +-
 xen/include/asm-x86/hvm/hvm.h           |  20 ++--
 xen/include/asm-x86/hvm/nestedhvm.h     |   2 +-
 xen/include/asm-x86/hvm/svm/nestedsvm.h |   2 +-
 xen/include/asm-x86/hvm/vcpu.h          |   4 +-
 xen/include/asm-x86/hvm/vlapic.h        |   6 +-
 xen/include/asm-x86/hvm/vmx/vmx.h       |   2 +-
 41 files changed, 428 insertions(+), 433 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index fa6762f..8a4f753 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -304,7 +304,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
                 hvm_get_segment_register(sampled, x86_seg_ss, &seg);
                 r->ss = seg.sel;
                 r->cpl = seg.dpl;
-                if ( !(sampled->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) )
+                if ( !(sampled->arch.hvm.guest_cr[0] & X86_CR0_PE) )
                     *flags |= PMU_SAMPLE_REAL;
             }
         }
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 24366ea..59d3298 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -829,7 +829,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_hvm_domain(d) )
         {
             /* OSXSAVE clear in policy.  Fast-forward CR4 back in. */
-            if ( v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_OSXSAVE )
+            if ( v->arch.hvm.guest_cr[4] & X86_CR4_OSXSAVE )
                 res->c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
         }
         else /* PV domain */
@@ -960,7 +960,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
             /* OSPKE clear in policy.  Fast-forward CR4 back in. */
             if ( (is_pv_domain(d)
                   ? v->arch.pv.ctrlreg[4]
-                  : v->arch.hvm_vcpu.guest_cr[4]) & X86_CR4_PKE )
+                  : v->arch.hvm.guest_cr[4]) & X86_CR4_PKE )
                 res->c |= cpufeat_mask(X86_FEATURE_OSPKE);
             break;
         }
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3dcd7f9..ccdfec2 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1619,7 +1619,7 @@ static void __context_switch(void)
                 BUG();
 
             if ( cpu_has_xsaves && is_hvm_vcpu(n) )
-                set_msr_xss(n->arch.hvm_vcpu.msr_xss);
+                set_msr_xss(n->arch.hvm.msr_xss);
         }
         vcpu_restore_fpu_nonlazy(n, false);
         nd->arch.ctxt_switch->to(n);
@@ -1692,7 +1692,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
         np2m_schedule(NP2M_SCHEDLE_OUT);
     }
 
-    if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+    if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm.tm_list) )
         pt_save_timer(prev);
 
     local_irq_disable();
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index f306614..797841e 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1585,10 +1585,10 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
     {
         struct segment_register sreg;
 
-        c.nat->ctrlreg[0] = v->arch.hvm_vcpu.guest_cr[0];
-        c.nat->ctrlreg[2] = v->arch.hvm_vcpu.guest_cr[2];
-        c.nat->ctrlreg[3] = v->arch.hvm_vcpu.guest_cr[3];
-        c.nat->ctrlreg[4] = v->arch.hvm_vcpu.guest_cr[4];
+        c.nat->ctrlreg[0] = v->arch.hvm.guest_cr[0];
+        c.nat->ctrlreg[2] = v->arch.hvm.guest_cr[2];
+        c.nat->ctrlreg[3] = v->arch.hvm.guest_cr[3];
+        c.nat->ctrlreg[4] = v->arch.hvm.guest_cr[4];
         hvm_get_segment_register(v, x86_seg_cs, &sreg);
         c.nat->user_regs.cs = sreg.sel;
         hvm_get_segment_register(v, x86_seg_ss, &sreg);
diff --git a/xen/arch/x86/hvm/asid.c b/xen/arch/x86/hvm/asid.c
index beca8ec..9d3c671 100644
--- a/xen/arch/x86/hvm/asid.c
+++ b/xen/arch/x86/hvm/asid.c
@@ -87,7 +87,7 @@ void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
 
 void hvm_asid_flush_vcpu(struct vcpu *v)
 {
-    hvm_asid_flush_vcpu_asid(&v->arch.hvm_vcpu.n1asid);
+    hvm_asid_flush_vcpu_asid(&v->arch.hvm.n1asid);
     hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
 }
 
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 6755f3f..87d97d0 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -317,17 +317,17 @@ static int inject_event(struct domain *d,
     if ( data->vcpuid >= d->max_vcpus || !(v = d->vcpu[data->vcpuid]) )
         return -EINVAL;
 
-    if ( cmpxchg(&v->arch.hvm_vcpu.inject_event.vector,
+    if ( cmpxchg(&v->arch.hvm.inject_event.vector,
                  HVM_EVENT_VECTOR_UNSET, HVM_EVENT_VECTOR_UPDATING) !=
          HVM_EVENT_VECTOR_UNSET )
         return -EBUSY;
 
-    v->arch.hvm_vcpu.inject_event.type = data->type;
-    v->arch.hvm_vcpu.inject_event.insn_len = data->insn_len;
-    v->arch.hvm_vcpu.inject_event.error_code = data->error_code;
-    v->arch.hvm_vcpu.inject_event.cr2 = data->cr2;
+    v->arch.hvm.inject_event.type = data->type;
+    v->arch.hvm.inject_event.insn_len = data->insn_len;
+    v->arch.hvm.inject_event.error_code = data->error_code;
+    v->arch.hvm.inject_event.cr2 = data->cr2;
     smp_wmb();
-    v->arch.hvm_vcpu.inject_event.vector = data->vector;
+    v->arch.hvm.inject_event.vector = data->vector;
 
     return 0;
 }
diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
index 8a2c83e..5d5a746 100644
--- a/xen/arch/x86/hvm/domain.c
+++ b/xen/arch/x86/hvm/domain.c
@@ -204,10 +204,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
         uregs->rip    = regs->eip;
         uregs->rflags = regs->eflags;
 
-        v->arch.hvm_vcpu.guest_cr[0] = regs->cr0;
-        v->arch.hvm_vcpu.guest_cr[3] = regs->cr3;
-        v->arch.hvm_vcpu.guest_cr[4] = regs->cr4;
-        v->arch.hvm_vcpu.guest_efer  = regs->efer;
+        v->arch.hvm.guest_cr[0] = regs->cr0;
+        v->arch.hvm.guest_cr[3] = regs->cr3;
+        v->arch.hvm.guest_cr[4] = regs->cr4;
+        v->arch.hvm.guest_efer  = regs->efer;
     }
     break;
 
@@ -255,10 +255,10 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
         uregs->rip    = regs->rip;
         uregs->rflags = regs->rflags;
 
-        v->arch.hvm_vcpu.guest_cr[0] = regs->cr0;
-        v->arch.hvm_vcpu.guest_cr[3] = regs->cr3;
-        v->arch.hvm_vcpu.guest_cr[4] = regs->cr4;
-        v->arch.hvm_vcpu.guest_efer  = regs->efer;
+        v->arch.hvm.guest_cr[0] = regs->cr0;
+        v->arch.hvm.guest_cr[3] = regs->cr3;
+        v->arch.hvm.guest_cr[4] = regs->cr4;
+        v->arch.hvm.guest_efer  = regs->efer;
 
 #define SEG(l, a) (struct segment_register){ 0, { a }, l, 0 }
         cs = SEG(~0u, 0xa9b); /* 64bit code segment. */
@@ -270,21 +270,21 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
 
     }
 
-    if ( v->arch.hvm_vcpu.guest_efer & EFER_LME )
-        v->arch.hvm_vcpu.guest_efer |= EFER_LMA;
+    if ( v->arch.hvm.guest_efer & EFER_LME )
+        v->arch.hvm.guest_efer |= EFER_LMA;
 
-    if ( v->arch.hvm_vcpu.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d, false) )
+    if ( v->arch.hvm.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d, false) )
     {
         gprintk(XENLOG_ERR, "Bad CR4 value: %#016lx\n",
-                v->arch.hvm_vcpu.guest_cr[4]);
+                v->arch.hvm.guest_cr[4]);
         return -EINVAL;
     }
 
-    errstr = hvm_efer_valid(v, v->arch.hvm_vcpu.guest_efer, -1);
+    errstr = hvm_efer_valid(v, v->arch.hvm.guest_efer, -1);
     if ( errstr )
     {
         gprintk(XENLOG_ERR, "Bad EFER value (%#016lx): %s\n",
-               v->arch.hvm_vcpu.guest_efer, errstr);
+               v->arch.hvm.guest_efer, errstr);
         return -EINVAL;
     }
 
@@ -297,12 +297,12 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
         struct page_info *page = get_page_from_gfn(v->domain,
-                                 v->arch.hvm_vcpu.guest_cr[3] >> PAGE_SHIFT,
+                                 v->arch.hvm.guest_cr[3] >> PAGE_SHIFT,
                                  NULL, P2M_ALLOC);
         if ( !page )
         {
             gprintk(XENLOG_ERR, "Invalid CR3: %#lx\n",
-                    v->arch.hvm_vcpu.guest_cr[3]);
+                    v->arch.hvm.guest_cr[3]);
             return -EINVAL;
         }
 
@@ -316,9 +316,9 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
     hvm_set_segment_register(v, x86_seg_tr, &tr);
 
     /* Sync AP's TSC with BSP's. */
-    v->arch.hvm_vcpu.cache_tsc_offset =
-        d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
+    v->arch.hvm.cache_tsc_offset =
+        d->vcpu[0]->arch.hvm.cache_tsc_offset;
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset,
                        d->arch.hvm.sync_tsc);
 
     paging_update_paging_modes(v);
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 20d1d5b..dbf8b81 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -123,7 +123,7 @@ static int hvmemul_do_io(
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     ioreq_t p = {
         .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
         .addr = addr,
@@ -437,7 +437,7 @@ static int hvmemul_do_io_addr(
     ASSERT(rc != X86EMUL_UNIMPLEMENTED);
 
     if ( rc == X86EMUL_OKAY )
-        v->arch.hvm_vcpu.hvm_io.mmio_retry = (count < *reps);
+        v->arch.hvm.hvm_io.mmio_retry = (count < *reps);
 
     *reps = count;
 
@@ -706,7 +706,7 @@ static int hvmemul_linear_to_phys(
     *reps = min_t(unsigned long, *reps, 4096);
 
     /* With no paging it's easy: linear == physical. */
-    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PG) )
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PG) )
     {
         *paddr = addr;
         return X86EMUL_OKAY;
@@ -975,7 +975,7 @@ static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
 {
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long offset = gla & ~PAGE_MASK;
     struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(vio, gla, dir);
     unsigned int chunk, buffer_offset = 0;
@@ -1053,7 +1053,7 @@ static int __hvmemul_read(
     pagefault_info_t pfinfo;
     unsigned long addr, reps = 1;
     uint32_t pfec = PFEC_page_present;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
 
     if ( is_x86_system_segment(seg) )
@@ -1174,7 +1174,7 @@ static int hvmemul_write(
     struct vcpu *curr = current;
     unsigned long addr, reps = 1;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
     void *mapping;
 
@@ -1218,7 +1218,7 @@ static int hvmemul_rmw(
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     unsigned long addr, reps = 1;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     int rc;
     void *mapping;
 
@@ -1375,7 +1375,7 @@ static int hvmemul_cmpxchg(
     struct vcpu *curr = current;
     unsigned long addr, reps = 1;
     uint32_t pfec = PFEC_page_present | PFEC_write_access;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
     void *mapping = NULL;
 
@@ -1593,7 +1593,7 @@ static int hvmemul_rep_movs(
 {
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long saddr, daddr, bytes;
     paddr_t sgpa, dgpa;
     uint32_t pfec = PFEC_page_present;
@@ -1748,7 +1748,7 @@ static int hvmemul_rep_stos(
 {
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     unsigned long addr, bytes;
     paddr_t gpa;
     p2m_type_t p2mt;
@@ -1931,7 +1931,7 @@ static int hvmemul_read_cr(
     case 2:
     case 3:
     case 4:
-        *val = current->arch.hvm_vcpu.guest_cr[reg];
+        *val = current->arch.hvm.guest_cr[reg];
         HVMTRACE_LONG_2D(CR_READ, reg, TRC_PAR_LONG(*val));
         return X86EMUL_OKAY;
     default:
@@ -1956,7 +1956,7 @@ static int hvmemul_write_cr(
         break;
 
     case 2:
-        current->arch.hvm_vcpu.guest_cr[2] = val;
+        current->arch.hvm.guest_cr[2] = val;
         rc = X86EMUL_OKAY;
         break;
 
@@ -2280,7 +2280,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs;
     struct vcpu *curr = current;
     uint32_t new_intr_shadow;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
 
     hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
@@ -2410,7 +2410,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
         break;
     case EMUL_KIND_SET_CONTEXT_INSN: {
         struct vcpu *curr = current;
-        struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+        struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
 
         BUILD_BUG_ON(sizeof(vio->mmio_insn) !=
                      sizeof(curr->arch.vm_event->emul.insn.data));
diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 8090699..cbd1efb 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -581,7 +581,7 @@ static int hpet_save(struct domain *d, hvm_domain_context_t *h)
         return 0;
 
     write_lock(&hp->lock);
-    guest_time = (v->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(v)) /
+    guest_time = (v->arch.hvm.guest_time ?: hvm_get_guest_time(v)) /
                  STIME_PER_HPET_TICK;
 
     /* Write the proper value into the main counter */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f895339..ac067a8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -278,7 +278,7 @@ void hvm_set_rdtsc_exiting(struct domain *d, bool_t enable)
 void hvm_get_guest_pat(struct vcpu *v, u64 *guest_pat)
 {
     if ( !hvm_funcs.get_guest_pat(v, guest_pat) )
-        *guest_pat = v->arch.hvm_vcpu.pat_cr;
+        *guest_pat = v->arch.hvm.pat_cr;
 }
 
 int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
@@ -303,7 +303,7 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
         }
 
     if ( !hvm_funcs.set_guest_pat(v, guest_pat) )
-        v->arch.hvm_vcpu.pat_cr = guest_pat;
+        v->arch.hvm.pat_cr = guest_pat;
 
     return 1;
 }
@@ -415,28 +415,26 @@ static void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
     }
 
     delta_tsc = guest_tsc - tsc;
-    v->arch.hvm_vcpu.cache_tsc_offset = delta_tsc;
+    v->arch.hvm.cache_tsc_offset = delta_tsc;
 
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, at_tsc);
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset, at_tsc);
 }
 
 #define hvm_set_guest_tsc(v, t) hvm_set_guest_tsc_fixed(v, t, 0)
 
 static void hvm_set_guest_tsc_msr(struct vcpu *v, u64 guest_tsc)
 {
-    uint64_t tsc_offset = v->arch.hvm_vcpu.cache_tsc_offset;
+    uint64_t tsc_offset = v->arch.hvm.cache_tsc_offset;
 
     hvm_set_guest_tsc(v, guest_tsc);
-    v->arch.hvm_vcpu.msr_tsc_adjust += v->arch.hvm_vcpu.cache_tsc_offset
-                          - tsc_offset;
+    v->arch.hvm.msr_tsc_adjust += v->arch.hvm.cache_tsc_offset - tsc_offset;
 }
 
 static void hvm_set_guest_tsc_adjust(struct vcpu *v, u64 tsc_adjust)
 {
-    v->arch.hvm_vcpu.cache_tsc_offset += tsc_adjust
-                            - v->arch.hvm_vcpu.msr_tsc_adjust;
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
-    v->arch.hvm_vcpu.msr_tsc_adjust = tsc_adjust;
+    v->arch.hvm.cache_tsc_offset += tsc_adjust - v->arch.hvm.msr_tsc_adjust;
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset, 0);
+    v->arch.hvm.msr_tsc_adjust = tsc_adjust;
 }
 
 u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
@@ -455,7 +453,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
             tsc = hvm_scale_tsc(v->domain, tsc);
     }
 
-    return tsc + v->arch.hvm_vcpu.cache_tsc_offset;
+    return tsc + v->arch.hvm.cache_tsc_offset;
 }
 
 void hvm_migrate_timers(struct vcpu *v)
@@ -501,7 +499,7 @@ void hvm_migrate_pirqs(struct vcpu *v)
 
 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info)
 {
-    info->cr2 = v->arch.hvm_vcpu.guest_cr[2];
+    info->cr2 = v->arch.hvm.guest_cr[2];
     return hvm_funcs.get_pending_event(v, info);
 }
 
@@ -518,14 +516,14 @@ void hvm_do_resume(struct vcpu *v)
         hvm_vm_event_do_resume(v);
 
     /* Inject pending hw/sw event */
-    if ( v->arch.hvm_vcpu.inject_event.vector >= 0 )
+    if ( v->arch.hvm.inject_event.vector >= 0 )
     {
         smp_rmb();
 
         if ( !hvm_event_pending(v) )
-            hvm_inject_event(&v->arch.hvm_vcpu.inject_event);
+            hvm_inject_event(&v->arch.hvm.inject_event);
 
-        v->arch.hvm_vcpu.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
+        v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
     }
 
     if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
@@ -741,7 +739,7 @@ static int hvm_save_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu ( d, v )
     {
-        ctxt.tsc_adjust = v->arch.hvm_vcpu.msr_tsc_adjust;
+        ctxt.tsc_adjust = v->arch.hvm.msr_tsc_adjust;
         err = hvm_save_entry(TSC_ADJUST, v->vcpu_id, h, &ctxt);
         if ( err )
             break;
@@ -766,7 +764,7 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
     if ( hvm_load_entry(TSC_ADJUST, h, &ctxt) != 0 )
         return -EINVAL;
 
-    v->arch.hvm_vcpu.msr_tsc_adjust = ctxt.tsc_adjust;
+    v->arch.hvm.msr_tsc_adjust = ctxt.tsc_adjust;
     return 0;
 }
 
@@ -1044,7 +1042,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     if ( hvm_funcs.tsc_scaling.setup )
         hvm_funcs.tsc_scaling.setup(v);
 
-    v->arch.hvm_vcpu.msr_tsc_aux = ctxt.msr_tsc_aux;
+    v->arch.hvm.msr_tsc_aux = ctxt.msr_tsc_aux;
 
     hvm_set_guest_tsc_fixed(v, ctxt.tsc, d->arch.hvm.sync_tsc);
 
@@ -1501,8 +1499,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
     hvm_asid_flush_vcpu(v);
 
-    spin_lock_init(&v->arch.hvm_vcpu.tm_lock);
-    INIT_LIST_HEAD(&v->arch.hvm_vcpu.tm_list);
+    spin_lock_init(&v->arch.hvm.tm_lock);
+    INIT_LIST_HEAD(&v->arch.hvm.tm_list);
 
     rc = hvm_vcpu_cacheattr_init(v); /* teardown: vcpu_cacheattr_destroy */
     if ( rc != 0 )
@@ -1517,11 +1515,11 @@ int hvm_vcpu_initialise(struct vcpu *v)
         goto fail3;
 
     softirq_tasklet_init(
-        &v->arch.hvm_vcpu.assert_evtchn_irq_tasklet,
+        &v->arch.hvm.assert_evtchn_irq_tasklet,
         (void(*)(unsigned long))hvm_assert_evtchn_irq,
         (unsigned long)v);
 
-    v->arch.hvm_vcpu.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
+    v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
 
     rc = setup_compat_arg_xlat(v); /* teardown: free_compat_arg_xlat() */
     if ( rc != 0 )
@@ -1574,7 +1572,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
 
     free_compat_arg_xlat(v);
 
-    tasklet_kill(&v->arch.hvm_vcpu.assert_evtchn_irq_tasklet);
+    tasklet_kill(&v->arch.hvm.assert_evtchn_irq_tasklet);
     hvm_funcs.vcpu_destroy(v);
 
     vlapic_destroy(v);
@@ -1967,11 +1965,11 @@ int hvm_set_efer(uint64_t value)
     {
         printk(XENLOG_G_WARNING
                "%pv: Invalid EFER update: %#"PRIx64" -> %#"PRIx64" - %s\n",
-               v, v->arch.hvm_vcpu.guest_efer, value, errstr);
+               v, v->arch.hvm.guest_efer, value, errstr);
         return X86EMUL_EXCEPTION;
     }
 
-    if ( ((value ^ v->arch.hvm_vcpu.guest_efer) & EFER_LME) &&
+    if ( ((value ^ v->arch.hvm.guest_efer) & EFER_LME) &&
          hvm_paging_enabled(v) )
     {
         gdprintk(XENLOG_WARNING,
@@ -1979,7 +1977,7 @@ int hvm_set_efer(uint64_t value)
         return X86EMUL_EXCEPTION;
     }
 
-    if ( (value & EFER_LME) && !(v->arch.hvm_vcpu.guest_efer & EFER_LME) )
+    if ( (value & EFER_LME) && !(v->arch.hvm.guest_efer & EFER_LME) )
     {
         struct segment_register cs;
 
@@ -2005,15 +2003,15 @@ int hvm_set_efer(uint64_t value)
 
     if ( nestedhvm_enabled(v->domain) && cpu_has_svm &&
        ((value & EFER_SVME) == 0 ) &&
-       ((value ^ v->arch.hvm_vcpu.guest_efer) & EFER_SVME) )
+       ((value ^ v->arch.hvm.guest_efer) & EFER_SVME) )
     {
         /* Cleared EFER.SVME: Flush all nestedp2m tables */
         p2m_flush_nestedp2m(v->domain);
         nestedhvm_vcpu_reset(v);
     }
 
-    value |= v->arch.hvm_vcpu.guest_efer & EFER_LMA;
-    v->arch.hvm_vcpu.guest_efer = value;
+    value |= v->arch.hvm.guest_efer & EFER_LMA;
+    v->arch.hvm.guest_efer = value;
     hvm_update_guest_efer(v);
 
     return X86EMUL_OKAY;
@@ -2029,7 +2027,7 @@ static bool_t domain_exit_uc_mode(struct vcpu *v)
     {
         if ( (vs == v) || !vs->is_initialised )
             continue;
-        if ( (vs->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) ||
+        if ( (vs->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) ||
              mtrr_pat_not_equal(vs, v) )
             return 0;
     }
@@ -2097,7 +2095,7 @@ int hvm_mov_from_cr(unsigned int cr, unsigned int gpr)
     case 2:
     case 3:
     case 4:
-        val = curr->arch.hvm_vcpu.guest_cr[cr];
+        val = curr->arch.hvm.guest_cr[cr];
         break;
     case 8:
         val = (vlapic_get_reg(vcpu_vlapic(curr), APIC_TASKPRI) & 0xf0) >> 4;
@@ -2124,7 +2122,7 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
     {
         /* Entering no fill cache mode. */
         spin_lock(&v->domain->arch.hvm.uc_lock);
-        v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
+        v->arch.hvm.cache_mode = NO_FILL_CACHE_MODE;
 
         if ( !v->domain->arch.hvm.is_in_uc_mode )
         {
@@ -2139,11 +2137,11 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
         spin_unlock(&v->domain->arch.hvm.uc_lock);
     }
     else if ( !(value & X86_CR0_CD) &&
-              (v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
+              (v->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) )
     {
         /* Exit from no fill cache mode. */
         spin_lock(&v->domain->arch.hvm.uc_lock);
-        v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
+        v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
 
         if ( domain_exit_uc_mode(v) )
             hvm_set_uc_mode(v, 0);
@@ -2154,7 +2152,7 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
 
 static void hvm_update_cr(struct vcpu *v, unsigned int cr, unsigned long value)
 {
-    v->arch.hvm_vcpu.guest_cr[cr] = value;
+    v->arch.hvm.guest_cr[cr] = value;
     nestedhvm_set_cr(v, cr, value);
     hvm_update_guest_cr(v, cr);
 }
@@ -2163,7 +2161,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    unsigned long gfn, old_value = v->arch.hvm_vcpu.guest_cr[0];
+    unsigned long gfn, old_value = v->arch.hvm.guest_cr[0];
     struct page_info *page;
 
     HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR0 value = %lx", value);
@@ -2202,28 +2200,28 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
 
     if ( (value & X86_CR0_PG) && !(old_value & X86_CR0_PG) )
     {
-        if ( v->arch.hvm_vcpu.guest_efer & EFER_LME )
+        if ( v->arch.hvm.guest_efer & EFER_LME )
         {
-            if ( !(v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PAE) &&
+            if ( !(v->arch.hvm.guest_cr[4] & X86_CR4_PAE) &&
                  !nestedhvm_vmswitch_in_progress(v) )
             {
                 HVM_DBG_LOG(DBG_LEVEL_1, "Enable paging before PAE enable");
                 return X86EMUL_EXCEPTION;
             }
             HVM_DBG_LOG(DBG_LEVEL_1, "Enabling long mode");
-            v->arch.hvm_vcpu.guest_efer |= EFER_LMA;
+            v->arch.hvm.guest_efer |= EFER_LMA;
             hvm_update_guest_efer(v);
         }
 
         if ( !paging_mode_hap(d) )
         {
             /* The guest CR3 must be pointing to the guest physical. */
-            gfn = v->arch.hvm_vcpu.guest_cr[3]>>PAGE_SHIFT;
+            gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
             page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
             if ( !page )
             {
                 gdprintk(XENLOG_ERR, "Invalid CR3 value = %lx\n",
-                         v->arch.hvm_vcpu.guest_cr[3]);
+                         v->arch.hvm.guest_cr[3]);
                 domain_crash(d);
                 return X86EMUL_UNHANDLEABLE;
             }
@@ -2232,7 +2230,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
             v->arch.guest_table = pagetable_from_page(page);
 
             HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx, mfn = %lx",
-                        v->arch.hvm_vcpu.guest_cr[3], mfn_x(page_to_mfn(page)));
+                        v->arch.hvm.guest_cr[3], mfn_x(page_to_mfn(page)));
         }
     }
     else if ( !(value & X86_CR0_PG) && (old_value & X86_CR0_PG) )
@@ -2247,7 +2245,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
         /* When CR0.PG is cleared, LMA is cleared immediately. */
         if ( hvm_long_mode_active(v) )
         {
-            v->arch.hvm_vcpu.guest_efer &= ~EFER_LMA;
+            v->arch.hvm.guest_efer &= ~EFER_LMA;
             hvm_update_guest_efer(v);
         }
 
@@ -2281,7 +2279,7 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
 {
     struct vcpu *v = current;
     struct page_info *page;
-    unsigned long old = v->arch.hvm_vcpu.guest_cr[3];
+    unsigned long old = v->arch.hvm.guest_cr[3];
     bool noflush = false;
 
     if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
@@ -2306,7 +2304,7 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
     }
 
     if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) &&
-         (value != v->arch.hvm_vcpu.guest_cr[3]) )
+         (value != v->arch.hvm.guest_cr[3]) )
     {
         /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
         HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR3 value = %lx", value);
@@ -2321,7 +2319,7 @@ int hvm_set_cr3(unsigned long value, bool_t may_defer)
         HVM_DBG_LOG(DBG_LEVEL_VMMU, "Update CR3 value = %lx", value);
     }
 
-    v->arch.hvm_vcpu.guest_cr[3] = value;
+    v->arch.hvm.guest_cr[3] = value;
     paging_update_cr3(v, noflush);
     return X86EMUL_OKAY;
 
@@ -2354,11 +2352,11 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
         }
     }
 
-    old_cr = v->arch.hvm_vcpu.guest_cr[4];
+    old_cr = v->arch.hvm.guest_cr[4];
 
     if ( (value & X86_CR4_PCIDE) && !(old_cr & X86_CR4_PCIDE) &&
          (!hvm_long_mode_active(v) ||
-          (v->arch.hvm_vcpu.guest_cr[3] & 0xfff)) )
+          (v->arch.hvm.guest_cr[3] & 0xfff)) )
     {
         HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to change CR4.PCIDE from "
                     "0 to 1 while either EFER.LMA=0 or CR3[11:0]!=000H");
@@ -2441,7 +2439,7 @@ bool_t hvm_virtual_to_linear_addr(
      */
     ASSERT(seg < x86_seg_none);
 
-    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) ||
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PE) ||
          (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
     {
         /*
@@ -3050,7 +3048,7 @@ void hvm_task_switch(
     tr.type = 0xb; /* busy 32-bit tss */
     hvm_set_segment_register(v, x86_seg_tr, &tr);
 
-    v->arch.hvm_vcpu.guest_cr[0] |= X86_CR0_TS;
+    v->arch.hvm.guest_cr[0] |= X86_CR0_TS;
     hvm_update_guest_cr(v, 0);
 
     if ( (taskswitch_reason == TSW_iret ||
@@ -3392,8 +3390,8 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     uint64_t *var_range_base, *fixed_range_base;
     int ret;
 
-    var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
-    fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
+    var_range_base = (uint64_t *)v->arch.hvm.mtrr.var_ranges;
+    fixed_range_base = (uint64_t *)v->arch.hvm.mtrr.fixed_ranges;
 
     if ( (ret = guest_rdmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
         return ret;
@@ -3405,7 +3403,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         unsigned int index;
 
     case MSR_EFER:
-        *msr_content = v->arch.hvm_vcpu.guest_efer;
+        *msr_content = v->arch.hvm.guest_efer;
         break;
 
     case MSR_IA32_TSC:
@@ -3413,7 +3411,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     case MSR_IA32_TSC_ADJUST:
-        *msr_content = v->arch.hvm_vcpu.msr_tsc_adjust;
+        *msr_content = v->arch.hvm.msr_tsc_adjust;
         break;
 
     case MSR_TSC_AUX:
@@ -3440,14 +3438,14 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_MTRRcap:
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
-        *msr_content = v->arch.hvm_vcpu.mtrr.mtrr_cap;
+        *msr_content = v->arch.hvm.mtrr.mtrr_cap;
         break;
     case MSR_MTRRdefType:
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
-        *msr_content = v->arch.hvm_vcpu.mtrr.def_type |
-                       MASK_INSR(v->arch.hvm_vcpu.mtrr.enabled, MTRRdefType_E) |
-                       MASK_INSR(v->arch.hvm_vcpu.mtrr.fixed_enabled,
+        *msr_content = v->arch.hvm.mtrr.def_type |
+                       MASK_INSR(v->arch.hvm.mtrr.enabled, MTRRdefType_E) |
+                       MASK_INSR(v->arch.hvm.mtrr.fixed_enabled,
                                  MTRRdefType_FE);
         break;
     case MSR_MTRRfix64K_00000:
@@ -3473,7 +3471,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gp_fault;
         index = msr - MSR_IA32_MTRR_PHYSBASE(0);
         if ( (index / 2) >=
-             MASK_EXTR(v->arch.hvm_vcpu.mtrr.mtrr_cap, MTRRcap_VCNT) )
+             MASK_EXTR(v->arch.hvm.mtrr.mtrr_cap, MTRRcap_VCNT) )
             goto gp_fault;
         *msr_content = var_range_base[index];
         break;
@@ -3481,7 +3479,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_XSS:
         if ( !d->arch.cpuid->xstate.xsaves )
             goto gp_fault;
-        *msr_content = v->arch.hvm_vcpu.msr_xss;
+        *msr_content = v->arch.hvm.msr_xss;
         break;
 
     case MSR_IA32_BNDCFGS:
@@ -3573,7 +3571,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     case MSR_TSC_AUX:
-        v->arch.hvm_vcpu.msr_tsc_aux = (uint32_t)msr_content;
+        v->arch.hvm.msr_tsc_aux = (uint32_t)msr_content;
         if ( cpu_has_rdtscp
              && (v->domain->arch.tsc_mode != TSC_MODE_PVRDTSCP) )
             wrmsr_tsc_aux(msr_content);
@@ -3604,14 +3602,14 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
     case MSR_MTRRdefType:
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
-        if ( !mtrr_def_type_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
+        if ( !mtrr_def_type_msr_set(v->domain, &v->arch.hvm.mtrr,
                                     msr_content) )
            goto gp_fault;
         break;
     case MSR_MTRRfix64K_00000:
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
-        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr, 0,
+        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm.mtrr, 0,
                                      msr_content) )
             goto gp_fault;
         break;
@@ -3620,7 +3618,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix16K_80000 + 1;
-        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
+        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm.mtrr,
                                      index, msr_content) )
             goto gp_fault;
         break;
@@ -3628,7 +3626,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         if ( !d->arch.cpuid->basic.mtrr )
             goto gp_fault;
         index = msr - MSR_MTRRfix4K_C0000 + 3;
-        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
+        if ( !mtrr_fix_range_msr_set(v->domain, &v->arch.hvm.mtrr,
                                      index, msr_content) )
             goto gp_fault;
         break;
@@ -3637,8 +3635,8 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         index = msr - MSR_IA32_MTRR_PHYSBASE(0);
         if ( ((index / 2) >=
-              MASK_EXTR(v->arch.hvm_vcpu.mtrr.mtrr_cap, MTRRcap_VCNT)) ||
-             !mtrr_var_range_msr_set(v->domain, &v->arch.hvm_vcpu.mtrr,
+              MASK_EXTR(v->arch.hvm.mtrr.mtrr_cap, MTRRcap_VCNT)) ||
+             !mtrr_var_range_msr_set(v->domain, &v->arch.hvm.mtrr,
                                      msr, msr_content) )
             goto gp_fault;
         break;
@@ -3647,7 +3645,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         /* No XSS features currently supported for guests. */
         if ( !d->arch.cpuid->xstate.xsaves || msr_content != 0 )
             goto gp_fault;
-        v->arch.hvm_vcpu.msr_xss = msr_content;
+        v->arch.hvm.msr_xss = msr_content;
         break;
 
     case MSR_IA32_BNDCFGS:
@@ -3872,7 +3870,7 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
 
     if ( !paging_mode_hap(d) )
     {
-        if ( v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PG )
+        if ( v->arch.hvm.guest_cr[0] & X86_CR0_PG )
             put_page(pagetable_get_page(v->arch.guest_table));
         v->arch.guest_table = pagetable_null();
     }
@@ -3888,19 +3886,19 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
     v->arch.user_regs.rip = ip;
     memset(&v->arch.debugreg, 0, sizeof(v->arch.debugreg));
 
-    v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_ET;
+    v->arch.hvm.guest_cr[0] = X86_CR0_ET;
     hvm_update_guest_cr(v, 0);
 
-    v->arch.hvm_vcpu.guest_cr[2] = 0;
+    v->arch.hvm.guest_cr[2] = 0;
     hvm_update_guest_cr(v, 2);
 
-    v->arch.hvm_vcpu.guest_cr[3] = 0;
+    v->arch.hvm.guest_cr[3] = 0;
     hvm_update_guest_cr(v, 3);
 
-    v->arch.hvm_vcpu.guest_cr[4] = 0;
+    v->arch.hvm.guest_cr[4] = 0;
     hvm_update_guest_cr(v, 4);
 
-    v->arch.hvm_vcpu.guest_efer = 0;
+    v->arch.hvm.guest_efer = 0;
     hvm_update_guest_efer(v);
 
     reg.sel = cs;
@@ -3932,12 +3930,12 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
         hvm_funcs.tsc_scaling.setup(v);
 
     /* Sync AP's TSC with BSP's. */
-    v->arch.hvm_vcpu.cache_tsc_offset =
-        v->domain->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
+    v->arch.hvm.cache_tsc_offset =
+        v->domain->vcpu[0]->arch.hvm.cache_tsc_offset;
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset,
                        d->arch.hvm.sync_tsc);
 
-    v->arch.hvm_vcpu.msr_tsc_adjust = 0;
+    v->arch.hvm.msr_tsc_adjust = 0;
 
     paging_update_paging_modes(v);
 
@@ -4059,7 +4057,7 @@ static int hvmop_set_evtchn_upcall_vector(
 
     printk(XENLOG_G_INFO "%pv: upcall vector %02x\n", v, op.vector);
 
-    v->arch.hvm_vcpu.evtchn_upcall_vector = op.vector;
+    v->arch.hvm.evtchn_upcall_vector = op.vector;
     hvm_assert_evtchn_irq(v);
     return 0;
 }
@@ -4976,7 +4974,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
                 break;
             rc = 0;
             vcpu_pause(v);
-            v->arch.hvm_vcpu.single_step =
+            v->arch.hvm.single_step =
                 (op == XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON);
             vcpu_unpause(v); /* guest will latch new state */
             break;
@@ -4995,7 +4993,7 @@ void hvm_toggle_singlestep(struct vcpu *v)
     if ( !hvm_is_singlestep_supported() )
         return;
 
-    v->arch.hvm_vcpu.single_step = !v->arch.hvm_vcpu.single_step;
+    v->arch.hvm.single_step = !v->arch.hvm.single_step;
 }
 
 void hvm_domain_soft_reset(struct domain *d)
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index f1ea7d7..47d6c85 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -82,7 +82,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
 
     hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs());
@@ -118,7 +118,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec access)
 {
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
 
     vio->mmio_access = access.gla_valid &&
                        access.kind == npfec_kind_with_gla
@@ -131,7 +131,7 @@ bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
 bool handle_pio(uint16_t port, unsigned int size, int dir)
 {
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     unsigned long data;
     int rc;
 
@@ -180,7 +180,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler,
 {
     struct vcpu *curr = current;
     const struct hvm_domain *hvm = &curr->domain->arch.hvm;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     struct g2m_ioport *g2m_ioport;
     unsigned int start, end;
 
@@ -201,7 +201,7 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler,
 static int g2m_portio_read(const struct hvm_io_handler *handler,
                            uint64_t addr, uint32_t size, uint64_t *data)
 {
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     const struct g2m_ioport *g2m_ioport = vio->g2m_ioport;
     unsigned int mport = (addr - g2m_ioport->gport) + g2m_ioport->mport;
 
@@ -226,7 +226,7 @@ static int g2m_portio_read(const struct hvm_io_handler *handler,
 static int g2m_portio_write(const struct hvm_io_handler *handler,
                             uint64_t addr, uint32_t size, uint64_t data)
 {
-    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &current->arch.hvm.hvm_io;
     const struct g2m_ioport *g2m_ioport = vio->g2m_ioport;
     unsigned int mport = (addr - g2m_ioport->gport) + g2m_ioport->mport;
 
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 8d60b02..138ed69 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -110,7 +110,7 @@ bool hvm_io_pending(struct vcpu *v)
 static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
 {
     struct vcpu *v = sv->vcpu;
-    ioreq_t *ioreq = &v->arch.hvm_vcpu.hvm_io.io_req;
+    ioreq_t *ioreq = &v->arch.hvm.hvm_io.io_req;
 
     if ( hvm_ioreq_needs_completion(ioreq) )
     {
@@ -184,7 +184,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
     struct hvm_ioreq_server *s;
     enum hvm_io_completion io_completion;
     unsigned int id;
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 1ded2c2..fe2c2fa 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -306,13 +306,13 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
 {
     if ( unlikely(in_irq() || !local_irq_is_enabled()) )
     {
-        tasklet_schedule(&v->arch.hvm_vcpu.assert_evtchn_irq_tasklet);
+        tasklet_schedule(&v->arch.hvm.assert_evtchn_irq_tasklet);
         return;
     }
 
-    if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
+    if ( v->arch.hvm.evtchn_upcall_vector != 0 )
     {
-        uint8_t vector = v->arch.hvm_vcpu.evtchn_upcall_vector;
+        uint8_t vector = v->arch.hvm.evtchn_upcall_vector;
 
         vlapic_set_irq(vcpu_vlapic(v), vector, 0);
     }
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 8a772bc..de1b5c4 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -122,7 +122,7 @@ uint8_t pat_type_2_pte_flags(uint8_t pat_type)
 
 int hvm_vcpu_cacheattr_init(struct vcpu *v)
 {
-    struct mtrr_state *m = &v->arch.hvm_vcpu.mtrr;
+    struct mtrr_state *m = &v->arch.hvm.mtrr;
     unsigned int num_var_ranges =
         is_hardware_domain(v->domain) ? MASK_EXTR(mtrr_state.mtrr_cap,
                                                   MTRRcap_VCNT)
@@ -144,7 +144,7 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v)
 
     m->mtrr_cap = (1u << 10) | (1u << 8) | num_var_ranges;
 
-    v->arch.hvm_vcpu.pat_cr =
+    v->arch.hvm.pat_cr =
         ((uint64_t)PAT_TYPE_WRBACK) |               /* PAT0: WB */
         ((uint64_t)PAT_TYPE_WRTHROUGH << 8) |       /* PAT1: WT */
         ((uint64_t)PAT_TYPE_UC_MINUS << 16) |       /* PAT2: UC- */
@@ -185,7 +185,7 @@ int hvm_vcpu_cacheattr_init(struct vcpu *v)
 
 void hvm_vcpu_cacheattr_destroy(struct vcpu *v)
 {
-    xfree(v->arch.hvm_vcpu.mtrr.var_ranges);
+    xfree(v->arch.hvm.mtrr.var_ranges);
 }
 
 /*
@@ -343,8 +343,8 @@ uint32_t get_pat_flags(struct vcpu *v,
     uint8_t guest_eff_mm_type;
     uint8_t shadow_mtrr_type;
     uint8_t pat_entry_value;
-    uint64_t pat = v->arch.hvm_vcpu.pat_cr;
-    struct mtrr_state *g = &v->arch.hvm_vcpu.mtrr;
+    uint64_t pat = v->arch.hvm.pat_cr;
+    struct mtrr_state *g = &v->arch.hvm.mtrr;
 
     /* 1. Get the effective memory type of guest physical address,
      * with the pair of guest MTRR and PAT
@@ -494,8 +494,8 @@ bool_t mtrr_var_range_msr_set(
 
 bool mtrr_pat_not_equal(const struct vcpu *vd, const struct vcpu *vs)
 {
-    const struct mtrr_state *md = &vd->arch.hvm_vcpu.mtrr;
-    const struct mtrr_state *ms = &vs->arch.hvm_vcpu.mtrr;
+    const struct mtrr_state *md = &vd->arch.hvm.mtrr;
+    const struct mtrr_state *ms = &vs->arch.hvm.mtrr;
 
     if ( md->enabled != ms->enabled )
         return true;
@@ -525,7 +525,7 @@ bool mtrr_pat_not_equal(const struct vcpu *vd, const struct vcpu *vs)
     }
 
     /* Test PAT. */
-    return vd->arch.hvm_vcpu.pat_cr != vs->arch.hvm_vcpu.pat_cr;
+    return vd->arch.hvm.pat_cr != vs->arch.hvm.pat_cr;
 }
 
 struct hvm_mem_pinned_cacheattr_range {
@@ -697,7 +697,7 @@ static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
     /* save mtrr&pat */
     for_each_vcpu(d, v)
     {
-        const struct mtrr_state *mtrr_state = &v->arch.hvm_vcpu.mtrr;
+        const struct mtrr_state *mtrr_state = &v->arch.hvm.mtrr;
         struct hvm_hw_mtrr hw_mtrr = {
             .msr_mtrr_def_type = mtrr_state->def_type |
                                  MASK_INSR(mtrr_state->fixed_enabled,
@@ -764,7 +764,7 @@ static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
         return -EINVAL;
     }
 
-    mtrr_state = &v->arch.hvm_vcpu.mtrr;
+    mtrr_state = &v->arch.hvm.mtrr;
 
     hvm_set_guest_pat(v, hw_mtrr.msr_pat_cr);
 
@@ -858,7 +858,7 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -1;
 
     gmtrr_mtype = is_hvm_domain(d) && v ?
-                  get_mtrr_type(&v->arch.hvm_vcpu.mtrr,
+                  get_mtrr_type(&v->arch.hvm.mtrr,
                                 gfn << PAGE_SHIFT, order) :
                   MTRR_TYPE_WRBACK;
     hmtrr_mtype = get_mtrr_type(&mtrr_state, mfn_x(mfn) << PAGE_SHIFT, order);
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 75b9408..8542a32 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -265,7 +265,7 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
      * Update the counter to the guest's current time.  Make sure it only
      * goes forwards.
      */
-    x = (((s->vcpu->arch.hvm_vcpu.guest_time ?: hvm_get_guest_time(s->vcpu)) -
+    x = (((s->vcpu->arch.hvm.guest_time ?: hvm_get_guest_time(s->vcpu)) -
           s->last_gtime) * s->scale) >> 32;
     if ( x < 1UL<<31 )
         acpi->tmr_val += x;
diff --git a/xen/arch/x86/hvm/svm/asid.c b/xen/arch/x86/hvm/svm/asid.c
index 4861daa..7cc54da 100644
--- a/xen/arch/x86/hvm/svm/asid.c
+++ b/xen/arch/x86/hvm/svm/asid.c
@@ -43,7 +43,7 @@ void svm_asid_handle_vmrun(void)
     struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
     struct hvm_vcpu_asid *p_asid =
         nestedhvm_vcpu_in_guestmode(curr)
-        ? &vcpu_nestedhvm(curr).nv_n2asid : &curr->arch.hvm_vcpu.n1asid;
+        ? &vcpu_nestedhvm(curr).nv_n2asid : &curr->arch.hvm.n1asid;
     bool_t need_flush = hvm_asid_handle_vmenter(p_asid);
 
     /* ASID 0 indicates that ASIDs are disabled. */
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 6457532..a1f840e 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -243,10 +243,10 @@ static int nsvm_vcpu_hostsave(struct vcpu *v, unsigned int inst_len)
 
     /* Save shadowed values. This ensures that the l1 guest
      * cannot override them to break out. */
-    n1vmcb->_efer = v->arch.hvm_vcpu.guest_efer;
-    n1vmcb->_cr0 = v->arch.hvm_vcpu.guest_cr[0];
-    n1vmcb->_cr2 = v->arch.hvm_vcpu.guest_cr[2];
-    n1vmcb->_cr4 = v->arch.hvm_vcpu.guest_cr[4];
+    n1vmcb->_efer = v->arch.hvm.guest_efer;
+    n1vmcb->_cr0 = v->arch.hvm.guest_cr[0];
+    n1vmcb->_cr2 = v->arch.hvm.guest_cr[2];
+    n1vmcb->_cr4 = v->arch.hvm.guest_cr[4];
 
     /* Remember the host interrupt flag */
     svm->ns_hostflags.fields.rflagsif =
@@ -276,7 +276,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     v->arch.hvm_svm.vmcb_pa = nv->nv_n1vmcx_pa;
 
     /* EFER */
-    v->arch.hvm_vcpu.guest_efer = n1vmcb->_efer;
+    v->arch.hvm.guest_efer = n1vmcb->_efer;
     rc = hvm_set_efer(n1vmcb->_efer);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -284,7 +284,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
         gdprintk(XENLOG_ERR, "hvm_set_efer failed, rc: %u\n", rc);
 
     /* CR4 */
-    v->arch.hvm_vcpu.guest_cr[4] = n1vmcb->_cr4;
+    v->arch.hvm.guest_cr[4] = n1vmcb->_cr4;
     rc = hvm_set_cr4(n1vmcb->_cr4, 1);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -293,28 +293,28 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 
     /* CR0 */
     nestedsvm_fpu_vmexit(n1vmcb, n2vmcb,
-        svm->ns_cr0, v->arch.hvm_vcpu.guest_cr[0]);
-    v->arch.hvm_vcpu.guest_cr[0] = n1vmcb->_cr0 | X86_CR0_PE;
+        svm->ns_cr0, v->arch.hvm.guest_cr[0]);
+    v->arch.hvm.guest_cr[0] = n1vmcb->_cr0 | X86_CR0_PE;
     n1vmcb->rflags &= ~X86_EFLAGS_VM;
     rc = hvm_set_cr0(n1vmcb->_cr0 | X86_CR0_PE, 1);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
     if (rc != X86EMUL_OKAY)
         gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
-    svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
+    svm->ns_cr0 = v->arch.hvm.guest_cr[0];
 
     /* CR2 */
-    v->arch.hvm_vcpu.guest_cr[2] = n1vmcb->_cr2;
+    v->arch.hvm.guest_cr[2] = n1vmcb->_cr2;
     hvm_update_guest_cr(v, 2);
 
     /* CR3 */
     /* Nested paging mode */
     if (nestedhvm_paging_mode_hap(v)) {
         /* host nested paging + guest nested paging. */
-        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
+        /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
     } else if (paging_mode_hap(v->domain)) {
         /* host nested paging + guest shadow paging. */
-        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
+        /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
     } else {
         /* host shadow paging + guest shadow paging. */
 
@@ -322,7 +322,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
         if (!pagetable_is_null(v->arch.guest_table))
             put_page(pagetable_get_page(v->arch.guest_table));
         v->arch.guest_table = pagetable_null();
-        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
+        /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
     }
     rc = hvm_set_cr3(n1vmcb->_cr3, 1);
     if ( rc == X86EMUL_EXCEPTION )
@@ -549,7 +549,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     }
 
     /* EFER */
-    v->arch.hvm_vcpu.guest_efer = ns_vmcb->_efer;
+    v->arch.hvm.guest_efer = ns_vmcb->_efer;
     rc = hvm_set_efer(ns_vmcb->_efer);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -557,7 +557,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         gdprintk(XENLOG_ERR, "hvm_set_efer failed, rc: %u\n", rc);
 
     /* CR4 */
-    v->arch.hvm_vcpu.guest_cr[4] = ns_vmcb->_cr4;
+    v->arch.hvm.guest_cr[4] = ns_vmcb->_cr4;
     rc = hvm_set_cr4(ns_vmcb->_cr4, 1);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -565,9 +565,9 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         gdprintk(XENLOG_ERR, "hvm_set_cr4 failed, rc: %u\n", rc);
 
     /* CR0 */
-    svm->ns_cr0 = v->arch.hvm_vcpu.guest_cr[0];
+    svm->ns_cr0 = v->arch.hvm.guest_cr[0];
     cr0 = nestedsvm_fpu_vmentry(svm->ns_cr0, ns_vmcb, n1vmcb, n2vmcb);
-    v->arch.hvm_vcpu.guest_cr[0] = ns_vmcb->_cr0;
+    v->arch.hvm.guest_cr[0] = ns_vmcb->_cr0;
     rc = hvm_set_cr0(cr0, 1);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -575,7 +575,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
 
     /* CR2 */
-    v->arch.hvm_vcpu.guest_cr[2] = ns_vmcb->_cr2;
+    v->arch.hvm.guest_cr[2] = ns_vmcb->_cr2;
     hvm_update_guest_cr(v, 2);
 
     /* Nested paging mode */
@@ -585,7 +585,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
 
         nestedsvm_vmcb_set_nestedp2m(v, ns_vmcb, n2vmcb);
 
-        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
+        /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
         rc = hvm_set_cr3(ns_vmcb->_cr3, 1);
         if ( rc == X86EMUL_EXCEPTION )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -599,7 +599,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         /* When l1 guest does shadow paging
          * we assume it intercepts page faults.
          */
-        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us. */
+        /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
         rc = hvm_set_cr3(ns_vmcb->_cr3, 1);
         if ( rc == X86EMUL_EXCEPTION )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
@@ -1259,7 +1259,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
          * Delay the injection because this would result in delivering
          * an interrupt *within* the execution of an instruction.
          */
-        if ( v->arch.hvm_vcpu.hvm_io.io_req.state != STATE_IOREQ_NONE )
+        if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE )
             return hvm_intblk_shadow;
 
         if ( !nv->nv_vmexit_pending && n2vmcb->exitintinfo.bytes != 0 ) {
@@ -1681,7 +1681,7 @@ void svm_nested_features_on_efer_update(struct vcpu *v)
      * Need state for transfering the nested gif status so only write on
      * the hvm_vcpu EFER.SVME changing.
      */
-    if ( v->arch.hvm_vcpu.guest_efer & EFER_SVME )
+    if ( v->arch.hvm.guest_efer & EFER_SVME )
     {
         if ( !vmcb->virt_ext.fields.vloadsave_enable &&
              paging_mode_hap(v->domain) &&
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 2d52247..92b29b1 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -191,13 +191,13 @@ static void svm_set_icebp_interception(struct domain *d, bool enable)
 static void svm_save_dr(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    unsigned int flag_dr_dirty = v->arch.hvm_vcpu.flag_dr_dirty;
+    unsigned int flag_dr_dirty = v->arch.hvm.flag_dr_dirty;
 
     if ( !flag_dr_dirty )
         return;
 
     /* Clear the DR dirty flag and re-enable intercepts for DR accesses. */
-    v->arch.hvm_vcpu.flag_dr_dirty = 0;
+    v->arch.hvm.flag_dr_dirty = 0;
     vmcb_set_dr_intercepts(vmcb, ~0u);
 
     if ( v->domain->arch.cpuid->extd.dbext )
@@ -223,10 +223,10 @@ static void svm_save_dr(struct vcpu *v)
 
 static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
 {
-    if ( v->arch.hvm_vcpu.flag_dr_dirty )
+    if ( v->arch.hvm.flag_dr_dirty )
         return;
 
-    v->arch.hvm_vcpu.flag_dr_dirty = 1;
+    v->arch.hvm.flag_dr_dirty = 1;
     vmcb_set_dr_intercepts(vmcb, 0);
 
     ASSERT(v == current);
@@ -269,10 +269,10 @@ static int svm_vmcb_save(struct vcpu *v, struct hvm_hw_cpu *c)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
 
-    c->cr0 = v->arch.hvm_vcpu.guest_cr[0];
-    c->cr2 = v->arch.hvm_vcpu.guest_cr[2];
-    c->cr3 = v->arch.hvm_vcpu.guest_cr[3];
-    c->cr4 = v->arch.hvm_vcpu.guest_cr[4];
+    c->cr0 = v->arch.hvm.guest_cr[0];
+    c->cr2 = v->arch.hvm.guest_cr[2];
+    c->cr3 = v->arch.hvm.guest_cr[3];
+    c->cr4 = v->arch.hvm.guest_cr[4];
 
     c->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs;
     c->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp;
@@ -330,17 +330,17 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
             }
         }
 
-        if ( v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PG )
+        if ( v->arch.hvm.guest_cr[0] & X86_CR0_PG )
             put_page(pagetable_get_page(v->arch.guest_table));
 
         v->arch.guest_table =
             page ? pagetable_from_page(page) : pagetable_null();
     }
 
-    v->arch.hvm_vcpu.guest_cr[0] = c->cr0 | X86_CR0_ET;
-    v->arch.hvm_vcpu.guest_cr[2] = c->cr2;
-    v->arch.hvm_vcpu.guest_cr[3] = c->cr3;
-    v->arch.hvm_vcpu.guest_cr[4] = c->cr4;
+    v->arch.hvm.guest_cr[0] = c->cr0 | X86_CR0_ET;
+    v->arch.hvm.guest_cr[2] = c->cr2;
+    v->arch.hvm.guest_cr[3] = c->cr3;
+    v->arch.hvm.guest_cr[4] = c->cr4;
     svm_update_guest_cr(v, 0, 0);
     svm_update_guest_cr(v, 2, 0);
     svm_update_guest_cr(v, 4, 0);
@@ -384,7 +384,7 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
     data->msr_star         = vmcb->star;
     data->msr_cstar        = vmcb->cstar;
     data->msr_syscall_mask = vmcb->sfmask;
-    data->msr_efer         = v->arch.hvm_vcpu.guest_efer;
+    data->msr_efer         = v->arch.hvm.guest_efer;
     data->msr_flags        = 0;
 }
 
@@ -398,7 +398,7 @@ static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
     vmcb->star       = data->msr_star;
     vmcb->cstar      = data->msr_cstar;
     vmcb->sfmask     = data->msr_syscall_mask;
-    v->arch.hvm_vcpu.guest_efer = data->msr_efer;
+    v->arch.hvm.guest_efer = data->msr_efer;
     svm_update_guest_efer(v);
 }
 
@@ -509,7 +509,7 @@ static void svm_fpu_leave(struct vcpu *v)
      * then this is not necessary: no FPU activity can occur until the guest 
      * clears CR0.TS, and we will initialise the FPU when that happens.
      */
-    if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+    if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
     {
         vmcb_set_exception_intercepts(
             n1vmcb,
@@ -550,7 +550,7 @@ static int svm_guest_x86_mode(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
 
-    if ( unlikely(!(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE)) )
+    if ( unlikely(!(v->arch.hvm.guest_cr[0] & X86_CR0_PE)) )
         return 0;
     if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
         return 1;
@@ -569,7 +569,7 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
     case 0: {
         unsigned long hw_cr0_mask = 0;
 
-        if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+        if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
         {
             if ( v != current )
             {
@@ -590,17 +590,17 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
                vmcb_set_cr_intercepts(vmcb, intercepts | CR_INTERCEPT_CR3_WRITE);
         }
 
-        value = v->arch.hvm_vcpu.guest_cr[0] | hw_cr0_mask;
+        value = v->arch.hvm.guest_cr[0] | hw_cr0_mask;
         if ( !paging_mode_hap(v->domain) )
             value |= X86_CR0_PG | X86_CR0_WP;
         vmcb_set_cr0(vmcb, value);
         break;
     }
     case 2:
-        vmcb_set_cr2(vmcb, v->arch.hvm_vcpu.guest_cr[2]);
+        vmcb_set_cr2(vmcb, v->arch.hvm.guest_cr[2]);
         break;
     case 3:
-        vmcb_set_cr3(vmcb, v->arch.hvm_vcpu.hw_cr[3]);
+        vmcb_set_cr3(vmcb, v->arch.hvm.hw_cr[3]);
         if ( !nestedhvm_enabled(v->domain) )
         {
             if ( !(flags & HVM_UPDATE_GUEST_CR3_NOFLUSH) )
@@ -611,13 +611,13 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
         else if ( !(flags & HVM_UPDATE_GUEST_CR3_NOFLUSH) )
             hvm_asid_flush_vcpu_asid(
                 nestedhvm_vcpu_in_guestmode(v)
-                ? &vcpu_nestedhvm(v).nv_n2asid : &v->arch.hvm_vcpu.n1asid);
+                ? &vcpu_nestedhvm(v).nv_n2asid : &v->arch.hvm.n1asid);
         break;
     case 4:
         value = HVM_CR4_HOST_MASK;
         if ( paging_mode_hap(v->domain) )
             value &= ~X86_CR4_PAE;
-        value |= v->arch.hvm_vcpu.guest_cr[4];
+        value |= v->arch.hvm.guest_cr[4];
 
         if ( !hvm_paging_enabled(v) )
         {
@@ -646,16 +646,16 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
 static void svm_update_guest_efer(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    bool_t lma = !!(v->arch.hvm_vcpu.guest_efer & EFER_LMA);
+    bool_t lma = !!(v->arch.hvm.guest_efer & EFER_LMA);
     uint64_t new_efer;
 
-    new_efer = (v->arch.hvm_vcpu.guest_efer | EFER_SVME) & ~EFER_LME;
+    new_efer = (v->arch.hvm.guest_efer | EFER_SVME) & ~EFER_LME;
     if ( lma )
         new_efer |= EFER_LME;
     vmcb_set_efer(vmcb, new_efer);
 
     ASSERT(nestedhvm_enabled(v->domain) ||
-           !(v->arch.hvm_vcpu.guest_efer & EFER_SVME));
+           !(v->arch.hvm.guest_efer & EFER_SVME));
 
     if ( nestedhvm_enabled(v->domain) )
         svm_nested_features_on_efer_update(v);
@@ -1140,11 +1140,11 @@ static void noreturn svm_do_resume(struct vcpu *v)
         vcpu_guestmode = 1;
 
     if ( !vcpu_guestmode &&
-        unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
+        unlikely(v->arch.hvm.debug_state_latch != debug_state) )
     {
         uint32_t intercepts = vmcb_get_exception_intercepts(vmcb);
 
-        v->arch.hvm_vcpu.debug_state_latch = debug_state;
+        v->arch.hvm.debug_state_latch = debug_state;
         vmcb_set_exception_intercepts(
             vmcb, debug_state ? (intercepts | (1U << TRAP_int3))
                               : (intercepts & ~(1U << TRAP_int3)));
@@ -1458,7 +1458,7 @@ static void svm_inject_event(const struct x86_event *event)
 
     case TRAP_page_fault:
         ASSERT(_event.type == X86_EVENTTYPE_HW_EXCEPTION);
-        curr->arch.hvm_vcpu.guest_cr[2] = _event.cr2;
+        curr->arch.hvm.guest_cr[2] = _event.cr2;
         vmcb_set_cr2(vmcb, _event.cr2);
         break;
     }
@@ -1800,14 +1800,14 @@ static void svm_fpu_dirty_intercept(void)
     if ( vmcb != n1vmcb )
     {
        /* Check if l1 guest must make FPU ready for the l2 guest */
-       if ( v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS )
+       if ( v->arch.hvm.guest_cr[0] & X86_CR0_TS )
            hvm_inject_hw_exception(TRAP_no_device, X86_EVENT_NO_EC);
        else
            vmcb_set_cr0(n1vmcb, vmcb_get_cr0(n1vmcb) & ~X86_CR0_TS);
        return;
     }
 
-    if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+    if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
         vmcb_set_cr0(vmcb, vmcb_get_cr0(vmcb) & ~X86_CR0_TS);
 }
 
@@ -2492,7 +2492,7 @@ static void svm_invlpga_intercept(
 {
     svm_invlpga(vaddr,
                 (asid == 0)
-                ? v->arch.hvm_vcpu.n1asid.asid
+                ? v->arch.hvm.n1asid.asid
                 : vcpu_nestedhvm(v).nv_n2asid.asid);
 }
 
@@ -2609,8 +2609,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     hvm_invalidate_regs_fields(regs);
 
     if ( paging_mode_hap(v->domain) )
-        v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3] =
-            vmcb_get_cr3(vmcb);
+        v->arch.hvm.guest_cr[3] = v->arch.hvm.hw_cr[3] = vmcb_get_cr3(vmcb);
 
     if ( nestedhvm_enabled(v->domain) && nestedhvm_vcpu_in_guestmode(v) )
         vcpu_guestmode = 1;
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index d31fcfa..3776c53 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -125,7 +125,7 @@ static int construct_vmcb(struct vcpu *v)
     }
 
     /* Guest EFER. */
-    v->arch.hvm_vcpu.guest_efer = 0;
+    v->arch.hvm.guest_efer = 0;
     hvm_update_guest_efer(v);
 
     /* Guest segment limits. */
@@ -171,10 +171,10 @@ static int construct_vmcb(struct vcpu *v)
     vmcb->tr.base = 0;
     vmcb->tr.limit = 0xff;
 
-    v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
+    v->arch.hvm.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
     hvm_update_guest_cr(v, 0);
 
-    v->arch.hvm_vcpu.guest_cr[4] = 0;
+    v->arch.hvm.guest_cr[4] = 0;
     hvm_update_guest_cr(v, 4);
 
     paging_update_paging_modes(v);
diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
index 5ddb41b..e84c4f4 100644
--- a/xen/arch/x86/hvm/viridian.c
+++ b/xen/arch/x86/hvm/viridian.c
@@ -326,7 +326,7 @@ static void dump_vp_assist(const struct vcpu *v)
 {
     const union viridian_vp_assist *va;
 
-    va = &v->arch.hvm_vcpu.viridian.vp_assist.msr;
+    va = &v->arch.hvm.viridian.vp_assist.msr;
 
     printk(XENLOG_G_INFO "%pv: VIRIDIAN VP_ASSIST_PAGE: enabled: %x pfn: %lx\n",
            v, va->fields.enabled, (unsigned long)va->fields.pfn);
@@ -380,11 +380,11 @@ static void enable_hypercall_page(struct domain *d)
 static void initialize_vp_assist(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    unsigned long gmfn = v->arch.hvm_vcpu.viridian.vp_assist.msr.fields.pfn;
+    unsigned long gmfn = v->arch.hvm.viridian.vp_assist.msr.fields.pfn;
     struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
     void *va;
 
-    ASSERT(!v->arch.hvm_vcpu.viridian.vp_assist.va);
+    ASSERT(!v->arch.hvm.viridian.vp_assist.va);
 
     /*
      * See section 7.8.7 of the specification for details of this
@@ -409,7 +409,7 @@ static void initialize_vp_assist(struct vcpu *v)
 
     clear_page(va);
 
-    v->arch.hvm_vcpu.viridian.vp_assist.va = va;
+    v->arch.hvm.viridian.vp_assist.va = va;
     return;
 
  fail:
@@ -419,13 +419,13 @@ static void initialize_vp_assist(struct vcpu *v)
 
 static void teardown_vp_assist(struct vcpu *v)
 {
-    void *va = v->arch.hvm_vcpu.viridian.vp_assist.va;
+    void *va = v->arch.hvm.viridian.vp_assist.va;
     struct page_info *page;
 
     if ( !va )
         return;
 
-    v->arch.hvm_vcpu.viridian.vp_assist.va = NULL;
+    v->arch.hvm.viridian.vp_assist.va = NULL;
 
     page = mfn_to_page(domain_page_map_to_mfn(va));
 
@@ -435,7 +435,7 @@ static void teardown_vp_assist(struct vcpu *v)
 
 void viridian_apic_assist_set(struct vcpu *v)
 {
-    uint32_t *va = v->arch.hvm_vcpu.viridian.vp_assist.va;
+    uint32_t *va = v->arch.hvm.viridian.vp_assist.va;
 
     if ( !va )
         return;
@@ -445,25 +445,25 @@ void viridian_apic_assist_set(struct vcpu *v)
      * wrong and the VM will most likely hang so force a crash now
      * to make the problem clear.
      */
-    if ( v->arch.hvm_vcpu.viridian.vp_assist.pending )
+    if ( v->arch.hvm.viridian.vp_assist.pending )
         domain_crash(v->domain);
 
-    v->arch.hvm_vcpu.viridian.vp_assist.pending = true;
+    v->arch.hvm.viridian.vp_assist.pending = true;
     *va |= 1u;
 }
 
 bool viridian_apic_assist_completed(struct vcpu *v)
 {
-    uint32_t *va = v->arch.hvm_vcpu.viridian.vp_assist.va;
+    uint32_t *va = v->arch.hvm.viridian.vp_assist.va;
 
     if ( !va )
         return false;
 
-    if ( v->arch.hvm_vcpu.viridian.vp_assist.pending &&
+    if ( v->arch.hvm.viridian.vp_assist.pending &&
          !(*va & 1u) )
     {
         /* An EOI has been avoided */
-        v->arch.hvm_vcpu.viridian.vp_assist.pending = false;
+        v->arch.hvm.viridian.vp_assist.pending = false;
         return true;
     }
 
@@ -472,13 +472,13 @@ bool viridian_apic_assist_completed(struct vcpu *v)
 
 void viridian_apic_assist_clear(struct vcpu *v)
 {
-    uint32_t *va = v->arch.hvm_vcpu.viridian.vp_assist.va;
+    uint32_t *va = v->arch.hvm.viridian.vp_assist.va;
 
     if ( !va )
         return;
 
     *va &= ~1u;
-    v->arch.hvm_vcpu.viridian.vp_assist.pending = false;
+    v->arch.hvm.viridian.vp_assist.pending = false;
 }
 
 static void update_reference_tsc(struct domain *d, bool_t initialize)
@@ -607,9 +607,9 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
     case HV_X64_MSR_VP_ASSIST_PAGE:
         perfc_incr(mshv_wrmsr_apic_msr);
         teardown_vp_assist(v); /* release any previous mapping */
-        v->arch.hvm_vcpu.viridian.vp_assist.msr.raw = val;
+        v->arch.hvm.viridian.vp_assist.msr.raw = val;
         dump_vp_assist(v);
-        if ( v->arch.hvm_vcpu.viridian.vp_assist.msr.fields.enabled )
+        if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled )
             initialize_vp_assist(v);
         break;
 
@@ -630,10 +630,10 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm_vcpu.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        v->arch.hvm_vcpu.viridian.crash_param[idx] = val;
+        v->arch.hvm.viridian.crash_param[idx] = val;
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -646,11 +646,11 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
             break;
 
         gprintk(XENLOG_WARNING, "VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
-                v->arch.hvm_vcpu.viridian.crash_param[0],
-                v->arch.hvm_vcpu.viridian.crash_param[1],
-                v->arch.hvm_vcpu.viridian.crash_param[2],
-                v->arch.hvm_vcpu.viridian.crash_param[3],
-                v->arch.hvm_vcpu.viridian.crash_param[4]);
+                v->arch.hvm.viridian.crash_param[0],
+                v->arch.hvm.viridian.crash_param[1],
+                v->arch.hvm.viridian.crash_param[2],
+                v->arch.hvm.viridian.crash_param[3],
+                v->arch.hvm.viridian.crash_param[4]);
         break;
     }
 
@@ -752,7 +752,7 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
 
     case HV_X64_MSR_VP_ASSIST_PAGE:
         perfc_incr(mshv_rdmsr_apic_msr);
-        *val = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
+        *val = v->arch.hvm.viridian.vp_assist.msr.raw;
         break;
 
     case HV_X64_MSR_REFERENCE_TSC:
@@ -787,10 +787,10 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
     case HV_X64_MSR_CRASH_P3:
     case HV_X64_MSR_CRASH_P4:
         BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
-                     ARRAY_SIZE(v->arch.hvm_vcpu.viridian.crash_param));
+                     ARRAY_SIZE(v->arch.hvm.viridian.crash_param));
 
         idx -= HV_X64_MSR_CRASH_P0;
-        *val = v->arch.hvm_vcpu.viridian.crash_param[idx];
+        *val = v->arch.hvm.viridian.crash_param[idx];
         break;
 
     case HV_X64_MSR_CRASH_CTL:
@@ -1035,8 +1035,8 @@ static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 
     for_each_vcpu( d, v ) {
         struct hvm_viridian_vcpu_context ctxt = {
-            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
-            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
+            .vp_assist_msr = v->arch.hvm.viridian.vp_assist.msr.raw,
+            .vp_assist_pending = v->arch.hvm.viridian.vp_assist.pending,
         };
 
         if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
@@ -1065,12 +1065,12 @@ static int viridian_load_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
     if ( memcmp(&ctxt._pad, zero_page, sizeof(ctxt._pad)) )
         return -EINVAL;
 
-    v->arch.hvm_vcpu.viridian.vp_assist.msr.raw = ctxt.vp_assist_msr;
-    if ( v->arch.hvm_vcpu.viridian.vp_assist.msr.fields.enabled &&
-         !v->arch.hvm_vcpu.viridian.vp_assist.va )
+    v->arch.hvm.viridian.vp_assist.msr.raw = ctxt.vp_assist_msr;
+    if ( v->arch.hvm.viridian.vp_assist.msr.fields.enabled &&
+         !v->arch.hvm.viridian.vp_assist.va )
         initialize_vp_assist(v);
 
-    v->arch.hvm_vcpu.viridian.vp_assist.pending = !!ctxt.vp_assist_pending;
+    v->arch.hvm.viridian.vp_assist.pending = !!ctxt.vp_assist_pending;
 
     return 0;
 }
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index ccbf181..0c5d0cb 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -311,7 +311,7 @@ static int msixtbl_write(struct vcpu *v, unsigned long address,
     if ( !(val & PCI_MSIX_VECTOR_BITMASK) &&
          test_and_clear_bit(nr_entry, &entry->table_flags) )
     {
-        v->arch.hvm_vcpu.hvm_io.msix_unmask_address = address;
+        v->arch.hvm.hvm_io.msix_unmask_address = address;
         goto out;
     }
 
@@ -383,8 +383,8 @@ static bool_t msixtbl_range(const struct hvm_io_handler *handler,
                   PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET) &&
                  !(data & PCI_MSIX_VECTOR_BITMASK) )
             {
-                curr->arch.hvm_vcpu.hvm_io.msix_snoop_address = addr;
-                curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa = 0;
+                curr->arch.hvm.hvm_io.msix_snoop_address = addr;
+                curr->arch.hvm.hvm_io.msix_snoop_gpa = 0;
             }
         }
         else if ( (size == 4 || size == 8) &&
@@ -401,9 +401,9 @@ static bool_t msixtbl_range(const struct hvm_io_handler *handler,
             BUILD_BUG_ON((PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET + 4) &
                          (PCI_MSIX_ENTRY_SIZE - 1));
 
-            curr->arch.hvm_vcpu.hvm_io.msix_snoop_address =
+            curr->arch.hvm.hvm_io.msix_snoop_address =
                 addr + size * r->count - 4;
-            curr->arch.hvm_vcpu.hvm_io.msix_snoop_gpa =
+            curr->arch.hvm.hvm_io.msix_snoop_gpa =
                 r->data + size * r->count - 4;
         }
     }
@@ -506,13 +506,13 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
         for_each_vcpu ( d, v )
         {
             if ( (v->pause_flags & VPF_blocked_in_xen) &&
-                 !v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa &&
-                 v->arch.hvm_vcpu.hvm_io.msix_snoop_address ==
+                 !v->arch.hvm.hvm_io.msix_snoop_gpa &&
+                 v->arch.hvm.hvm_io.msix_snoop_address ==
                  (gtable + msi_desc->msi_attrib.entry_nr *
                            PCI_MSIX_ENTRY_SIZE +
                   PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET) )
-                v->arch.hvm_vcpu.hvm_io.msix_unmask_address =
-                    v->arch.hvm_vcpu.hvm_io.msix_snoop_address;
+                v->arch.hvm.hvm_io.msix_unmask_address =
+                    v->arch.hvm.hvm_io.msix_snoop_address;
         }
     }
 
@@ -592,13 +592,13 @@ void msixtbl_pt_cleanup(struct domain *d)
 
 void msix_write_completion(struct vcpu *v)
 {
-    unsigned long ctrl_address = v->arch.hvm_vcpu.hvm_io.msix_unmask_address;
-    unsigned long snoop_addr = v->arch.hvm_vcpu.hvm_io.msix_snoop_address;
+    unsigned long ctrl_address = v->arch.hvm.hvm_io.msix_unmask_address;
+    unsigned long snoop_addr = v->arch.hvm.hvm_io.msix_snoop_address;
 
-    v->arch.hvm_vcpu.hvm_io.msix_snoop_address = 0;
+    v->arch.hvm.hvm_io.msix_snoop_address = 0;
 
     if ( !ctrl_address && snoop_addr &&
-         v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa )
+         v->arch.hvm.hvm_io.msix_snoop_gpa )
     {
         const struct msi_desc *desc;
         uint32_t data;
@@ -610,7 +610,7 @@ void msix_write_completion(struct vcpu *v)
 
         if ( desc &&
              hvm_copy_from_guest_phys(&data,
-                                      v->arch.hvm_vcpu.hvm_io.msix_snoop_gpa,
+                                      v->arch.hvm.hvm_io.msix_snoop_gpa,
                                       sizeof(data)) == HVMTRANS_okay &&
              !(data & PCI_MSIX_VECTOR_BITMASK) )
             ctrl_address = snoop_addr;
@@ -619,7 +619,7 @@ void msix_write_completion(struct vcpu *v)
     if ( !ctrl_address )
         return;
 
-    v->arch.hvm_vcpu.hvm_io.msix_unmask_address = 0;
+    v->arch.hvm.hvm_io.msix_unmask_address = 0;
     if ( msixtbl_write(v, ctrl_address, 4, 0) != X86EMUL_OKAY )
         gdprintk(XENLOG_WARNING, "MSI-X write completion failure\n");
 }
diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index eb9b288..889067c 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -232,7 +232,7 @@ void vmx_intr_assist(void)
     int pt_vector;
 
     /* Block event injection when single step with MTF. */
-    if ( unlikely(v->arch.hvm_vcpu.single_step) )
+    if ( unlikely(v->arch.hvm.single_step) )
     {
         v->arch.hvm_vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index b20d8c4..032a681 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -96,7 +96,7 @@ static void realmode_deliver_exception(
 void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)
 {
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     int rc;
 
     perfc_incr(realmode_emulations);
@@ -115,7 +115,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)
     if ( rc == X86EMUL_UNRECOGNIZED )
     {
         gdprintk(XENLOG_ERR, "Unrecognized insn.\n");
-        if ( curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE )
+        if ( curr->arch.hvm.guest_cr[0] & X86_CR0_PE )
             goto fail;
 
         realmode_deliver_exception(TRAP_invalid_op, 0, hvmemul_ctxt);
@@ -129,7 +129,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)
         {
             domain_pause_for_debugger();
         }
-        else if ( curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE )
+        else if ( curr->arch.hvm.guest_cr[0] & X86_CR0_PE )
         {
             gdprintk(XENLOG_ERR, "Exception %02x in protected mode.\n",
                      hvmemul_ctxt->ctxt.event.vector);
@@ -156,7 +156,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
     struct vcpu *curr = current;
     struct hvm_emulate_ctxt hvmemul_ctxt;
     struct segment_register *sreg;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
+    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
     unsigned long intr_info;
     unsigned int emulations = 0;
 
@@ -168,7 +168,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
     hvm_emulate_init_once(&hvmemul_ctxt, NULL, regs);
 
     /* Only deliver interrupts into emulated real mode. */
-    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) &&
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_PE) &&
          (intr_info & INTR_INFO_VALID_MASK) )
     {
         realmode_deliver_exception((uint8_t)intr_info, 0, &hvmemul_ctxt);
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index f30850c..5e4a6b1 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1233,10 +1233,10 @@ static int construct_vmcs(struct vcpu *v)
               | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
     vmx_update_exception_bitmap(v);
 
-    v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
+    v->arch.hvm.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
     hvm_update_guest_cr(v, 0);
 
-    v->arch.hvm_vcpu.guest_cr[4] = 0;
+    v->arch.hvm.guest_cr[4] = 0;
     hvm_update_guest_cr(v, 4);
 
     if ( cpu_has_vmx_tpr_shadow )
@@ -1838,9 +1838,9 @@ void vmx_do_resume(struct vcpu *v)
                   || v->domain->arch.monitor.software_breakpoint_enabled
                   || v->domain->arch.monitor.singlestep_enabled;
 
-    if ( unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
+    if ( unlikely(v->arch.hvm.debug_state_latch != debug_state) )
     {
-        v->arch.hvm_vcpu.debug_state_latch = debug_state;
+        v->arch.hvm.debug_state_latch = debug_state;
         vmx_update_debug_state(v);
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index ccfbacb..4abd327 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -581,7 +581,7 @@ int vmx_guest_x86_mode(struct vcpu *v)
 {
     unsigned long cs_ar_bytes;
 
-    if ( unlikely(!(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE)) )
+    if ( unlikely(!(v->arch.hvm.guest_cr[0] & X86_CR0_PE)) )
         return 0;
     if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
         return 1;
@@ -594,11 +594,11 @@ int vmx_guest_x86_mode(struct vcpu *v)
 
 static void vmx_save_dr(struct vcpu *v)
 {
-    if ( !v->arch.hvm_vcpu.flag_dr_dirty )
+    if ( !v->arch.hvm.flag_dr_dirty )
         return;
 
     /* Clear the DR dirty flag and re-enable intercepts for DR accesses. */
-    v->arch.hvm_vcpu.flag_dr_dirty = 0;
+    v->arch.hvm.flag_dr_dirty = 0;
     v->arch.hvm_vmx.exec_control |= CPU_BASED_MOV_DR_EXITING;
     vmx_update_cpu_exec_control(v);
 
@@ -613,10 +613,10 @@ static void vmx_save_dr(struct vcpu *v)
 
 static void __restore_debug_registers(struct vcpu *v)
 {
-    if ( v->arch.hvm_vcpu.flag_dr_dirty )
+    if ( v->arch.hvm.flag_dr_dirty )
         return;
 
-    v->arch.hvm_vcpu.flag_dr_dirty = 1;
+    v->arch.hvm.flag_dr_dirty = 1;
 
     write_debugreg(0, v->arch.debugreg[0]);
     write_debugreg(1, v->arch.debugreg[1]);
@@ -645,12 +645,12 @@ static void vmx_vmcs_save(struct vcpu *v, struct hvm_hw_cpu *c)
 
     vmx_vmcs_enter(v);
 
-    c->cr0 = v->arch.hvm_vcpu.guest_cr[0];
-    c->cr2 = v->arch.hvm_vcpu.guest_cr[2];
-    c->cr3 = v->arch.hvm_vcpu.guest_cr[3];
-    c->cr4 = v->arch.hvm_vcpu.guest_cr[4];
+    c->cr0 = v->arch.hvm.guest_cr[0];
+    c->cr2 = v->arch.hvm.guest_cr[2];
+    c->cr3 = v->arch.hvm.guest_cr[3];
+    c->cr4 = v->arch.hvm.guest_cr[4];
 
-    c->msr_efer = v->arch.hvm_vcpu.guest_efer;
+    c->msr_efer = v->arch.hvm.guest_efer;
 
     __vmread(GUEST_SYSENTER_CS, &c->sysenter_cs);
     __vmread(GUEST_SYSENTER_ESP, &c->sysenter_esp);
@@ -696,8 +696,8 @@ static int vmx_restore_cr0_cr3(
             page ? pagetable_from_page(page) : pagetable_null();
     }
 
-    v->arch.hvm_vcpu.guest_cr[0] = cr0 | X86_CR0_ET;
-    v->arch.hvm_vcpu.guest_cr[3] = cr3;
+    v->arch.hvm.guest_cr[0] = cr0 | X86_CR0_ET;
+    v->arch.hvm.guest_cr[3] = cr3;
 
     return 0;
 }
@@ -731,13 +731,13 @@ static int vmx_vmcs_restore(struct vcpu *v, struct hvm_hw_cpu *c)
 
     vmx_vmcs_enter(v);
 
-    v->arch.hvm_vcpu.guest_cr[2] = c->cr2;
-    v->arch.hvm_vcpu.guest_cr[4] = c->cr4;
+    v->arch.hvm.guest_cr[2] = c->cr2;
+    v->arch.hvm.guest_cr[4] = c->cr4;
     vmx_update_guest_cr(v, 0, 0);
     vmx_update_guest_cr(v, 2, 0);
     vmx_update_guest_cr(v, 4, 0);
 
-    v->arch.hvm_vcpu.guest_efer = c->msr_efer;
+    v->arch.hvm.guest_efer = c->msr_efer;
     vmx_update_guest_efer(v);
 
     __vmwrite(GUEST_SYSENTER_CS, c->sysenter_cs);
@@ -827,7 +827,7 @@ static void vmx_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
 
     if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
     {
-        ctxt->msr[ctxt->count].val = v->arch.hvm_vcpu.msr_xss;
+        ctxt->msr[ctxt->count].val = v->arch.hvm.msr_xss;
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_IA32_XSS;
     }
@@ -854,7 +854,7 @@ static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
             break;
         case MSR_IA32_XSS:
             if ( cpu_has_xsaves && cpu_has_vmx_xsaves )
-                v->arch.hvm_vcpu.msr_xss = ctxt->msr[i].val;
+                v->arch.hvm.msr_xss = ctxt->msr[i].val;
             else
                 err = -ENXIO;
             break;
@@ -897,10 +897,10 @@ static void vmx_fpu_leave(struct vcpu *v)
      * then this is not necessary: no FPU activity can occur until the guest
      * clears CR0.TS, and we will initialise the FPU when that happens.
      */
-    if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+    if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
     {
-        v->arch.hvm_vcpu.hw_cr[0] |= X86_CR0_TS;
-        __vmwrite(GUEST_CR0, v->arch.hvm_vcpu.hw_cr[0]);
+        v->arch.hvm.hw_cr[0] |= X86_CR0_TS;
+        __vmwrite(GUEST_CR0, v->arch.hvm.hw_cr[0]);
         v->arch.hvm_vmx.exception_bitmap |= (1u << TRAP_no_device);
         vmx_update_exception_bitmap(v);
     }
@@ -1192,7 +1192,7 @@ static unsigned long vmx_get_shadow_gs_base(struct vcpu *v)
 static int vmx_set_guest_pat(struct vcpu *v, u64 gpat)
 {
     if ( !paging_mode_hap(v->domain) ||
-         unlikely(v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
+         unlikely(v->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) )
         return 0;
 
     vmx_vmcs_enter(v);
@@ -1204,7 +1204,7 @@ static int vmx_set_guest_pat(struct vcpu *v, u64 gpat)
 static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
 {
     if ( !paging_mode_hap(v->domain) ||
-         unlikely(v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
+         unlikely(v->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) )
         return 0;
 
     vmx_vmcs_enter(v);
@@ -1248,7 +1248,7 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
     }
     else
     {
-        u64 *pat = &v->arch.hvm_vcpu.pat_cr;
+        u64 *pat = &v->arch.hvm.pat_cr;
 
         if ( value & X86_CR0_CD )
         {
@@ -1272,11 +1272,11 @@ static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 
             wbinvd();               /* flush possibly polluted cache */
             hvm_asid_flush_vcpu(v); /* invalidate memory type cached in TLB */
-            v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
+            v->arch.hvm.cache_mode = NO_FILL_CACHE_MODE;
         }
         else
         {
-            v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
+            v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
             vmx_set_guest_pat(v, *pat);
             if ( !iommu_enabled || iommu_snoop )
                 vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
@@ -1369,14 +1369,14 @@ static void vmx_set_interrupt_shadow(struct vcpu *v, unsigned int intr_shadow)
 
 static void vmx_load_pdptrs(struct vcpu *v)
 {
-    unsigned long cr3 = v->arch.hvm_vcpu.guest_cr[3];
+    unsigned long cr3 = v->arch.hvm.guest_cr[3];
     uint64_t *guest_pdptes;
     struct page_info *page;
     p2m_type_t p2mt;
     char *p;
 
     /* EPT needs to load PDPTRS into VMCS for PAE. */
-    if ( !hvm_pae_enabled(v) || (v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+    if ( !hvm_pae_enabled(v) || (v->arch.hvm.guest_efer & EFER_LMA) )
         return;
 
     if ( (cr3 & 0x1fUL) && !hvm_pcid_enabled(v) )
@@ -1430,7 +1430,7 @@ static void vmx_update_host_cr3(struct vcpu *v)
 
 void vmx_update_debug_state(struct vcpu *v)
 {
-    if ( v->arch.hvm_vcpu.debug_state_latch )
+    if ( v->arch.hvm.debug_state_latch )
         v->arch.hvm_vmx.exception_bitmap |= 1U << TRAP_int3;
     else
         v->arch.hvm_vmx.exception_bitmap &= ~(1U << TRAP_int3);
@@ -1479,22 +1479,22 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
         }
 
         if ( !nestedhvm_vcpu_in_guestmode(v) )
-            __vmwrite(CR0_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[0]);
+            __vmwrite(CR0_READ_SHADOW, v->arch.hvm.guest_cr[0]);
         else
             nvmx_set_cr_read_shadow(v, 0);
 
-        if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+        if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
         {
             if ( v != current )
             {
                 if ( !v->arch.fully_eager_fpu )
                     hw_cr0_mask |= X86_CR0_TS;
             }
-            else if ( v->arch.hvm_vcpu.hw_cr[0] & X86_CR0_TS )
+            else if ( v->arch.hvm.hw_cr[0] & X86_CR0_TS )
                 vmx_fpu_enter(v);
         }
 
-        realmode = !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE);
+        realmode = !(v->arch.hvm.guest_cr[0] & X86_CR0_PE);
 
         if ( !vmx_unrestricted_guest(v) &&
              (realmode != v->arch.hvm_vmx.vmx_realmode) )
@@ -1527,24 +1527,24 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
             vmx_update_exception_bitmap(v);
         }
 
-        v->arch.hvm_vcpu.hw_cr[0] =
-            v->arch.hvm_vcpu.guest_cr[0] | hw_cr0_mask;
-        __vmwrite(GUEST_CR0, v->arch.hvm_vcpu.hw_cr[0]);
+        v->arch.hvm.hw_cr[0] =
+            v->arch.hvm.guest_cr[0] | hw_cr0_mask;
+        __vmwrite(GUEST_CR0, v->arch.hvm.hw_cr[0]);
     }
         /* Fallthrough: Changing CR0 can change some bits in real CR4. */
     case 4:
-        v->arch.hvm_vcpu.hw_cr[4] = HVM_CR4_HOST_MASK;
+        v->arch.hvm.hw_cr[4] = HVM_CR4_HOST_MASK;
         if ( paging_mode_hap(v->domain) )
-            v->arch.hvm_vcpu.hw_cr[4] &= ~X86_CR4_PAE;
+            v->arch.hvm.hw_cr[4] &= ~X86_CR4_PAE;
 
         if ( !nestedhvm_vcpu_in_guestmode(v) )
-            __vmwrite(CR4_READ_SHADOW, v->arch.hvm_vcpu.guest_cr[4]);
+            __vmwrite(CR4_READ_SHADOW, v->arch.hvm.guest_cr[4]);
         else
             nvmx_set_cr_read_shadow(v, 4);
 
-        v->arch.hvm_vcpu.hw_cr[4] |= v->arch.hvm_vcpu.guest_cr[4];
+        v->arch.hvm.hw_cr[4] |= v->arch.hvm.guest_cr[4];
         if ( v->arch.hvm_vmx.vmx_realmode )
-            v->arch.hvm_vcpu.hw_cr[4] |= X86_CR4_VME;
+            v->arch.hvm.hw_cr[4] |= X86_CR4_VME;
 
         if ( !hvm_paging_enabled(v) )
         {
@@ -1564,8 +1564,8 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
                  * HVM_PARAM_IDENT_PT which is a 32bit pagetable using 4M
                  * superpages.  Override the guests paging settings to match.
                  */
-                v->arch.hvm_vcpu.hw_cr[4] |= X86_CR4_PSE;
-                v->arch.hvm_vcpu.hw_cr[4] &= ~X86_CR4_PAE;
+                v->arch.hvm.hw_cr[4] |= X86_CR4_PSE;
+                v->arch.hvm.hw_cr[4] &= ~X86_CR4_PAE;
             }
 
             /*
@@ -1576,10 +1576,10 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
              * effect if paging was actually disabled, so hide them behind the
              * back of the guest.
              */
-            v->arch.hvm_vcpu.hw_cr[4] &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
+            v->arch.hvm.hw_cr[4] &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
         }
 
-        __vmwrite(GUEST_CR4, v->arch.hvm_vcpu.hw_cr[4]);
+        __vmwrite(GUEST_CR4, v->arch.hvm.hw_cr[4]);
 
         /*
          * Shadow path has not been optimized because it requires
@@ -1625,12 +1625,12 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
         if ( paging_mode_hap(v->domain) )
         {
             if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
-                v->arch.hvm_vcpu.hw_cr[3] =
+                v->arch.hvm.hw_cr[3] =
                     v->domain->arch.hvm.params[HVM_PARAM_IDENT_PT];
             vmx_load_pdptrs(v);
         }
 
-        __vmwrite(GUEST_CR3, v->arch.hvm_vcpu.hw_cr[3]);
+        __vmwrite(GUEST_CR3, v->arch.hvm.hw_cr[3]);
 
         if ( !(flags & HVM_UPDATE_GUEST_CR3_NOFLUSH) )
             hvm_asid_flush_vcpu(v);
@@ -1645,7 +1645,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
 
 static void vmx_update_guest_efer(struct vcpu *v)
 {
-    unsigned long entry_ctls, guest_efer = v->arch.hvm_vcpu.guest_efer,
+    unsigned long entry_ctls, guest_efer = v->arch.hvm.guest_efer,
         xen_efer = read_efer();
 
     if ( paging_mode_shadow(v->domain) )
@@ -1714,7 +1714,7 @@ static void vmx_update_guest_efer(struct vcpu *v)
      * If the guests virtualised view of MSR_EFER matches the value loaded
      * into hardware, clear the read intercept to avoid unnecessary VMExits.
      */
-    if ( guest_efer == v->arch.hvm_vcpu.guest_efer )
+    if ( guest_efer == v->arch.hvm.guest_efer )
         vmx_clear_msr_intercept(v, MSR_EFER, VMX_MSR_R);
     else
         vmx_set_msr_intercept(v, MSR_EFER, VMX_MSR_R);
@@ -1863,7 +1863,7 @@ static void vmx_inject_event(const struct x86_event *event)
 
     case TRAP_page_fault:
         ASSERT(_event.type == X86_EVENTTYPE_HW_EXCEPTION);
-        curr->arch.hvm_vcpu.guest_cr[2] = _event.cr2;
+        curr->arch.hvm.guest_cr[2] = _event.cr2;
         break;
     }
 
@@ -1901,7 +1901,7 @@ static void vmx_inject_event(const struct x86_event *event)
     if ( (_event.vector == TRAP_page_fault) &&
          (_event.type == X86_EVENTTYPE_HW_EXCEPTION) )
         HVMTRACE_LONG_2D(PF_INJECT, _event.error_code,
-                         TRC_PAR_LONG(curr->arch.hvm_vcpu.guest_cr[2]));
+                         TRC_PAR_LONG(curr->arch.hvm.guest_cr[2]));
     else
         HVMTRACE_2D(INJ_EXC, _event.vector, _event.error_code);
 }
@@ -2549,10 +2549,10 @@ static void vmx_fpu_dirty_intercept(void)
     vmx_fpu_enter(curr);
 
     /* Disable TS in guest CR0 unless the guest wants the exception too. */
-    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_TS) )
+    if ( !(curr->arch.hvm.guest_cr[0] & X86_CR0_TS) )
     {
-        curr->arch.hvm_vcpu.hw_cr[0] &= ~X86_CR0_TS;
-        __vmwrite(GUEST_CR0, curr->arch.hvm_vcpu.hw_cr[0]);
+        curr->arch.hvm.hw_cr[0] &= ~X86_CR0_TS;
+        __vmwrite(GUEST_CR0, curr->arch.hvm.hw_cr[0]);
     }
 }
 
@@ -2586,7 +2586,7 @@ static void vmx_dr_access(unsigned long exit_qualification,
 
     HVMTRACE_0D(DR_WRITE);
 
-    if ( !v->arch.hvm_vcpu.flag_dr_dirty )
+    if ( !v->arch.hvm.flag_dr_dirty )
         __restore_debug_registers(v);
 
     /* Allow guest direct access to DR registers */
@@ -2633,7 +2633,7 @@ static int vmx_cr_access(cr_access_qual_t qual)
 
     case VMX_CR_ACCESS_TYPE_CLTS:
     {
-        unsigned long old = curr->arch.hvm_vcpu.guest_cr[0];
+        unsigned long old = curr->arch.hvm.guest_cr[0];
         unsigned long value = old & ~X86_CR0_TS;
 
         /*
@@ -2642,7 +2642,7 @@ static int vmx_cr_access(cr_access_qual_t qual)
          * return value is ignored for now.
          */
         hvm_monitor_crX(CR0, value, old);
-        curr->arch.hvm_vcpu.guest_cr[0] = value;
+        curr->arch.hvm.guest_cr[0] = value;
         vmx_update_guest_cr(curr, 0, 0);
         HVMTRACE_0D(CLTS);
         break;
@@ -2650,7 +2650,7 @@ static int vmx_cr_access(cr_access_qual_t qual)
 
     case VMX_CR_ACCESS_TYPE_LMSW:
     {
-        unsigned long value = curr->arch.hvm_vcpu.guest_cr[0];
+        unsigned long value = curr->arch.hvm.guest_cr[0];
         int rc;
 
         /* LMSW can (1) set PE; (2) set or clear MP, EM, and TS. */
@@ -3617,14 +3617,14 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
          * Xen allows the guest to modify some CR4 bits directly, update cached
          * values to match.
          */
-        __vmread(GUEST_CR4, &v->arch.hvm_vcpu.hw_cr[4]);
-        v->arch.hvm_vcpu.guest_cr[4] &= v->arch.hvm_vmx.cr4_host_mask;
-        v->arch.hvm_vcpu.guest_cr[4] |= v->arch.hvm_vcpu.hw_cr[4] &
-                                        ~v->arch.hvm_vmx.cr4_host_mask;
+        __vmread(GUEST_CR4, &v->arch.hvm.hw_cr[4]);
+        v->arch.hvm.guest_cr[4] &= v->arch.hvm_vmx.cr4_host_mask;
+        v->arch.hvm.guest_cr[4] |= (v->arch.hvm.hw_cr[4] &
+                                    ~v->arch.hvm_vmx.cr4_host_mask);
 
-        __vmread(GUEST_CR3, &v->arch.hvm_vcpu.hw_cr[3]);
+        __vmread(GUEST_CR3, &v->arch.hvm.hw_cr[3]);
         if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
-            v->arch.hvm_vcpu.guest_cr[3] = v->arch.hvm_vcpu.hw_cr[3];
+            v->arch.hvm.guest_cr[3] = v->arch.hvm.hw_cr[3];
     }
 
     __vmread(VM_EXIT_REASON, &exit_reason);
@@ -4167,7 +4167,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
-        if ( v->arch.hvm_vcpu.single_step )
+        if ( v->arch.hvm.single_step )
         {
             hvm_monitor_debug(regs->rip,
                               HVM_MONITOR_SINGLESTEP_BREAKPOINT,
@@ -4338,7 +4338,7 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
     if ( nestedhvm_vcpu_in_guestmode(curr) )
         p_asid = &vcpu_nestedhvm(curr).nv_n2asid;
     else
-        p_asid = &curr->arch.hvm_vcpu.n1asid;
+        p_asid = &curr->arch.hvm.n1asid;
 
     old_asid = p_asid->asid;
     need_flush = hvm_asid_handle_vmenter(p_asid);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index b7d9a1a..5cdea47 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -383,8 +383,8 @@ static int vmx_inst_check_privilege(struct cpu_user_regs *regs, int vmxop_check)
 
     if ( vmxop_check )
     {
-        if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) ||
-             !(v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) )
+        if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_PE) ||
+             !(v->arch.hvm.guest_cr[4] & X86_CR4_VMXE) )
             goto invalid_op;
     }
     else if ( !nvmx_vcpu_in_vmx(v) )
@@ -1082,7 +1082,7 @@ static void load_shadow_guest_state(struct vcpu *v)
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
     }
 
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset, 0);
 
     vvmcs_to_shadow_bulk(v, ARRAY_SIZE(vmentry_fields), vmentry_fields);
 
@@ -1170,7 +1170,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
      * hvm_set_efer won't work if CR0.PG = 1, so we change the value
      * directly to make hvm_long_mode_active(v) work in L2.
      * An additional update_paging_modes is also needed if
-     * there is 32/64 switch. v->arch.hvm_vcpu.guest_efer doesn't
+     * there is 32/64 switch. v->arch.hvm.guest_efer doesn't
      * need to be saved, since its value on vmexit is determined by
      * L1 exit_controls
      */
@@ -1178,9 +1178,9 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     lm_l2 = !!(get_vvmcs(v, VM_ENTRY_CONTROLS) & VM_ENTRY_IA32E_MODE);
 
     if ( lm_l2 )
-        v->arch.hvm_vcpu.guest_efer |= EFER_LMA | EFER_LME;
+        v->arch.hvm.guest_efer |= EFER_LMA | EFER_LME;
     else
-        v->arch.hvm_vcpu.guest_efer &= ~(EFER_LMA | EFER_LME);
+        v->arch.hvm.guest_efer &= ~(EFER_LMA | EFER_LME);
 
     load_shadow_control(v);
     load_shadow_guest_state(v);
@@ -1189,7 +1189,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
         paging_update_paging_modes(v);
 
     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+         !(v->arch.hvm.guest_efer & EFER_LMA) )
         vvmcs_to_shadow_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);
 
     regs->rip = get_vvmcs(v, GUEST_RIP);
@@ -1236,7 +1236,7 @@ static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
 
     if ( v->arch.hvm_vmx.cr4_host_mask != ~0UL )
         /* Only need to update nested GUEST_CR4 if not all bits are trapped. */
-        set_vvmcs(v, GUEST_CR4, v->arch.hvm_vcpu.guest_cr[4]);
+        set_vvmcs(v, GUEST_CR4, v->arch.hvm.guest_cr[4]);
 }
 
 static void sync_vvmcs_ro(struct vcpu *v)
@@ -1288,7 +1288,7 @@ static void load_vvmcs_host_state(struct vcpu *v)
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
     }
 
-    hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset, 0);
+    hvm_set_tsc_offset(v, v->arch.hvm.cache_tsc_offset, 0);
 
     set_vvmcs(v, VM_ENTRY_INTR_INFO, 0);
 }
@@ -1369,7 +1369,7 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     sync_exception_state(v);
 
     if ( nvmx_ept_enabled(v) && hvm_pae_enabled(v) &&
-         !(v->arch.hvm_vcpu.guest_efer & EFER_LMA) )
+         !(v->arch.hvm.guest_efer & EFER_LMA) )
         shadow_to_vvmcs_bulk(v, ARRAY_SIZE(gpdpte_fields), gpdpte_fields);
 
     /* This will clear current pCPU bit in p2m->dirty_cpumask */
@@ -1385,9 +1385,9 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     lm_l1 = !!(get_vvmcs(v, VM_EXIT_CONTROLS) & VM_EXIT_IA32E_MODE);
 
     if ( lm_l1 )
-        v->arch.hvm_vcpu.guest_efer |= EFER_LMA | EFER_LME;
+        v->arch.hvm.guest_efer |= EFER_LMA | EFER_LME;
     else
-        v->arch.hvm_vcpu.guest_efer &= ~(EFER_LMA | EFER_LME);
+        v->arch.hvm.guest_efer &= ~(EFER_LMA | EFER_LME);
 
     vmx_update_cpu_exec_control(v);
     vmx_update_secondary_exec_control(v);
@@ -2438,7 +2438,7 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_DR_ACCESS:
         ctrl = __n2_exec_control(v);
         if ( (ctrl & CPU_BASED_MOV_DR_EXITING) &&
-            v->arch.hvm_vcpu.flag_dr_dirty )
+            v->arch.hvm.flag_dr_dirty )
             nvcpu->nv_vmexit_pending = 1;
         break;
     case EXIT_REASON_INVLPG:
@@ -2620,13 +2620,13 @@ void nvmx_set_cr_read_shadow(struct vcpu *v, unsigned int cr)
          * hardware. It consists of the L2-owned bits from the new
          * value combined with the L1-owned bits from L1's guest cr.
          */
-        v->arch.hvm_vcpu.guest_cr[cr] &= ~virtual_cr_mask;
-        v->arch.hvm_vcpu.guest_cr[cr] |= virtual_cr_mask &
+        v->arch.hvm.guest_cr[cr] &= ~virtual_cr_mask;
+        v->arch.hvm.guest_cr[cr] |= virtual_cr_mask &
             get_vvmcs(v, cr_field);
     }
 
     /* nvcpu.guest_cr is what L2 write to cr actually. */
-    __vmwrite(read_shadow_field, v->arch.hvm_vcpu.nvcpu.guest_cr[cr]);
+    __vmwrite(read_shadow_field, v->arch.hvm.nvcpu.guest_cr[cr]);
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 7b57017..ecd25d7 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -55,7 +55,7 @@ uint64_t hvm_get_guest_time_fixed(const struct vcpu *v, uint64_t at_tsc)
     }
     spin_unlock(&pl->pl_time_lock);
 
-    return now + v->arch.hvm_vcpu.stime_offset;
+    return now + v->arch.hvm.stime_offset;
 }
 
 void hvm_set_guest_time(struct vcpu *v, u64 guest_time)
@@ -64,9 +64,9 @@ void hvm_set_guest_time(struct vcpu *v, u64 guest_time)
 
     if ( offset )
     {
-        v->arch.hvm_vcpu.stime_offset += offset;
+        v->arch.hvm.stime_offset += offset;
         /*
-         * If hvm_vcpu.stime_offset is updated make sure to
+         * If hvm.stime_offset is updated make sure to
          * also update vcpu time, since this value is used to
          * calculate the TSC.
          */
@@ -159,16 +159,16 @@ static void pt_lock(struct periodic_time *pt)
     for ( ; ; )
     {
         v = pt->vcpu;
-        spin_lock(&v->arch.hvm_vcpu.tm_lock);
+        spin_lock(&v->arch.hvm.tm_lock);
         if ( likely(pt->vcpu == v) )
             break;
-        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+        spin_unlock(&v->arch.hvm.tm_lock);
     }
 }
 
 static void pt_unlock(struct periodic_time *pt)
 {
-    spin_unlock(&pt->vcpu->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&pt->vcpu->arch.hvm.tm_lock);
 }
 
 static void pt_process_missed_ticks(struct periodic_time *pt)
@@ -195,7 +195,7 @@ static void pt_freeze_time(struct vcpu *v)
     if ( !mode_is(v->domain, delay_for_missed_ticks) )
         return;
 
-    v->arch.hvm_vcpu.guest_time = hvm_get_guest_time(v);
+    v->arch.hvm.guest_time = hvm_get_guest_time(v);
 }
 
 static void pt_thaw_time(struct vcpu *v)
@@ -203,22 +203,22 @@ static void pt_thaw_time(struct vcpu *v)
     if ( !mode_is(v->domain, delay_for_missed_ticks) )
         return;
 
-    if ( v->arch.hvm_vcpu.guest_time == 0 )
+    if ( v->arch.hvm.guest_time == 0 )
         return;
 
-    hvm_set_guest_time(v, v->arch.hvm_vcpu.guest_time);
-    v->arch.hvm_vcpu.guest_time = 0;
+    hvm_set_guest_time(v, v->arch.hvm.guest_time);
+    v->arch.hvm.guest_time = 0;
 }
 
 void pt_save_timer(struct vcpu *v)
 {
-    struct list_head *head = &v->arch.hvm_vcpu.tm_list;
+    struct list_head *head = &v->arch.hvm.tm_list;
     struct periodic_time *pt;
 
     if ( v->pause_flags & VPF_blocked )
         return;
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     list_for_each_entry ( pt, head, list )
         if ( !pt->do_not_freeze )
@@ -226,15 +226,15 @@ void pt_save_timer(struct vcpu *v)
 
     pt_freeze_time(v);
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 }
 
 void pt_restore_timer(struct vcpu *v)
 {
-    struct list_head *head = &v->arch.hvm_vcpu.tm_list;
+    struct list_head *head = &v->arch.hvm.tm_list;
     struct periodic_time *pt;
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     list_for_each_entry ( pt, head, list )
     {
@@ -247,7 +247,7 @@ void pt_restore_timer(struct vcpu *v)
 
     pt_thaw_time(v);
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 }
 
 static void pt_timer_fn(void *data)
@@ -302,13 +302,13 @@ static void pt_irq_fired(struct vcpu *v, struct periodic_time *pt)
 
 int pt_update_irq(struct vcpu *v)
 {
-    struct list_head *head = &v->arch.hvm_vcpu.tm_list;
+    struct list_head *head = &v->arch.hvm.tm_list;
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, pt_vector = -1;
     bool level;
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     earliest_pt = NULL;
     max_lag = -1ULL;
@@ -338,7 +338,7 @@ int pt_update_irq(struct vcpu *v)
 
     if ( earliest_pt == NULL )
     {
-        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+        spin_unlock(&v->arch.hvm.tm_lock);
         return -1;
     }
 
@@ -346,7 +346,7 @@ int pt_update_irq(struct vcpu *v)
     irq = earliest_pt->irq;
     level = earliest_pt->level;
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 
     switch ( earliest_pt->source )
     {
@@ -393,9 +393,9 @@ int pt_update_irq(struct vcpu *v)
                 time_cb *cb = NULL;
                 void *cb_priv;
 
-                spin_lock(&v->arch.hvm_vcpu.tm_lock);
+                spin_lock(&v->arch.hvm.tm_lock);
                 /* Make sure the timer is still on the list. */
-                list_for_each_entry ( pt, &v->arch.hvm_vcpu.tm_list, list )
+                list_for_each_entry ( pt, &v->arch.hvm.tm_list, list )
                     if ( pt == earliest_pt )
                     {
                         pt_irq_fired(v, pt);
@@ -403,7 +403,7 @@ int pt_update_irq(struct vcpu *v)
                         cb_priv = pt->priv;
                         break;
                     }
-                spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+                spin_unlock(&v->arch.hvm.tm_lock);
 
                 if ( cb != NULL )
                     cb(v, cb_priv);
@@ -418,7 +418,7 @@ int pt_update_irq(struct vcpu *v)
 static struct periodic_time *is_pt_irq(
     struct vcpu *v, struct hvm_intack intack)
 {
-    struct list_head *head = &v->arch.hvm_vcpu.tm_list;
+    struct list_head *head = &v->arch.hvm.tm_list;
     struct periodic_time *pt;
 
     list_for_each_entry ( pt, head, list )
@@ -440,12 +440,12 @@ void pt_intr_post(struct vcpu *v, struct hvm_intack intack)
     if ( intack.source == hvm_intsrc_vector )
         return;
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     pt = is_pt_irq(v, intack);
     if ( pt == NULL )
     {
-        spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+        spin_unlock(&v->arch.hvm.tm_lock);
         return;
     }
 
@@ -454,7 +454,7 @@ void pt_intr_post(struct vcpu *v, struct hvm_intack intack)
     cb = pt->cb;
     cb_priv = pt->priv;
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 
     if ( cb != NULL )
         cb(v, cb_priv);
@@ -462,15 +462,15 @@ void pt_intr_post(struct vcpu *v, struct hvm_intack intack)
 
 void pt_migrate(struct vcpu *v)
 {
-    struct list_head *head = &v->arch.hvm_vcpu.tm_list;
+    struct list_head *head = &v->arch.hvm.tm_list;
     struct periodic_time *pt;
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     list_for_each_entry ( pt, head, list )
         migrate_timer(&pt->timer, v->processor);
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 }
 
 void create_periodic_time(
@@ -489,7 +489,7 @@ void create_periodic_time(
 
     destroy_periodic_time(pt);
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
 
     pt->pending_intr_nr = 0;
     pt->do_not_freeze = 0;
@@ -534,12 +534,12 @@ void create_periodic_time(
     pt->priv = data;
 
     pt->on_list = 1;
-    list_add(&pt->list, &v->arch.hvm_vcpu.tm_list);
+    list_add(&pt->list, &v->arch.hvm.tm_list);
 
     init_timer(&pt->timer, pt_timer_fn, pt, v->processor);
     set_timer(&pt->timer, pt->scheduled);
 
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 }
 
 void destroy_periodic_time(struct periodic_time *pt)
@@ -578,16 +578,16 @@ static void pt_adjust_vcpu(struct periodic_time *pt, struct vcpu *v)
     pt->on_list = 0;
     pt_unlock(pt);
 
-    spin_lock(&v->arch.hvm_vcpu.tm_lock);
+    spin_lock(&v->arch.hvm.tm_lock);
     pt->vcpu = v;
     if ( on_list )
     {
         pt->on_list = 1;
-        list_add(&pt->list, &v->arch.hvm_vcpu.tm_list);
+        list_add(&pt->list, &v->arch.hvm.tm_list);
 
         migrate_timer(&pt->timer, v->processor);
     }
-    spin_unlock(&v->arch.hvm_vcpu.tm_lock);
+    spin_unlock(&v->arch.hvm.tm_lock);
 }
 
 void pt_adjust_global_vcpu_target(struct vcpu *v)
@@ -627,7 +627,7 @@ static void pt_resume(struct periodic_time *pt)
     if ( pt->pending_intr_nr && !pt->on_list )
     {
         pt->on_list = 1;
-        list_add(&pt->list, &pt->vcpu->arch.hvm_vcpu.tm_list);
+        list_add(&pt->list, &pt->vcpu->arch.hvm.tm_list);
         vcpu_kick(pt->vcpu);
     }
     pt_unlock(pt);
diff --git a/xen/arch/x86/mm/hap/guest_walk.c b/xen/arch/x86/mm/hap/guest_walk.c
index cb3f9ce..3b8ee2e 100644
--- a/xen/arch/x86/mm/hap/guest_walk.c
+++ b/xen/arch/x86/mm/hap/guest_walk.c
@@ -42,7 +42,7 @@ asm(".file \"" __OBJECT_FILE__ "\"");
 unsigned long hap_gva_to_gfn(GUEST_PAGING_LEVELS)(
     struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
 {
-    unsigned long cr3 = v->arch.hvm_vcpu.guest_cr[3];
+    unsigned long cr3 = v->arch.hvm.guest_cr[3];
     return hap_p2m_ga_to_gfn(GUEST_PAGING_LEVELS)(v, p2m, cr3, gva, pfec, NULL);
 }
 
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index fe10e9d..6031361 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -664,7 +664,7 @@ static bool_t hap_invlpg(struct vcpu *v, unsigned long va)
 
 static void hap_update_cr3(struct vcpu *v, int do_locking, bool noflush)
 {
-    v->arch.hvm_vcpu.hw_cr[3] = v->arch.hvm_vcpu.guest_cr[3];
+    v->arch.hvm.hw_cr[3] = v->arch.hvm.guest_cr[3];
     hvm_update_guest_cr3(v, noflush);
 }
 
@@ -680,7 +680,7 @@ hap_paging_get_mode(struct vcpu *v)
 static void hap_update_paging_modes(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    unsigned long cr3_gfn = v->arch.hvm_vcpu.guest_cr[3] >> PAGE_SHIFT;
+    unsigned long cr3_gfn = v->arch.hvm.guest_cr[3] >> PAGE_SHIFT;
     p2m_type_t t;
 
     /* We hold onto the cr3 as it may be modified later, and
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 62819eb..2dac8d1 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4070,7 +4070,7 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
 
      ASSERT(shadow_mode_external(d));
      /* Find where in the page the l3 table is */
-     guest_idx = guest_index((void *)v->arch.hvm_vcpu.guest_cr[3]);
+     guest_idx = guest_index((void *)v->arch.hvm.guest_cr[3]);
 
      // Ignore the low 2 bits of guest_idx -- they are really just
      // cache control.
@@ -4208,19 +4208,17 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
 
 
     ///
-    /// v->arch.hvm_vcpu.hw_cr[3]
+    /// v->arch.hvm.hw_cr[3]
     ///
     if ( shadow_mode_external(d) )
     {
         ASSERT(is_hvm_domain(d));
 #if SHADOW_PAGING_LEVELS == 3
         /* 2-on-3 or 3-on-3: Use the PAE shadow l3 table we just fabricated */
-        v->arch.hvm_vcpu.hw_cr[3] =
-            virt_to_maddr(&v->arch.paging.shadow.l3table);
+        v->arch.hvm.hw_cr[3] = virt_to_maddr(&v->arch.paging.shadow.l3table);
 #else
         /* 4-on-4: Just use the shadow top-level directly */
-        v->arch.hvm_vcpu.hw_cr[3] =
-            pagetable_get_paddr(v->arch.shadow_table[0]);
+        v->arch.hvm.hw_cr[3] = pagetable_get_paddr(v->arch.shadow_table[0]);
 #endif
         hvm_update_guest_cr3(v, noflush);
     }
@@ -4543,7 +4541,7 @@ static void sh_pagetable_dying(struct vcpu *v, paddr_t gpa)
     unsigned long l3gfn;
     mfn_t l3mfn;
 
-    gcr3 = (v->arch.hvm_vcpu.guest_cr[3]);
+    gcr3 = v->arch.hvm.guest_cr[3];
     /* fast path: the pagetable belongs to the current context */
     if ( gcr3 == gpa )
         fast_path = 1;
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 5922fbf..e964e60 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1041,7 +1041,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
         {
             struct pl_time *pl = v->domain->arch.hvm.pl_time;
 
-            stime += pl->stime_offset + v->arch.hvm_vcpu.stime_offset;
+            stime += pl->stime_offset + v->arch.hvm.stime_offset;
             if ( stime >= 0 )
                 tsc_stamp = gtime_to_gtsc(d, stime);
             else
@@ -1081,7 +1081,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
         _u.flags |= XEN_PVCLOCK_TSC_STABLE_BIT;
 
     if ( is_hvm_domain(d) )
-        _u.tsc_timestamp += v->arch.hvm_vcpu.cache_tsc_offset;
+        _u.tsc_timestamp += v->arch.hvm.cache_tsc_offset;
 
     /* Don't bother unless timestamp record has changed or we are forced. */
     _u.version = u->version; /* make versions match for memcmp test */
@@ -2199,7 +2199,7 @@ void tsc_set_info(struct domain *d,
              */
             d->arch.hvm.sync_tsc = rdtsc();
             hvm_set_tsc_offset(d->vcpu[0],
-                               d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset,
+                               d->vcpu[0]->arch.hvm.cache_tsc_offset,
                                d->arch.hvm.sync_tsc);
         }
     }
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 26524c4..555804c 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -95,12 +95,12 @@ void __dummy__(void)
     OFFSET(VCPU_vmx_realmode, struct vcpu, arch.hvm_vmx.vmx_realmode);
     OFFSET(VCPU_vmx_emulate, struct vcpu, arch.hvm_vmx.vmx_emulate);
     OFFSET(VCPU_vm86_seg_mask, struct vcpu, arch.hvm_vmx.vm86_segment_mask);
-    OFFSET(VCPU_hvm_guest_cr2, struct vcpu, arch.hvm_vcpu.guest_cr[2]);
+    OFFSET(VCPU_hvm_guest_cr2, struct vcpu, arch.hvm.guest_cr[2]);
     BLANK();
 
-    OFFSET(VCPU_nhvm_guestmode, struct vcpu, arch.hvm_vcpu.nvcpu.nv_guestmode);
-    OFFSET(VCPU_nhvm_p2m, struct vcpu, arch.hvm_vcpu.nvcpu.nv_p2m);
-    OFFSET(VCPU_nsvm_hap_enabled, struct vcpu, arch.hvm_vcpu.nvcpu.u.nsvm.ns_hap_enabled);
+    OFFSET(VCPU_nhvm_guestmode, struct vcpu, arch.hvm.nvcpu.nv_guestmode);
+    OFFSET(VCPU_nhvm_p2m, struct vcpu, arch.hvm.nvcpu.nv_p2m);
+    OFFSET(VCPU_nsvm_hap_enabled, struct vcpu, arch.hvm.nvcpu.u.nsvm.ns_hap_enabled);
     BLANK();
 
     OFFSET(DOMAIN_is_32bit_pv, struct domain, arch.is_32bit_pv);
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index 606b1b0..c423bc0 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -104,10 +104,10 @@ void show_registers(const struct cpu_user_regs *regs)
     {
         struct segment_register sreg;
         context = CTXT_hvm_guest;
-        fault_crs[0] = v->arch.hvm_vcpu.guest_cr[0];
-        fault_crs[2] = v->arch.hvm_vcpu.guest_cr[2];
-        fault_crs[3] = v->arch.hvm_vcpu.guest_cr[3];
-        fault_crs[4] = v->arch.hvm_vcpu.guest_cr[4];
+        fault_crs[0] = v->arch.hvm.guest_cr[0];
+        fault_crs[2] = v->arch.hvm.guest_cr[2];
+        fault_crs[3] = v->arch.hvm.guest_cr[3];
+        fault_crs[4] = v->arch.hvm.guest_cr[4];
         hvm_get_segment_register(v, x86_seg_cs, &sreg);
         fault_regs.cs = sreg.sel;
         hvm_get_segment_register(v, x86_seg_ds, &sreg);
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4722c2d..0ea3742 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -541,7 +541,7 @@ struct arch_vcpu
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv;
-        struct hvm_vcpu hvm_vcpu;
+        struct hvm_vcpu hvm;
     };
 
     pagetable_t guest_table_user;       /* (MFN) x86/64 user-space pagetable */
@@ -605,8 +605,8 @@ void update_guest_memory_policy(struct vcpu *v,
                                 struct guest_memory_policy *policy);
 
 /* Shorthands to improve code legibility. */
-#define hvm_vmx         hvm_vcpu.u.vmx
-#define hvm_svm         hvm_vcpu.u.svm
+#define hvm_vmx         hvm.u.vmx
+#define hvm_svm         hvm.u.svm
 
 bool update_runstate_area(struct vcpu *);
 bool update_secondary_system_time(struct vcpu *,
diff --git a/xen/include/asm-x86/guest_pt.h b/xen/include/asm-x86/guest_pt.h
index 08031c8..8684b83 100644
--- a/xen/include/asm-x86/guest_pt.h
+++ b/xen/include/asm-x86/guest_pt.h
@@ -215,7 +215,7 @@ static inline bool guest_can_use_l2_superpages(const struct vcpu *v)
     return (is_pv_vcpu(v) ||
             GUEST_PAGING_LEVELS != 2 ||
             !hvm_paging_enabled(v) ||
-            (v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PSE));
+            (v->arch.hvm.guest_cr[4] & X86_CR4_PSE));
 }
 
 static inline bool guest_can_use_l3_superpages(const struct domain *d)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index ac0f035..132e62b 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -285,27 +285,27 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *);
 int hvm_girq_dest_2_vcpu_id(struct domain *d, uint8_t dest, uint8_t dest_mode);
 
 #define hvm_paging_enabled(v) \
-    (!!((v)->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PG))
+    (!!((v)->arch.hvm.guest_cr[0] & X86_CR0_PG))
 #define hvm_wp_enabled(v) \
-    (!!((v)->arch.hvm_vcpu.guest_cr[0] & X86_CR0_WP))
+    (!!((v)->arch.hvm.guest_cr[0] & X86_CR0_WP))
 #define hvm_pcid_enabled(v) \
-    (!!((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PCIDE))
+    (!!((v)->arch.hvm.guest_cr[4] & X86_CR4_PCIDE))
 #define hvm_pae_enabled(v) \
-    (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PAE))
+    (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_PAE))
 #define hvm_smep_enabled(v) \
-    (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_SMEP))
+    (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMEP))
 #define hvm_smap_enabled(v) \
-    (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_SMAP))
+    (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_SMAP))
 #define hvm_nx_enabled(v) \
-    ((v)->arch.hvm_vcpu.guest_efer & EFER_NX)
+    ((v)->arch.hvm.guest_efer & EFER_NX)
 #define hvm_pku_enabled(v) \
-    (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PKE))
+    (hvm_paging_enabled(v) && ((v)->arch.hvm.guest_cr[4] & X86_CR4_PKE))
 
 /* Can we use superpages in the HAP p2m table? */
 #define hap_has_1gb (!!(hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_1GB))
 #define hap_has_2mb (!!(hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_2MB))
 
-#define hvm_long_mode_active(v) (!!((v)->arch.hvm_vcpu.guest_efer & EFER_LMA))
+#define hvm_long_mode_active(v) (!!((v)->arch.hvm.guest_efer & EFER_LMA))
 
 enum hvm_intblk
 hvm_interrupt_blocked(struct vcpu *v, struct hvm_intack intack);
@@ -548,7 +548,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
 #define hvm_msr_tsc_aux(v) ({                                               \
     struct domain *__d = (v)->domain;                                       \
     (__d->arch.tsc_mode == TSC_MODE_PVRDTSCP)                               \
-        ? (u32)__d->arch.incarnation : (u32)(v)->arch.hvm_vcpu.msr_tsc_aux; \
+        ? (u32)__d->arch.incarnation : (u32)(v)->arch.hvm.msr_tsc_aux;      \
 })
 
 int hvm_x2apic_msr_read(struct vcpu *v, unsigned int msr, uint64_t *msr_content);
diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
index 4a041e2..9d1c274 100644
--- a/xen/include/asm-x86/hvm/nestedhvm.h
+++ b/xen/include/asm-x86/hvm/nestedhvm.h
@@ -89,7 +89,7 @@ static inline void nestedhvm_set_cr(struct vcpu *v, unsigned int cr,
 {
     if ( !nestedhvm_vmswitch_in_progress(v) &&
          nestedhvm_vcpu_in_guestmode(v) )
-        v->arch.hvm_vcpu.nvcpu.guest_cr[cr] = value;
+        v->arch.hvm.nvcpu.guest_cr[cr] = value;
 }
 
 #endif /* _HVM_NESTEDHVM_H */
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index abcf2e7..31fb4bf 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -94,7 +94,7 @@ struct nestedsvm {
 
 /* True when l1 guest enabled SVM in EFER */
 #define nsvm_efer_svm_enabled(v) \
-    (!!((v)->arch.hvm_vcpu.guest_efer & EFER_SVME))
+    (!!((v)->arch.hvm.guest_efer & EFER_SVME))
 
 int nestedsvm_vmcb_map(struct vcpu *v, uint64_t vmcbaddr);
 void nestedsvm_vmexit_defer(struct vcpu *v,
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 86b4ee2..54ea044 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -135,14 +135,14 @@ struct nestedvcpu {
     unsigned long       guest_cr[5];
 };
 
-#define vcpu_nestedhvm(v) ((v)->arch.hvm_vcpu.nvcpu)
+#define vcpu_nestedhvm(v) ((v)->arch.hvm.nvcpu)
 
 struct altp2mvcpu {
     uint16_t    p2midx;         /* alternate p2m index */
     gfn_t       veinfo_gfn;     /* #VE information page gfn */
 };
 
-#define vcpu_altp2m(v) ((v)->arch.hvm_vcpu.avcpu)
+#define vcpu_altp2m(v) ((v)->arch.hvm.avcpu)
 
 struct hvm_vcpu {
     /* Guest control-register and EFER values, just as the guest sees them. */
diff --git a/xen/include/asm-x86/hvm/vlapic.h b/xen/include/asm-x86/hvm/vlapic.h
index 212c36b..8dbec90 100644
--- a/xen/include/asm-x86/hvm/vlapic.h
+++ b/xen/include/asm-x86/hvm/vlapic.h
@@ -25,10 +25,10 @@
 #include <public/hvm/ioreq.h>
 #include <asm/hvm/vpt.h>
 
-#define vcpu_vlapic(x)   (&(x)->arch.hvm_vcpu.vlapic)
-#define vlapic_vcpu(x)   (container_of((x), struct vcpu, arch.hvm_vcpu.vlapic))
+#define vcpu_vlapic(x)   (&(x)->arch.hvm.vlapic)
+#define vlapic_vcpu(x)   (container_of((x), struct vcpu, arch.hvm.vlapic))
 #define const_vlapic_vcpu(x) (container_of((x), const struct vcpu, \
-                              arch.hvm_vcpu.vlapic))
+                              arch.hvm.vlapic))
 #define vlapic_domain(x) (vlapic_vcpu(x)->domain)
 
 #define _VLAPIC_ID(vlapic, id) (vlapic_x2apic_mode(vlapic) \
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 89619e4..23de869 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -552,7 +552,7 @@ static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
         type = INVVPID_ALL_CONTEXT;
 
 execute_invvpid:
-    __invvpid(type, v->arch.hvm_vcpu.n1asid.asid, (u64)gva);
+    __invvpid(type, v->arch.hvm.n1asid.asid, (u64)gva);
 }
 
 static inline void vpid_sync_all(void)
-- 
2.1.4



* [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
                     ` (2 more replies)
  2018-08-28 17:39 ` [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu Andrew Cooper
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
  6 siblings, 3 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

The suffix and prefix are redundant, and the name is an outlier.  Rename it
to vmx_vcpu to be consistent with all the other similar structures.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

Some of the local pointers are named arch_vmx.  I'm open to renaming them to
just vmx (like all the other local pointers) if people are happy with the
additional patch delta.
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 14 +++++++-------
 xen/arch/x86/hvm/vmx/vmx.c         |  4 ++--
 xen/include/asm-x86/hvm/vcpu.h     |  2 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h |  2 +-
 4 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 5e4a6b1..bfa3e77 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -518,7 +518,7 @@ static void vmx_free_vmcs(paddr_t pa)
 static void __vmx_clear_vmcs(void *info)
 {
     struct vcpu *v = info;
-    struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
 
     /* Otherwise we can nest (vmx_cpu_down() vs. vmx_clear_vmcs()). */
     ASSERT(!local_irq_is_enabled());
@@ -901,7 +901,7 @@ bool vmx_msr_is_intercepted(struct vmx_msr_bitmap *msr_bitmap,
  */
 void vmx_vmcs_switch(paddr_t from, paddr_t to)
 {
-    struct arch_vmx_struct *vmx = &current->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &current->arch.hvm_vmx;
     spin_lock(&vmx->vmcs_lock);
 
     __vmpclear(from);
@@ -1308,7 +1308,7 @@ static struct vmx_msr_entry *locate_msr_entry(
 struct vmx_msr_entry *vmx_find_msr(const struct vcpu *v, uint32_t msr,
                                    enum vmx_msr_list_type type)
 {
-    const struct arch_vmx_struct *vmx = &v->arch.hvm_vmx;
+    const struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
     struct vmx_msr_entry *start = NULL, *ent, *end;
     unsigned int substart = 0, subend = vmx->msr_save_count;
     unsigned int total = vmx->msr_load_count;
@@ -1349,7 +1349,7 @@ struct vmx_msr_entry *vmx_find_msr(const struct vcpu *v, uint32_t msr,
 int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
                 enum vmx_msr_list_type type)
 {
-    struct arch_vmx_struct *vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
     struct vmx_msr_entry **ptr, *start = NULL, *ent, *end;
     unsigned int substart, subend, total;
     int rc;
@@ -1460,7 +1460,7 @@ int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
 
 int vmx_del_msr(struct vcpu *v, uint32_t msr, enum vmx_msr_list_type type)
 {
-    struct arch_vmx_struct *vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
     struct vmx_msr_entry *start = NULL, *ent, *end;
     unsigned int substart = 0, subend = vmx->msr_save_count;
     unsigned int total = vmx->msr_load_count;
@@ -1743,7 +1743,7 @@ void vmx_domain_update_eptp(struct domain *d)
 
 int vmx_create_vmcs(struct vcpu *v)
 {
-    struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
     int rc;
 
     if ( (arch_vmx->vmcs_pa = vmx_alloc_vmcs()) == 0 )
@@ -1765,7 +1765,7 @@ int vmx_create_vmcs(struct vcpu *v)
 
 void vmx_destroy_vmcs(struct vcpu *v)
 {
-    struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
 
     vmx_clear_vmcs(v);
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 4abd327..95dec46 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -201,7 +201,7 @@ void vmx_pi_desc_fixup(unsigned int cpu)
 {
     unsigned int new_cpu, dest;
     unsigned long flags;
-    struct arch_vmx_struct *vmx, *tmp;
+    struct vmx_vcpu *vmx, *tmp;
     spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
     struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
 
@@ -2356,7 +2356,7 @@ static struct hvm_function_table __initdata vmx_function_table = {
 /* Handle VT-d posted-interrupt when VCPU is blocked. */
 static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
 {
-    struct arch_vmx_struct *vmx, *tmp;
+    struct vmx_vcpu *vmx, *tmp;
     spinlock_t *lock = &per_cpu(vmx_pi_blocking, smp_processor_id()).lock;
     struct list_head *blocked_vcpus =
 		&per_cpu(vmx_pi_blocking, smp_processor_id()).list;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 54ea044..abf78e4 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -176,7 +176,7 @@ struct hvm_vcpu {
     u64                 msr_xss;
 
     union {
-        struct arch_vmx_struct vmx;
+        struct vmx_vcpu vmx;
         struct arch_svm_struct svm;
     } u;
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 46668a7..f964a95 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -100,7 +100,7 @@ struct pi_blocking_vcpu {
     spinlock_t           *lock;
 };
 
-struct arch_vmx_struct {
+struct vmx_vcpu {
     /* Physical address of VMCS. */
     paddr_t              vmcs_pa;
     /* VMCS shadow machine address. */
-- 
2.1.4



* [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
                   ` (4 preceding siblings ...)
  2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-30 14:54   ` Jan Beulich
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
  6 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Wei Liu, Jan Beulich, Andrew Cooper, Suravee Suthikulpanit,
	Boris Ostrovsky, Brian Woods, Roger Pau Monné

The suffix and prefix are redundant, and the name is an outlier.  Rename it
to svm_vcpu to be consistent with all the other similar structures.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Brian Woods <brian.woods@amd.com>

All of the local pointers are named arch_svm.  I'm open to renaming them to
just svm if people are happy with the additional patch delta.
---
 xen/arch/x86/hvm/svm/nestedsvm.c   | 2 +-
 xen/arch/x86/hvm/svm/svm.c         | 4 ++--
 xen/arch/x86/hvm/svm/vmcb.c        | 6 +++---
 xen/include/asm-x86/hvm/svm/vmcb.h | 2 +-
 xen/include/asm-x86/hvm/vcpu.h     | 2 +-
 5 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index a1f840e..9d0fef1 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -350,7 +350,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 
 static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
 {
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct vmcb_struct *ns_vmcb = nv->nv_vvmcx;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 92b29b1..e7e944d 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -663,7 +663,7 @@ static void svm_update_guest_efer(struct vcpu *v)
 
 static void svm_cpuid_policy_changed(struct vcpu *v)
 {
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
     struct vmcb_struct *vmcb = arch_svm->vmcb;
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
@@ -683,7 +683,7 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
 
 static void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
 {
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
 
     if ( new_state == vmcb_needs_vmsave )
     {
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 3776c53..a7630a0 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -53,7 +53,7 @@ void free_vmcb(struct vmcb_struct *vmcb)
 /* This function can directly access fields which are covered by clean bits. */
 static int construct_vmcb(struct vcpu *v)
 {
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
     struct vmcb_struct *vmcb = arch_svm->vmcb;
 
     /* Build-time check of the size of VMCB AMD structure. */
@@ -225,7 +225,7 @@ static int construct_vmcb(struct vcpu *v)
 int svm_create_vmcb(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
     int rc;
 
     if ( (nv->nv_n1vmcx == NULL) &&
@@ -252,7 +252,7 @@ int svm_create_vmcb(struct vcpu *v)
 void svm_destroy_vmcb(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-    struct arch_svm_struct *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
 
     if ( nv->nv_n1vmcx != NULL )
         free_vmcb(nv->nv_n1vmcx);
diff --git a/xen/include/asm-x86/hvm/svm/vmcb.h b/xen/include/asm-x86/hvm/svm/vmcb.h
index f7974da..3a514f8 100644
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -518,7 +518,7 @@ enum vmcb_sync_state {
     vmcb_needs_vmload     /* VMCB dirty (VMLOAD needed)? */
 };
 
-struct arch_svm_struct {
+struct svm_vcpu {
     struct vmcb_struct *vmcb;
     u64    vmcb_pa;
     unsigned long *msrpm;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index abf78e4..c8d0a4e 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -177,7 +177,7 @@ struct hvm_vcpu {
 
     union {
         struct vmx_vcpu vmx;
-        struct arch_svm_struct svm;
+        struct svm_vcpu svm;
     } u;
 
     struct tasklet      assert_evtchn_irq_tasklet;
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
                   ` (5 preceding siblings ...)
  2018-08-28 17:39 ` [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu Andrew Cooper
@ 2018-08-28 17:39 ` Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
                     ` (3 more replies)
  6 siblings, 4 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-28 17:39 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Andrew Cooper,
	Jun Nakajima, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with the
domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
can refer to the correctly-named fields.  This means that the data hierarchy
is no longer obscured from grep/cscope/tags/etc.

Reformat one comment and switch one bool_t to bool while making changes.

No functional change.
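
For illustration, a minimal self-contained sketch of the anonymous-union
layout described above, using simplified stand-in types rather than the real
Xen structures, showing that v->arch.hvm.{vmx,svm} is directly addressable
(and greppable) once the hvm_{vmx,svm} defines are gone:

    #include <stdio.h>

    struct vmx_vcpu { unsigned int exec_control; };
    struct svm_vcpu { void *vmcb; };

    struct hvm_vcpu {
        union {                 /* anonymous (C11): no ".u" in accesses */
            struct vmx_vcpu vmx;
            struct svm_vcpu svm;
        };
    };

    struct arch_vcpu { struct hvm_vcpu hvm; };
    struct vcpu { struct arch_vcpu arch; };

    int main(void)
    {
        struct vcpu v;

        /* The access spells out the real hierarchy, so grep/cscope/tags
         * find it without chasing a #define.  The SVM side is reached the
         * same way, as v.arch.hvm.svm.vmcb. */
        v.arch.hvm.vmx.exec_control = 1;
        printf("exec_control = %u\n", v.arch.hvm.vmx.exec_control);

        return 0;
    }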

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Brian Woods <brian.woods@amd.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 xen/arch/x86/cpuid.c               |   2 +-
 xen/arch/x86/hvm/svm/asid.c        |   2 +-
 xen/arch/x86/hvm/svm/emulate.c     |   4 +-
 xen/arch/x86/hvm/svm/intr.c        |   8 +-
 xen/arch/x86/hvm/svm/nestedsvm.c   |  28 ++--
 xen/arch/x86/hvm/svm/svm.c         | 178 ++++++++++++------------
 xen/arch/x86/hvm/svm/vmcb.c        |   8 +-
 xen/arch/x86/hvm/vmx/intr.c        |  18 +--
 xen/arch/x86/hvm/vmx/realmode.c    |  18 +--
 xen/arch/x86/hvm/vmx/vmcs.c        | 154 ++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c         | 272 ++++++++++++++++++-------------------
 xen/arch/x86/hvm/vmx/vvmx.c        |  64 ++++-----
 xen/arch/x86/mm/p2m-ept.c          |   6 +-
 xen/arch/x86/x86_64/asm-offsets.c  |  12 +-
 xen/drivers/passthrough/io.c       |   2 +-
 xen/include/asm-x86/domain.h       |   4 -
 xen/include/asm-x86/hvm/svm/asid.h |   2 +-
 xen/include/asm-x86/hvm/vcpu.h     |   2 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h |   2 +-
 19 files changed, 390 insertions(+), 396 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 59d3298..d21e745 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1072,7 +1072,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case 0x8000001c:
         if ( (v->arch.xcr0 & X86_XCR0_LWP) && cpu_has_svm )
             /* Turn on available bit and other features specified in lwp_cfg. */
-            res->a = (res->d & v->arch.hvm_svm.guest_lwp_cfg) | 1;
+            res->a = (res->d & v->arch.hvm.svm.guest_lwp_cfg) | 1;
         break;
     }
 }
diff --git a/xen/arch/x86/hvm/svm/asid.c b/xen/arch/x86/hvm/svm/asid.c
index 7cc54da..e554e25 100644
--- a/xen/arch/x86/hvm/svm/asid.c
+++ b/xen/arch/x86/hvm/svm/asid.c
@@ -40,7 +40,7 @@ void svm_asid_init(const struct cpuinfo_x86 *c)
 void svm_asid_handle_vmrun(void)
 {
     struct vcpu *curr = current;
-    struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
     struct hvm_vcpu_asid *p_asid =
         nestedhvm_vcpu_in_guestmode(curr)
         ? &vcpu_nestedhvm(curr).nv_n2asid : &curr->arch.hvm.n1asid;
diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
index 535674e..3d04af0 100644
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -28,7 +28,7 @@
 
 static unsigned long svm_nextrip_insn_length(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( !cpu_has_svm_nrips )
         return 0;
@@ -86,7 +86,7 @@ static const struct {
 int __get_instruction_length_from_list(struct vcpu *v,
         const enum instruction_index *list, unsigned int list_count)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct hvm_emulate_ctxt ctxt;
     struct x86_emulate_state *state;
     unsigned long inst_len, j;
diff --git a/xen/arch/x86/hvm/svm/intr.c b/xen/arch/x86/hvm/svm/intr.c
index 8511ff0..a17ec8c 100644
--- a/xen/arch/x86/hvm/svm/intr.c
+++ b/xen/arch/x86/hvm/svm/intr.c
@@ -40,7 +40,7 @@
 
 static void svm_inject_nmi(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
     eventinj_t event;
 
@@ -62,7 +62,7 @@ static void svm_inject_nmi(struct vcpu *v)
 
 static void svm_inject_extint(struct vcpu *v, int vector)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     eventinj_t event;
 
     event.bytes = 0;
@@ -76,7 +76,7 @@ static void svm_inject_extint(struct vcpu *v, int vector)
 
 static void svm_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     uint32_t general1_intercepts = vmcb_get_general1_intercepts(vmcb);
     vintr_t intr;
 
@@ -133,7 +133,7 @@ static void svm_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
 void svm_intr_assist(void) 
 {
     struct vcpu *v = current;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct hvm_intack intack;
     enum hvm_intblk intblk;
 
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 9d0fef1..3f4f403 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -137,7 +137,7 @@ void nsvm_vcpu_destroy(struct vcpu *v)
      * of l1 vmcb page.
      */
     if (nv->nv_n1vmcx)
-        v->arch.hvm_svm.vmcb = nv->nv_n1vmcx;
+        v->arch.hvm.svm.vmcb = nv->nv_n1vmcx;
 
     if (svm->ns_cached_msrpm) {
         free_xenheap_pages(svm->ns_cached_msrpm,
@@ -272,8 +272,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
      */
 
     /* switch vmcb to l1 guest's vmcb */
-    v->arch.hvm_svm.vmcb = n1vmcb;
-    v->arch.hvm_svm.vmcb_pa = nv->nv_n1vmcx_pa;
+    v->arch.hvm.svm.vmcb = n1vmcb;
+    v->arch.hvm.svm.vmcb_pa = nv->nv_n1vmcx_pa;
 
     /* EFER */
     v->arch.hvm.guest_efer = n1vmcb->_efer;
@@ -350,7 +350,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 
 static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
 {
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct vmcb_struct *ns_vmcb = nv->nv_vvmcx;
@@ -390,9 +390,7 @@ static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
     nv->nv_ioport80 = ioport_80;
     nv->nv_ioportED = ioport_ed;
 
-    /* v->arch.hvm_svm.msrpm has type unsigned long, thus
-     * BYTES_PER_LONG.
-     */
+    /* v->arch.hvm.svm.msrpm has type unsigned long, thus BYTES_PER_LONG. */
     for (i = 0; i < MSRPM_SIZE / BYTES_PER_LONG; i++)
         svm->ns_merged_msrpm[i] = arch_svm->msrpm[i] | ns_msrpm_ptr[i];
 
@@ -730,8 +728,8 @@ nsvm_vcpu_vmentry(struct vcpu *v, struct cpu_user_regs *regs,
     }
 
     /* switch vmcb to shadow vmcb */
-    v->arch.hvm_svm.vmcb = nv->nv_n2vmcx;
-    v->arch.hvm_svm.vmcb_pa = nv->nv_n2vmcx_pa;
+    v->arch.hvm.svm.vmcb = nv->nv_n2vmcx;
+    v->arch.hvm.svm.vmcb_pa = nv->nv_n2vmcx_pa;
 
     ret = nsvm_vmcb_prepare4vmrun(v, regs);
     if (ret) {
@@ -800,7 +798,7 @@ nsvm_vcpu_vmexit_inject(struct vcpu *v, struct cpu_user_regs *regs,
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
     struct vmcb_struct *ns_vmcb;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( vmcb->_vintr.fields.vgif_enable )
         ASSERT(vmcb->_vintr.fields.vgif == 0);
@@ -1348,7 +1346,7 @@ nestedsvm_vmexit_defer(struct vcpu *v,
     uint64_t exitcode, uint64_t exitinfo1, uint64_t exitinfo2)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( vmcb->_vintr.fields.vgif_enable )
         vmcb->_vintr.fields.vgif = 0;
@@ -1522,7 +1520,7 @@ void nsvm_vcpu_switch(struct cpu_user_regs *regs)
 
     nv = &vcpu_nestedhvm(v);
     svm = &vcpu_nestedsvm(v);
-    ASSERT(v->arch.hvm_svm.vmcb != NULL);
+    ASSERT(v->arch.hvm.svm.vmcb != NULL);
     ASSERT(nv->nv_n1vmcx != NULL);
     ASSERT(nv->nv_n2vmcx != NULL);
     ASSERT(nv->nv_n1vmcx_pa != INVALID_PADDR);
@@ -1607,7 +1605,7 @@ bool_t
 nestedsvm_gif_isset(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     /* get the vmcb gif value if using vgif */
     if ( vmcb->_vintr.fields.vgif_enable )
@@ -1640,7 +1638,7 @@ void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v)
 
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     unsigned int inst_len;
     uint32_t general1_intercepts = vmcb_get_general1_intercepts(vmcb);
     vintr_t intr;
@@ -1672,7 +1670,7 @@ void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v)
  */
 void svm_nested_features_on_efer_update(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
     u32 general2_intercepts;
     vintr_t vintr;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index e7e944d..0ec87b2 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -88,7 +88,7 @@ static DEFINE_SPINLOCK(osvw_lock);
 /* Only crash the guest if the problem originates in kernel mode. */
 static void svm_crash_or_fault(struct vcpu *v)
 {
-    if ( vmcb_get_cpl(v->arch.hvm_svm.vmcb) )
+    if ( vmcb_get_cpl(v->arch.hvm.svm.vmcb) )
         hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
     else
         domain_crash(v->domain);
@@ -113,7 +113,7 @@ void __update_guest_eip(struct cpu_user_regs *regs, unsigned int inst_len)
     regs->rip += inst_len;
     regs->eflags &= ~X86_EFLAGS_RF;
 
-    curr->arch.hvm_svm.vmcb->interrupt_shadow = 0;
+    curr->arch.hvm.svm.vmcb->interrupt_shadow = 0;
 
     if ( regs->eflags & X86_EFLAGS_TF )
         hvm_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
@@ -147,7 +147,7 @@ void svm_intercept_msr(struct vcpu *v, uint32_t msr, int flags)
     unsigned long *msr_bit;
     const struct domain *d = v->domain;
 
-    msr_bit = svm_msrbit(v->arch.hvm_svm.msrpm, msr);
+    msr_bit = svm_msrbit(v->arch.hvm.svm.msrpm, msr);
     BUG_ON(msr_bit == NULL);
     msr &= 0x1fff;
 
@@ -176,7 +176,7 @@ static void svm_set_icebp_interception(struct domain *d, bool enable)
 
     for_each_vcpu ( d, v )
     {
-        struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+        struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
         uint32_t intercepts = vmcb_get_general2_intercepts(vmcb);
 
         if ( enable )
@@ -190,7 +190,7 @@ static void svm_set_icebp_interception(struct domain *d, bool enable)
 
 static void svm_save_dr(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     unsigned int flag_dr_dirty = v->arch.hvm.flag_dr_dirty;
 
     if ( !flag_dr_dirty )
@@ -207,10 +207,10 @@ static void svm_save_dr(struct vcpu *v)
         svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_RW);
         svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_RW);
 
-        rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[0]);
-        rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[1]);
-        rdmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[2]);
-        rdmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[3]);
+        rdmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[0]);
+        rdmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[1]);
+        rdmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[2]);
+        rdmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[3]);
     }
 
     v->arch.debugreg[0] = read_debugreg(0);
@@ -238,10 +238,10 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
         svm_intercept_msr(v, MSR_AMD64_DR2_ADDRESS_MASK, MSR_INTERCEPT_NONE);
         svm_intercept_msr(v, MSR_AMD64_DR3_ADDRESS_MASK, MSR_INTERCEPT_NONE);
 
-        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[0]);
-        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[1]);
-        wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[2]);
-        wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.hvm_svm.dr_mask[3]);
+        wrmsrl(MSR_AMD64_DR0_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[0]);
+        wrmsrl(MSR_AMD64_DR1_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[1]);
+        wrmsrl(MSR_AMD64_DR2_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[2]);
+        wrmsrl(MSR_AMD64_DR3_ADDRESS_MASK, v->arch.hvm.svm.dr_mask[3]);
     }
 
     write_debugreg(0, v->arch.debugreg[0]);
@@ -260,23 +260,23 @@ static void __restore_debug_registers(struct vmcb_struct *vmcb, struct vcpu *v)
  */
 static void svm_restore_dr(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     if ( unlikely(v->arch.debugreg[7] & DR7_ACTIVE_MASK) )
         __restore_debug_registers(vmcb, v);
 }
 
 static int svm_vmcb_save(struct vcpu *v, struct hvm_hw_cpu *c)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     c->cr0 = v->arch.hvm.guest_cr[0];
     c->cr2 = v->arch.hvm.guest_cr[2];
     c->cr3 = v->arch.hvm.guest_cr[3];
     c->cr4 = v->arch.hvm.guest_cr[4];
 
-    c->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs;
-    c->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp;
-    c->sysenter_eip = v->arch.hvm_svm.guest_sysenter_eip;
+    c->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs;
+    c->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp;
+    c->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip;
 
     c->pending_event = 0;
     c->error_code = 0;
@@ -294,7 +294,7 @@ static int svm_vmcb_save(struct vcpu *v, struct hvm_hw_cpu *c)
 static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
 {
     struct page_info *page = NULL;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct p2m_domain *p2m = p2m_get_hostp2m(v->domain);
 
     if ( c->pending_valid )
@@ -346,9 +346,9 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     svm_update_guest_cr(v, 4, 0);
 
     /* Load sysenter MSRs into both VMCB save area and VCPU fields. */
-    vmcb->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs = c->sysenter_cs;
-    vmcb->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp = c->sysenter_esp;
-    vmcb->sysenter_eip = v->arch.hvm_svm.guest_sysenter_eip = c->sysenter_eip;
+    vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = c->sysenter_cs;
+    vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = c->sysenter_esp;
+    vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = c->sysenter_eip;
     
     if ( paging_mode_hap(v->domain) )
     {
@@ -377,7 +377,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
 
 static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     data->shadow_gs        = vmcb->kerngsbase;
     data->msr_lstar        = vmcb->lstar;
@@ -391,7 +391,7 @@ static void svm_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 
 static void svm_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     vmcb->kerngsbase = data->shadow_gs;
     vmcb->lstar      = data->msr_lstar;
@@ -429,19 +429,19 @@ static void svm_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
 {
     if ( boot_cpu_has(X86_FEATURE_DBEXT) )
     {
-        ctxt->msr[ctxt->count].val = v->arch.hvm_svm.dr_mask[0];
+        ctxt->msr[ctxt->count].val = v->arch.hvm.svm.dr_mask[0];
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_AMD64_DR0_ADDRESS_MASK;
 
-        ctxt->msr[ctxt->count].val = v->arch.hvm_svm.dr_mask[1];
+        ctxt->msr[ctxt->count].val = v->arch.hvm.svm.dr_mask[1];
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_AMD64_DR1_ADDRESS_MASK;
 
-        ctxt->msr[ctxt->count].val = v->arch.hvm_svm.dr_mask[2];
+        ctxt->msr[ctxt->count].val = v->arch.hvm.svm.dr_mask[2];
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_AMD64_DR2_ADDRESS_MASK;
 
-        ctxt->msr[ctxt->count].val = v->arch.hvm_svm.dr_mask[3];
+        ctxt->msr[ctxt->count].val = v->arch.hvm.svm.dr_mask[3];
         if ( ctxt->msr[ctxt->count].val )
             ctxt->msr[ctxt->count++].index = MSR_AMD64_DR3_ADDRESS_MASK;
     }
@@ -462,7 +462,7 @@ static int svm_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
             else if ( ctxt->msr[i].val >> 32 )
                 err = -EDOM;
             else
-                v->arch.hvm_svm.dr_mask[0] = ctxt->msr[i].val;
+                v->arch.hvm.svm.dr_mask[0] = ctxt->msr[i].val;
             break;
 
         case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
@@ -471,7 +471,7 @@ static int svm_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
             else if ( ctxt->msr[i].val >> 32 )
                 err = -EDOM;
             else
-                v->arch.hvm_svm.dr_mask[idx - MSR_AMD64_DR1_ADDRESS_MASK + 1] =
+                v->arch.hvm.svm.dr_mask[idx - MSR_AMD64_DR1_ADDRESS_MASK + 1] =
                     ctxt->msr[i].val;
             break;
 
@@ -520,7 +520,7 @@ static void svm_fpu_leave(struct vcpu *v)
 
 static unsigned int svm_get_interrupt_shadow(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     unsigned int intr_shadow = 0;
 
     if ( vmcb->interrupt_shadow )
@@ -534,7 +534,7 @@ static unsigned int svm_get_interrupt_shadow(struct vcpu *v)
 
 static void svm_set_interrupt_shadow(struct vcpu *v, unsigned int intr_shadow)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
 
     vmcb->interrupt_shadow =
@@ -548,7 +548,7 @@ static void svm_set_interrupt_shadow(struct vcpu *v, unsigned int intr_shadow)
 
 static int svm_guest_x86_mode(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( unlikely(!(v->arch.hvm.guest_cr[0] & X86_CR0_PE)) )
         return 0;
@@ -561,7 +561,7 @@ static int svm_guest_x86_mode(struct vcpu *v)
 
 void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     uint64_t value;
 
     switch ( cr )
@@ -645,8 +645,8 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
 
 static void svm_update_guest_efer(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    bool_t lma = !!(v->arch.hvm.guest_efer & EFER_LMA);
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
+    bool lma = v->arch.hvm.guest_efer & EFER_LMA;
     uint64_t new_efer;
 
     new_efer = (v->arch.hvm.guest_efer | EFER_SVME) & ~EFER_LME;
@@ -663,7 +663,7 @@ static void svm_update_guest_efer(struct vcpu *v)
 
 static void svm_cpuid_policy_changed(struct vcpu *v)
 {
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
     struct vmcb_struct *vmcb = arch_svm->vmcb;
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
@@ -683,7 +683,7 @@ static void svm_cpuid_policy_changed(struct vcpu *v)
 
 static void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
 {
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
 
     if ( new_state == vmcb_needs_vmsave )
     {
@@ -704,13 +704,13 @@ static void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
 
 static unsigned int svm_get_cpl(struct vcpu *v)
 {
-    return vmcb_get_cpl(v->arch.hvm_svm.vmcb);
+    return vmcb_get_cpl(v->arch.hvm.svm.vmcb);
 }
 
 static void svm_get_segment_register(struct vcpu *v, enum x86_segment seg,
                                      struct segment_register *reg)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     ASSERT((v == current) || !vcpu_runnable(v));
 
@@ -755,7 +755,7 @@ static void svm_get_segment_register(struct vcpu *v, enum x86_segment seg,
 static void svm_set_segment_register(struct vcpu *v, enum x86_segment seg,
                                      struct segment_register *reg)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     ASSERT((v == current) || !vcpu_runnable(v));
 
@@ -824,12 +824,12 @@ static void svm_set_segment_register(struct vcpu *v, enum x86_segment seg,
 
 static unsigned long svm_get_shadow_gs_base(struct vcpu *v)
 {
-    return v->arch.hvm_svm.vmcb->kerngsbase;
+    return v->arch.hvm.svm.vmcb->kerngsbase;
 }
 
 static int svm_set_guest_pat(struct vcpu *v, u64 gpat)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( !paging_mode_hap(v->domain) )
         return 0;
@@ -840,7 +840,7 @@ static int svm_set_guest_pat(struct vcpu *v, u64 gpat)
 
 static int svm_get_guest_pat(struct vcpu *v, u64 *gpat)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( !paging_mode_hap(v->domain) )
         return 0;
@@ -888,7 +888,7 @@ static uint64_t svm_get_tsc_offset(uint64_t host_tsc, uint64_t guest_tsc,
 
 static void svm_set_tsc_offset(struct vcpu *v, u64 offset, u64 at_tsc)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct vmcb_struct *n1vmcb, *n2vmcb;
     uint64_t n2_tsc_offset = 0;
     struct domain *d = v->domain;
@@ -921,7 +921,7 @@ static void svm_set_tsc_offset(struct vcpu *v, u64 offset, u64 at_tsc)
 
 static void svm_set_rdtsc_exiting(struct vcpu *v, bool_t enable)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
     u32 general2_intercepts = vmcb_get_general2_intercepts(vmcb);
 
@@ -940,7 +940,7 @@ static void svm_set_rdtsc_exiting(struct vcpu *v, bool_t enable)
 
 static void svm_set_descriptor_access_exiting(struct vcpu *v, bool enable)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
     u32 mask = GENERAL1_INTERCEPT_IDTR_READ | GENERAL1_INTERCEPT_GDTR_READ
             | GENERAL1_INTERCEPT_LDTR_READ | GENERAL1_INTERCEPT_TR_READ
@@ -957,14 +957,14 @@ static void svm_set_descriptor_access_exiting(struct vcpu *v, bool enable)
 
 static unsigned int svm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
-    unsigned int len = v->arch.hvm_svm.cached_insn_len;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
+    unsigned int len = v->arch.hvm.svm.cached_insn_len;
 
     if ( len != 0 )
     {
         /* Latch and clear the cached instruction. */
         memcpy(buf, vmcb->guest_ins, MAX_INST_LEN);
-        v->arch.hvm_svm.cached_insn_len = 0;
+        v->arch.hvm.svm.cached_insn_len = 0;
     }
 
     return len;
@@ -1000,14 +1000,14 @@ static void svm_lwp_interrupt(struct cpu_user_regs *regs)
     ack_APIC_irq();
     vlapic_set_irq(
         vcpu_vlapic(curr),
-        (curr->arch.hvm_svm.guest_lwp_cfg >> 40) & 0xff,
+        (curr->arch.hvm.svm.guest_lwp_cfg >> 40) & 0xff,
         0);
 }
 
 static inline void svm_lwp_save(struct vcpu *v)
 {
     /* Don't mess up with other guests. Disable LWP for next VCPU. */
-    if ( v->arch.hvm_svm.guest_lwp_cfg )
+    if ( v->arch.hvm.svm.guest_lwp_cfg )
     {
         wrmsrl(MSR_AMD64_LWP_CFG, 0x0);
         wrmsrl(MSR_AMD64_LWP_CBADDR, 0x0);
@@ -1017,8 +1017,8 @@ static inline void svm_lwp_save(struct vcpu *v)
 static inline void svm_lwp_load(struct vcpu *v)
 {
     /* Only LWP_CFG is reloaded. LWP_CBADDR will be reloaded via xrstor. */
-   if ( v->arch.hvm_svm.guest_lwp_cfg ) 
-       wrmsrl(MSR_AMD64_LWP_CFG, v->arch.hvm_svm.cpu_lwp_cfg);
+   if ( v->arch.hvm.svm.guest_lwp_cfg )
+       wrmsrl(MSR_AMD64_LWP_CFG, v->arch.hvm.svm.cpu_lwp_cfg);
 }
 
 /* Update LWP_CFG MSR (0xc0000105). Return -1 if error; otherwise returns 0. */
@@ -1035,22 +1035,22 @@ static int svm_update_lwp_cfg(struct vcpu *v, uint64_t msr_content)
         if ( msr_low & ~v->domain->arch.cpuid->extd.raw[0x1c].d )
             return -1;
 
-        v->arch.hvm_svm.guest_lwp_cfg = msr_content;
+        v->arch.hvm.svm.guest_lwp_cfg = msr_content;
 
         /* setup interrupt handler if needed */
         if ( (msr_content & 0x80000000) && ((msr_content >> 40) & 0xff) )
         {
             alloc_direct_apic_vector(&lwp_intr_vector, svm_lwp_interrupt);
-            v->arch.hvm_svm.cpu_lwp_cfg = (msr_content & 0xffff00ffffffffffULL)
+            v->arch.hvm.svm.cpu_lwp_cfg = (msr_content & 0xffff00ffffffffffULL)
                 | ((uint64_t)lwp_intr_vector << 40);
         }
         else
         {
             /* otherwise disable it */
-            v->arch.hvm_svm.cpu_lwp_cfg = msr_content & 0xffff00ff7fffffffULL;
+            v->arch.hvm.svm.cpu_lwp_cfg = msr_content & 0xffff00ff7fffffffULL;
         }
         
-        wrmsrl(MSR_AMD64_LWP_CFG, v->arch.hvm_svm.cpu_lwp_cfg);
+        wrmsrl(MSR_AMD64_LWP_CFG, v->arch.hvm.svm.cpu_lwp_cfg);
 
         /* track nonalzy state if LWP_CFG is non-zero. */
         v->arch.nonlazy_xstate_used = !!(msr_content);
@@ -1100,7 +1100,7 @@ static void svm_ctxt_switch_from(struct vcpu *v)
 
 static void svm_ctxt_switch_to(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     int cpu = smp_processor_id();
 
     /*
@@ -1129,7 +1129,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
 
 static void noreturn svm_do_resume(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     bool debug_state = (v->domain->debugger_attached ||
                         v->domain->arch.monitor.software_breakpoint_enabled ||
                         v->domain->arch.monitor.debug_exception_enabled);
@@ -1150,9 +1150,9 @@ static void noreturn svm_do_resume(struct vcpu *v)
                               : (intercepts & ~(1U << TRAP_int3)));
     }
 
-    if ( v->arch.hvm_svm.launch_core != smp_processor_id() )
+    if ( v->arch.hvm.svm.launch_core != smp_processor_id() )
     {
-        v->arch.hvm_svm.launch_core = smp_processor_id();
+        v->arch.hvm.svm.launch_core = smp_processor_id();
         hvm_migrate_timers(v);
         hvm_migrate_pirqs(v);
         /* Migrating to another ASID domain.  Request a new ASID. */
@@ -1178,7 +1178,7 @@ static void noreturn svm_do_resume(struct vcpu *v)
 void svm_vmenter_helper(const struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
-    struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
 
     svm_asid_handle_vmrun();
 
@@ -1284,7 +1284,7 @@ static int svm_vcpu_initialise(struct vcpu *v)
 {
     int rc;
 
-    v->arch.hvm_svm.launch_core = -1;
+    v->arch.hvm.svm.launch_core = -1;
 
     if ( (rc = svm_create_vmcb(v)) != 0 )
     {
@@ -1314,7 +1314,7 @@ static void svm_vcpu_destroy(struct vcpu *v)
 static void svm_emul_swint_injection(struct x86_event *event)
 {
     struct vcpu *curr = current;
-    const struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+    const struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
     const struct cpu_user_regs *regs = guest_cpu_user_regs();
     unsigned int trap = event->vector, type = event->type;
     unsigned int fault = TRAP_gp_fault, ec = 0;
@@ -1421,7 +1421,7 @@ static void svm_emul_swint_injection(struct x86_event *event)
 static void svm_inject_event(const struct x86_event *event)
 {
     struct vcpu *curr = current;
-    struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
     eventinj_t eventinj = vmcb->eventinj;
     struct x86_event _event = *event;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
@@ -1552,7 +1552,7 @@ static void svm_inject_event(const struct x86_event *event)
 
 static int svm_event_pending(struct vcpu *v)
 {
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     return vmcb->eventinj.fields.v;
 }
 
@@ -1792,7 +1792,7 @@ static void svm_do_nested_pgfault(struct vcpu *v,
 static void svm_fpu_dirty_intercept(void)
 {
     struct vcpu *v = current;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     struct vmcb_struct *n1vmcb = vcpu_nestedhvm(v).nv_n1vmcx;
 
     svm_fpu_enter(v);
@@ -1862,7 +1862,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     int ret;
     struct vcpu *v = current;
     const struct domain *d = v->domain;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     switch ( msr )
     {
@@ -1886,13 +1886,13 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     switch ( msr )
     {
     case MSR_IA32_SYSENTER_CS:
-        *msr_content = v->arch.hvm_svm.guest_sysenter_cs;
+        *msr_content = v->arch.hvm.svm.guest_sysenter_cs;
         break;
     case MSR_IA32_SYSENTER_ESP:
-        *msr_content = v->arch.hvm_svm.guest_sysenter_esp;
+        *msr_content = v->arch.hvm.svm.guest_sysenter_esp;
         break;
     case MSR_IA32_SYSENTER_EIP:
-        *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
+        *msr_content = v->arch.hvm.svm.guest_sysenter_eip;
         break;
 
     case MSR_STAR:
@@ -1962,7 +1962,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     case MSR_AMD64_LWP_CFG:
-        *msr_content = v->arch.hvm_svm.guest_lwp_cfg;
+        *msr_content = v->arch.hvm.svm.guest_lwp_cfg;
         break;
 
     case MSR_K7_PERFCTR0:
@@ -1992,14 +1992,14 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_AMD64_DR0_ADDRESS_MASK:
         if ( !v->domain->arch.cpuid->extd.dbext )
             goto gpf;
-        *msr_content = v->arch.hvm_svm.dr_mask[0];
+        *msr_content = v->arch.hvm.svm.dr_mask[0];
         break;
 
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !v->domain->arch.cpuid->extd.dbext )
             goto gpf;
         *msr_content =
-            v->arch.hvm_svm.dr_mask[msr - MSR_AMD64_DR1_ADDRESS_MASK + 1];
+            v->arch.hvm.svm.dr_mask[msr - MSR_AMD64_DR1_ADDRESS_MASK + 1];
         break;
 
     case MSR_AMD_OSVW_ID_LENGTH:
@@ -2051,7 +2051,7 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     int ret, result = X86EMUL_OKAY;
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     switch ( msr )
     {
@@ -2084,11 +2084,11 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         switch ( msr )
         {
         case MSR_IA32_SYSENTER_ESP:
-            vmcb->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp = msr_content;
+            vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = msr_content;
             break;
 
         case MSR_IA32_SYSENTER_EIP:
-            vmcb->sysenter_eip = v->arch.hvm_svm.guest_sysenter_eip = msr_content;
+            vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = msr_content;
             break;
 
         case MSR_LSTAR:
@@ -2114,7 +2114,7 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     case MSR_IA32_SYSENTER_CS:
-        vmcb->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs = msr_content;
+        vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = msr_content;
         break;
 
     case MSR_STAR:
@@ -2194,13 +2194,13 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
     case MSR_AMD64_DR0_ADDRESS_MASK:
         if ( !v->domain->arch.cpuid->extd.dbext || (msr_content >> 32) )
             goto gpf;
-        v->arch.hvm_svm.dr_mask[0] = msr_content;
+        v->arch.hvm.svm.dr_mask[0] = msr_content;
         break;
 
     case MSR_AMD64_DR1_ADDRESS_MASK ... MSR_AMD64_DR3_ADDRESS_MASK:
         if ( !v->domain->arch.cpuid->extd.dbext || (msr_content >> 32) )
             goto gpf;
-        v->arch.hvm_svm.dr_mask[msr - MSR_AMD64_DR1_ADDRESS_MASK + 1] =
+        v->arch.hvm.svm.dr_mask[msr - MSR_AMD64_DR1_ADDRESS_MASK + 1] =
             msr_content;
         break;
 
@@ -2251,7 +2251,7 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 static void svm_do_msr_access(struct cpu_user_regs *regs)
 {
     struct vcpu *curr = current;
-    bool rdmsr = curr->arch.hvm_svm.vmcb->exitinfo1 == 0;
+    bool rdmsr = curr->arch.hvm.svm.vmcb->exitinfo1 == 0;
     int rc, inst_len = __get_instruction_length(
         curr, rdmsr ? INSTR_RDMSR : INSTR_WRMSR);
 
@@ -2391,7 +2391,7 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
     put_page(page);
 
     /* State in L1 VMCB is stale now */
-    v->arch.hvm_svm.vmcb_sync_state = vmcb_needs_vmsave;
+    v->arch.hvm.svm.vmcb_sync_state = vmcb_needs_vmsave;
 
     __update_guest_eip(regs, inst_len);
 }
@@ -2519,7 +2519,7 @@ static void svm_invlpg(struct vcpu *v, unsigned long vaddr)
 
 static bool svm_get_pending_event(struct vcpu *v, struct x86_event *info)
 {
-    const struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    const struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
 
     if ( vmcb->eventinj.fields.v )
         return false;
@@ -2594,7 +2594,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 {
     uint64_t exit_reason;
     struct vcpu *v = current;
-    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+    struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
     eventinj_t eventinj;
     int inst_len, rc;
     vintr_t intr;
@@ -2816,9 +2816,9 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
                     regs->rdx, regs->rsi, regs->rdi);
 
         if ( cpu_has_svm_decode )
-            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
+            v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
         rc = paging_fault(va, regs);
-        v->arch.hvm_svm.cached_insn_len = 0;
+        v->arch.hvm.svm.cached_insn_len = 0;
 
         if ( rc )
         {
@@ -3020,7 +3020,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
     case VMEXIT_NPF:
         perfc_incra(svmexits, VMEXIT_NPF_PERFC);
         if ( cpu_has_svm_decode )
-            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
+            v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
         rc = vmcb->exitinfo1 & PFEC_page_present
              ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
         if ( rc >= 0 )
@@ -3032,7 +3032,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
                    v, rc, vmcb->exitinfo2, vmcb->exitinfo1);
             domain_crash(v->domain);
         }
-        v->arch.hvm_svm.cached_insn_len = 0;
+        v->arch.hvm.svm.cached_insn_len = 0;
         break;
 
     case VMEXIT_IRET: {
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index a7630a0..4558f57 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -53,7 +53,7 @@ void free_vmcb(struct vmcb_struct *vmcb)
 /* This function can directly access fields which are covered by clean bits. */
 static int construct_vmcb(struct vcpu *v)
 {
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
     struct vmcb_struct *vmcb = arch_svm->vmcb;
 
     /* Build-time check of the size of VMCB AMD structure. */
@@ -225,7 +225,7 @@ static int construct_vmcb(struct vcpu *v)
 int svm_create_vmcb(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
     int rc;
 
     if ( (nv->nv_n1vmcx == NULL) &&
@@ -252,7 +252,7 @@ int svm_create_vmcb(struct vcpu *v)
 void svm_destroy_vmcb(struct vcpu *v)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-    struct svm_vcpu *arch_svm = &v->arch.hvm_svm;
+    struct svm_vcpu *arch_svm = &v->arch.hvm.svm;
 
     if ( nv->nv_n1vmcx != NULL )
         free_vmcb(nv->nv_n1vmcx);
@@ -286,7 +286,7 @@ static void vmcb_dump(unsigned char ch)
         for_each_vcpu ( d, v )
         {
             printk("\tVCPU %d\n", v->vcpu_id);
-            svm_vmcb_dump("key_handler", v->arch.hvm_svm.vmcb);
+            svm_vmcb_dump("key_handler", v->arch.hvm.svm.vmcb);
         }
     }
 
diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 889067c..5e8cbd4 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -106,9 +106,9 @@ static void vmx_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
         ctl = CPU_BASED_VIRTUAL_NMI_PENDING;
     }
 
-    if ( !(v->arch.hvm_vmx.exec_control & ctl) )
+    if ( !(v->arch.hvm.vmx.exec_control & ctl) )
     {
-        v->arch.hvm_vmx.exec_control |= ctl;
+        v->arch.hvm.vmx.exec_control |= ctl;
         vmx_update_cpu_exec_control(v);
     }
 }
@@ -137,7 +137,7 @@ static void vmx_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
  *  Unfortunately, interrupt blocking in L2 won't work with simple
  *  intr_window_open (which depends on L2's IF). To solve this,
  *  the following algorithm can be used:
- *   v->arch.hvm_vmx.exec_control.VIRTUAL_INTR_PENDING now denotes
+ *   v->arch.hvm.vmx.exec_control.VIRTUAL_INTR_PENDING now denotes
  *   only L0 control, physical control may be different from it.
  *       - if in L1, it behaves normally, intr window is written
  *         to physical control as it is
@@ -234,7 +234,7 @@ void vmx_intr_assist(void)
     /* Block event injection when single step with MTF. */
     if ( unlikely(v->arch.hvm.single_step) )
     {
-        v->arch.hvm_vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
+        v->arch.hvm.vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
         return;
     }
@@ -352,7 +352,7 @@ void vmx_intr_assist(void)
                     printk("\n");
                 }
 
-                pi_desc = &v->arch.hvm_vmx.pi_desc;
+                pi_desc = &v->arch.hvm.vmx.pi_desc;
                 if ( pi_desc )
                 {
                     word = (const void *)&pi_desc->pir;
@@ -374,12 +374,12 @@ void vmx_intr_assist(void)
                     intack.vector;
         __vmwrite(GUEST_INTR_STATUS, status);
 
-        n = ARRAY_SIZE(v->arch.hvm_vmx.eoi_exit_bitmap);
-        while ( (i = find_first_bit(&v->arch.hvm_vmx.eoi_exitmap_changed,
+        n = ARRAY_SIZE(v->arch.hvm.vmx.eoi_exit_bitmap);
+        while ( (i = find_first_bit(&v->arch.hvm.vmx.eoi_exitmap_changed,
                                     n)) < n )
         {
-            clear_bit(i, &v->arch.hvm_vmx.eoi_exitmap_changed);
-            __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
+            clear_bit(i, &v->arch.hvm.vmx.eoi_exitmap_changed);
+            __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm.vmx.eoi_exit_bitmap[i]);
         }
 
         pt_intr_post(v, intack);
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 032a681..bb0b443 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -175,8 +175,8 @@ void vmx_realmode(struct cpu_user_regs *regs)
         intr_info = 0;
     }
 
-    curr->arch.hvm_vmx.vmx_emulate = 1;
-    while ( curr->arch.hvm_vmx.vmx_emulate &&
+    curr->arch.hvm.vmx.vmx_emulate = 1;
+    while ( curr->arch.hvm.vmx.vmx_emulate &&
             !softirq_pending(smp_processor_id()) )
     {
         /*
@@ -185,7 +185,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
          * in real mode, because we don't emulate protected-mode IDT vectoring.
          */
         if ( unlikely(!(++emulations & 15)) &&
-             curr->arch.hvm_vmx.vmx_realmode && 
+             curr->arch.hvm.vmx.vmx_realmode &&
              hvm_local_events_need_delivery(curr) )
             break;
 
@@ -195,20 +195,20 @@ void vmx_realmode(struct cpu_user_regs *regs)
             break;
 
         /* Stop emulating unless our segment state is not safe */
-        if ( curr->arch.hvm_vmx.vmx_realmode )
-            curr->arch.hvm_vmx.vmx_emulate = 
-                (curr->arch.hvm_vmx.vm86_segment_mask != 0);
+        if ( curr->arch.hvm.vmx.vmx_realmode )
+            curr->arch.hvm.vmx.vmx_emulate =
+                (curr->arch.hvm.vmx.vm86_segment_mask != 0);
         else
-            curr->arch.hvm_vmx.vmx_emulate = 
+            curr->arch.hvm.vmx.vmx_emulate =
                  ((hvmemul_ctxt.seg_reg[x86_seg_cs].sel & 3)
                   || (hvmemul_ctxt.seg_reg[x86_seg_ss].sel & 3));
     }
 
     /* Need to emulate next time if we've started an IO operation */
     if ( vio->io_req.state != STATE_IOREQ_NONE )
-        curr->arch.hvm_vmx.vmx_emulate = 1;
+        curr->arch.hvm.vmx.vmx_emulate = 1;
 
-    if ( !curr->arch.hvm_vmx.vmx_emulate && !curr->arch.hvm_vmx.vmx_realmode )
+    if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode )
     {
         /*
          * Cannot enter protected mode with bogus selector RPLs and DPLs.
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index bfa3e77..c536655 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -518,7 +518,7 @@ static void vmx_free_vmcs(paddr_t pa)
 static void __vmx_clear_vmcs(void *info)
 {
     struct vcpu *v = info;
-    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm.vmx;
 
     /* Otherwise we can nest (vmx_cpu_down() vs. vmx_clear_vmcs()). */
     ASSERT(!local_irq_is_enabled());
@@ -541,7 +541,7 @@ static void __vmx_clear_vmcs(void *info)
 
 static void vmx_clear_vmcs(struct vcpu *v)
 {
-    int cpu = v->arch.hvm_vmx.active_cpu;
+    int cpu = v->arch.hvm.vmx.active_cpu;
 
     if ( cpu != -1 )
         on_selected_cpus(cpumask_of(cpu), __vmx_clear_vmcs, v, 1);
@@ -553,16 +553,16 @@ static void vmx_load_vmcs(struct vcpu *v)
 
     local_irq_save(flags);
 
-    if ( v->arch.hvm_vmx.active_cpu == -1 )
+    if ( v->arch.hvm.vmx.active_cpu == -1 )
     {
-        list_add(&v->arch.hvm_vmx.active_list, &this_cpu(active_vmcs_list));
-        v->arch.hvm_vmx.active_cpu = smp_processor_id();
+        list_add(&v->arch.hvm.vmx.active_list, &this_cpu(active_vmcs_list));
+        v->arch.hvm.vmx.active_cpu = smp_processor_id();
     }
 
-    ASSERT(v->arch.hvm_vmx.active_cpu == smp_processor_id());
+    ASSERT(v->arch.hvm.vmx.active_cpu == smp_processor_id());
 
-    __vmptrld(v->arch.hvm_vmx.vmcs_pa);
-    this_cpu(current_vmcs) = v->arch.hvm_vmx.vmcs_pa;
+    __vmptrld(v->arch.hvm.vmx.vmcs_pa);
+    this_cpu(current_vmcs) = v->arch.hvm.vmx.vmcs_pa;
 
     local_irq_restore(flags);
 }
@@ -571,11 +571,11 @@ void vmx_vmcs_reload(struct vcpu *v)
 {
     /*
      * As we may be running with interrupts disabled, we can't acquire
-     * v->arch.hvm_vmx.vmcs_lock here. However, with interrupts disabled
+     * v->arch.hvm.vmx.vmcs_lock here. However, with interrupts disabled
      * the VMCS can't be taken away from us anymore if we still own it.
      */
     ASSERT(v->is_running || !local_irq_is_enabled());
-    if ( v->arch.hvm_vmx.vmcs_pa == this_cpu(current_vmcs) )
+    if ( v->arch.hvm.vmx.vmcs_pa == this_cpu(current_vmcs) )
         return;
 
     vmx_load_vmcs(v);
@@ -717,7 +717,7 @@ void vmx_cpu_down(void)
 
     while ( !list_empty(active_vmcs_list) )
         __vmx_clear_vmcs(list_entry(active_vmcs_list->next,
-                                    struct vcpu, arch.hvm_vmx.active_list));
+                                    struct vcpu, arch.hvm.vmx.active_list));
 
     BUG_ON(!(read_cr4() & X86_CR4_VMXE));
     this_cpu(vmxon) = 0;
@@ -741,7 +741,7 @@ bool_t vmx_vmcs_try_enter(struct vcpu *v)
      * vmx_vmcs_enter/exit and scheduling tail critical regions.
      */
     if ( likely(v == current) )
-        return v->arch.hvm_vmx.vmcs_pa == this_cpu(current_vmcs);
+        return v->arch.hvm.vmx.vmcs_pa == this_cpu(current_vmcs);
 
     fv = &this_cpu(foreign_vmcs);
 
@@ -755,7 +755,7 @@ bool_t vmx_vmcs_try_enter(struct vcpu *v)
         BUG_ON(fv->count != 0);
 
         vcpu_pause(v);
-        spin_lock(&v->arch.hvm_vmx.vmcs_lock);
+        spin_lock(&v->arch.hvm.vmx.vmcs_lock);
 
         vmx_clear_vmcs(v);
         vmx_load_vmcs(v);
@@ -793,7 +793,7 @@ void vmx_vmcs_exit(struct vcpu *v)
         if ( is_hvm_vcpu(current) )
             vmx_load_vmcs(current);
 
-        spin_unlock(&v->arch.hvm_vmx.vmcs_lock);
+        spin_unlock(&v->arch.hvm.vmx.vmcs_lock);
         vcpu_unpause(v);
 
         fv->v = NULL;
@@ -824,7 +824,7 @@ static void vmx_set_host_env(struct vcpu *v)
 void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
                              enum vmx_msr_intercept_type type)
 {
-    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
+    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
     struct domain *d = v->domain;
 
     /* VMX MSR bitmap supported? */
@@ -856,7 +856,7 @@ void vmx_clear_msr_intercept(struct vcpu *v, unsigned int msr,
 void vmx_set_msr_intercept(struct vcpu *v, unsigned int msr,
                            enum vmx_msr_intercept_type type)
 {
-    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm_vmx.msr_bitmap;
+    struct vmx_msr_bitmap *msr_bitmap = v->arch.hvm.vmx.msr_bitmap;
 
     /* VMX MSR bitmap supported? */
     if ( msr_bitmap == NULL )
@@ -901,7 +901,7 @@ bool vmx_msr_is_intercepted(struct vmx_msr_bitmap *msr_bitmap,
  */
 void vmx_vmcs_switch(paddr_t from, paddr_t to)
 {
-    struct vmx_vcpu *vmx = &current->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &current->arch.hvm.vmx;
     spin_lock(&vmx->vmcs_lock);
 
     __vmpclear(from);
@@ -924,14 +924,14 @@ void vmx_vmcs_switch(paddr_t from, paddr_t to)
 
 void virtual_vmcs_enter(const struct vcpu *v)
 {
-    __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
+    __vmptrld(v->arch.hvm.vmx.vmcs_shadow_maddr);
 }
 
 void virtual_vmcs_exit(const struct vcpu *v)
 {
     paddr_t cur = this_cpu(current_vmcs);
 
-    __vmpclear(v->arch.hvm_vmx.vmcs_shadow_maddr);
+    __vmpclear(v->arch.hvm.vmx.vmcs_shadow_maddr);
     if ( cur )
         __vmptrld(cur);
 }
@@ -984,13 +984,13 @@ enum vmx_insn_errno virtual_vmcs_vmwrite_safe(const struct vcpu *v,
  */
 static void pi_desc_init(struct vcpu *v)
 {
-    v->arch.hvm_vmx.pi_desc.nv = posted_intr_vector;
+    v->arch.hvm.vmx.pi_desc.nv = posted_intr_vector;
 
     /*
      * Mark NDST as invalid, then we can use this invalid value as a
      * marker to whether update NDST or not in vmx_pi_hooks_assign().
      */
-    v->arch.hvm_vmx.pi_desc.ndst = APIC_INVALID_DEST;
+    v->arch.hvm.vmx.pi_desc.ndst = APIC_INVALID_DEST;
 }
 
 static int construct_vmcs(struct vcpu *v)
@@ -1005,31 +1005,31 @@ static int construct_vmcs(struct vcpu *v)
     /* VMCS controls. */
     __vmwrite(PIN_BASED_VM_EXEC_CONTROL, vmx_pin_based_exec_control);
 
-    v->arch.hvm_vmx.exec_control = vmx_cpu_based_exec_control;
+    v->arch.hvm.vmx.exec_control = vmx_cpu_based_exec_control;
     if ( d->arch.vtsc && !cpu_has_vmx_tsc_scaling )
-        v->arch.hvm_vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
+        v->arch.hvm.vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
 
-    v->arch.hvm_vmx.secondary_exec_control = vmx_secondary_exec_control;
+    v->arch.hvm.vmx.secondary_exec_control = vmx_secondary_exec_control;
 
     /*
      * Disable descriptor table exiting: It's controlled by the VM event
      * monitor requesting it.
      */
-    v->arch.hvm_vmx.secondary_exec_control &=
+    v->arch.hvm.vmx.secondary_exec_control &=
         ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
 
     /* Disable VPID for now: we decide when to enable it on VMENTER. */
-    v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
+    v->arch.hvm.vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 
     if ( paging_mode_hap(d) )
     {
-        v->arch.hvm_vmx.exec_control &= ~(CPU_BASED_INVLPG_EXITING |
+        v->arch.hvm.vmx.exec_control &= ~(CPU_BASED_INVLPG_EXITING |
                                           CPU_BASED_CR3_LOAD_EXITING |
                                           CPU_BASED_CR3_STORE_EXITING);
     }
     else
     {
-        v->arch.hvm_vmx.secondary_exec_control &= 
+        v->arch.hvm.vmx.secondary_exec_control &=
             ~(SECONDARY_EXEC_ENABLE_EPT | 
               SECONDARY_EXEC_UNRESTRICTED_GUEST |
               SECONDARY_EXEC_ENABLE_INVPCID);
@@ -1039,25 +1039,25 @@ static int construct_vmcs(struct vcpu *v)
     }
 
     /* Disable Virtualize x2APIC mode by default. */
-    v->arch.hvm_vmx.secondary_exec_control &=
+    v->arch.hvm.vmx.secondary_exec_control &=
         ~SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
 
     /* Do not enable Monitor Trap Flag unless start single step debug */
-    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
+    v->arch.hvm.vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
 
     /* Disable VMFUNC and #VE for now: they may be enabled later by altp2m. */
-    v->arch.hvm_vmx.secondary_exec_control &=
+    v->arch.hvm.vmx.secondary_exec_control &=
         ~(SECONDARY_EXEC_ENABLE_VM_FUNCTIONS |
           SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS);
 
     if ( !has_vlapic(d) )
     {
         /* Disable virtual apics, TPR */
-        v->arch.hvm_vmx.secondary_exec_control &=
+        v->arch.hvm.vmx.secondary_exec_control &=
             ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES
               | SECONDARY_EXEC_APIC_REGISTER_VIRT
               | SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
-        v->arch.hvm_vmx.exec_control &= ~CPU_BASED_TPR_SHADOW;
+        v->arch.hvm.vmx.exec_control &= ~CPU_BASED_TPR_SHADOW;
 
         /* In turn, disable posted interrupts. */
         __vmwrite(PIN_BASED_VM_EXEC_CONTROL,
@@ -1077,7 +1077,7 @@ static int construct_vmcs(struct vcpu *v)
 
     if ( cpu_has_vmx_secondary_exec_control )
         __vmwrite(SECONDARY_VM_EXEC_CONTROL,
-                  v->arch.hvm_vmx.secondary_exec_control);
+                  v->arch.hvm.vmx.secondary_exec_control);
 
     /* MSR access bitmap. */
     if ( cpu_has_vmx_msr_bitmap )
@@ -1091,7 +1091,7 @@ static int construct_vmcs(struct vcpu *v)
         }
 
         memset(msr_bitmap, ~0, PAGE_SIZE);
-        v->arch.hvm_vmx.msr_bitmap = msr_bitmap;
+        v->arch.hvm.vmx.msr_bitmap = msr_bitmap;
         __vmwrite(MSR_BITMAP, virt_to_maddr(msr_bitmap));
 
         vmx_clear_msr_intercept(v, MSR_FS_BASE, VMX_MSR_RW);
@@ -1116,8 +1116,8 @@ static int construct_vmcs(struct vcpu *v)
         unsigned int i;
 
         /* EOI-exit bitmap */
-        bitmap_zero(v->arch.hvm_vmx.eoi_exit_bitmap, NR_VECTORS);
-        for ( i = 0; i < ARRAY_SIZE(v->arch.hvm_vmx.eoi_exit_bitmap); ++i )
+        bitmap_zero(v->arch.hvm.vmx.eoi_exit_bitmap, NR_VECTORS);
+        for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.vmx.eoi_exit_bitmap); ++i )
             __vmwrite(EOI_EXIT_BITMAP(i), 0);
 
         /* Initialise Guest Interrupt Status (RVI and SVI) to 0 */
@@ -1129,12 +1129,12 @@ static int construct_vmcs(struct vcpu *v)
         if ( iommu_intpost )
             pi_desc_init(v);
 
-        __vmwrite(PI_DESC_ADDR, virt_to_maddr(&v->arch.hvm_vmx.pi_desc));
+        __vmwrite(PI_DESC_ADDR, virt_to_maddr(&v->arch.hvm.vmx.pi_desc));
         __vmwrite(POSTED_INTR_NOTIFICATION_VECTOR, posted_intr_vector);
     }
 
     /* Disable PML anyway here as it will only be enabled in log dirty mode */
-    v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
+    v->arch.hvm.vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
 
     /* Host data selectors. */
     __vmwrite(HOST_SS_SELECTOR, __HYPERVISOR_DS);
@@ -1147,10 +1147,10 @@ static int construct_vmcs(struct vcpu *v)
     __vmwrite(HOST_TR_SELECTOR, TSS_ENTRY << 3);
 
     /* Host control registers. */
-    v->arch.hvm_vmx.host_cr0 = read_cr0() & ~X86_CR0_TS;
+    v->arch.hvm.vmx.host_cr0 = read_cr0() & ~X86_CR0_TS;
     if ( !v->arch.fully_eager_fpu )
-        v->arch.hvm_vmx.host_cr0 |= X86_CR0_TS;
-    __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
+        v->arch.hvm.vmx.host_cr0 |= X86_CR0_TS;
+    __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
     __vmwrite(HOST_CR4, mmu_cr4_features);
     if ( cpu_has_vmx_efer )
         __vmwrite(HOST_EFER, read_efer());
@@ -1172,7 +1172,7 @@ static int construct_vmcs(struct vcpu *v)
 
     __vmwrite(CR0_GUEST_HOST_MASK, ~0UL);
     __vmwrite(CR4_GUEST_HOST_MASK, ~0UL);
-    v->arch.hvm_vmx.cr4_host_mask = ~0UL;
+    v->arch.hvm.vmx.cr4_host_mask = ~0UL;
 
     __vmwrite(PAGE_FAULT_ERROR_CODE_MASK, 0);
     __vmwrite(PAGE_FAULT_ERROR_CODE_MATCH, 0);
@@ -1228,7 +1228,7 @@ static int construct_vmcs(struct vcpu *v)
     __vmwrite(GUEST_DR7, 0);
     __vmwrite(VMCS_LINK_POINTER, ~0UL);
 
-    v->arch.hvm_vmx.exception_bitmap = HVM_TRAP_MASK
+    v->arch.hvm.vmx.exception_bitmap = HVM_TRAP_MASK
               | (paging_mode_hap(d) ? 0 : (1U << TRAP_page_fault))
               | (v->arch.fully_eager_fpu ? 0 : (1U << TRAP_no_device));
     vmx_update_exception_bitmap(v);
@@ -1308,7 +1308,7 @@ static struct vmx_msr_entry *locate_msr_entry(
 struct vmx_msr_entry *vmx_find_msr(const struct vcpu *v, uint32_t msr,
                                    enum vmx_msr_list_type type)
 {
-    const struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
+    const struct vmx_vcpu *vmx = &v->arch.hvm.vmx;
     struct vmx_msr_entry *start = NULL, *ent, *end;
     unsigned int substart = 0, subend = vmx->msr_save_count;
     unsigned int total = vmx->msr_load_count;
@@ -1349,7 +1349,7 @@ struct vmx_msr_entry *vmx_find_msr(const struct vcpu *v, uint32_t msr,
 int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
                 enum vmx_msr_list_type type)
 {
-    struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &v->arch.hvm.vmx;
     struct vmx_msr_entry **ptr, *start = NULL, *ent, *end;
     unsigned int substart, subend, total;
     int rc;
@@ -1460,7 +1460,7 @@ int vmx_add_msr(struct vcpu *v, uint32_t msr, uint64_t val,
 
 int vmx_del_msr(struct vcpu *v, uint32_t msr, enum vmx_msr_list_type type)
 {
-    struct vmx_vcpu *vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *vmx = &v->arch.hvm.vmx;
     struct vmx_msr_entry *start = NULL, *ent, *end;
     unsigned int substart = 0, subend = vmx->msr_save_count;
     unsigned int total = vmx->msr_load_count;
@@ -1524,21 +1524,21 @@ int vmx_del_msr(struct vcpu *v, uint32_t msr, enum vmx_msr_list_type type)
 
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
-    if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
+    if ( !test_and_set_bit(vector, v->arch.hvm.vmx.eoi_exit_bitmap) )
         set_bit(vector / BITS_PER_LONG,
-                &v->arch.hvm_vmx.eoi_exitmap_changed);
+                &v->arch.hvm.vmx.eoi_exitmap_changed);
 }
 
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
-    if ( test_and_clear_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
+    if ( test_and_clear_bit(vector, v->arch.hvm.vmx.eoi_exit_bitmap) )
         set_bit(vector / BITS_PER_LONG,
-                &v->arch.hvm_vmx.eoi_exitmap_changed);
+                &v->arch.hvm.vmx.eoi_exitmap_changed);
 }
 
 bool_t vmx_vcpu_pml_enabled(const struct vcpu *v)
 {
-    return !!(v->arch.hvm_vmx.secondary_exec_control &
+    return !!(v->arch.hvm.vmx.secondary_exec_control &
               SECONDARY_EXEC_ENABLE_PML);
 }
 
@@ -1547,19 +1547,19 @@ int vmx_vcpu_enable_pml(struct vcpu *v)
     if ( vmx_vcpu_pml_enabled(v) )
         return 0;
 
-    v->arch.hvm_vmx.pml_pg = v->domain->arch.paging.alloc_page(v->domain);
-    if ( !v->arch.hvm_vmx.pml_pg )
+    v->arch.hvm.vmx.pml_pg = v->domain->arch.paging.alloc_page(v->domain);
+    if ( !v->arch.hvm.vmx.pml_pg )
         return -ENOMEM;
 
     vmx_vmcs_enter(v);
 
-    __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm_vmx.pml_pg));
+    __vmwrite(PML_ADDRESS, page_to_maddr(v->arch.hvm.vmx.pml_pg));
     __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
 
-    v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
+    v->arch.hvm.vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
 
     __vmwrite(SECONDARY_VM_EXEC_CONTROL,
-              v->arch.hvm_vmx.secondary_exec_control);
+              v->arch.hvm.vmx.secondary_exec_control);
 
     vmx_vmcs_exit(v);
 
@@ -1576,14 +1576,14 @@ void vmx_vcpu_disable_pml(struct vcpu *v)
 
     vmx_vmcs_enter(v);
 
-    v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
+    v->arch.hvm.vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
     __vmwrite(SECONDARY_VM_EXEC_CONTROL,
-              v->arch.hvm_vmx.secondary_exec_control);
+              v->arch.hvm.vmx.secondary_exec_control);
 
     vmx_vmcs_exit(v);
 
-    v->domain->arch.paging.free_page(v->domain, v->arch.hvm_vmx.pml_pg);
-    v->arch.hvm_vmx.pml_pg = NULL;
+    v->domain->arch.paging.free_page(v->domain, v->arch.hvm.vmx.pml_pg);
+    v->arch.hvm.vmx.pml_pg = NULL;
 }
 
 void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
@@ -1602,7 +1602,7 @@ void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
     if ( pml_idx == (NR_PML_ENTRIES - 1) )
         goto out;
 
-    pml_buf = __map_domain_page(v->arch.hvm_vmx.pml_pg);
+    pml_buf = __map_domain_page(v->arch.hvm.vmx.pml_pg);
 
     /*
      * PML index can be either 2^16-1 (buffer is full), or 0 ~ NR_PML_ENTRIES-1
@@ -1743,7 +1743,7 @@ void vmx_domain_update_eptp(struct domain *d)
 
 int vmx_create_vmcs(struct vcpu *v)
 {
-    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm.vmx;
     int rc;
 
     if ( (arch_vmx->vmcs_pa = vmx_alloc_vmcs()) == 0 )
@@ -1765,15 +1765,15 @@ int vmx_create_vmcs(struct vcpu *v)
 
 void vmx_destroy_vmcs(struct vcpu *v)
 {
-    struct vmx_vcpu *arch_vmx = &v->arch.hvm_vmx;
+    struct vmx_vcpu *arch_vmx = &v->arch.hvm.vmx;
 
     vmx_clear_vmcs(v);
 
     vmx_free_vmcs(arch_vmx->vmcs_pa);
 
-    free_xenheap_page(v->arch.hvm_vmx.host_msr_area);
-    free_xenheap_page(v->arch.hvm_vmx.msr_area);
-    free_xenheap_page(v->arch.hvm_vmx.msr_bitmap);
+    free_xenheap_page(v->arch.hvm.vmx.host_msr_area);
+    free_xenheap_page(v->arch.hvm.vmx.msr_area);
+    free_xenheap_page(v->arch.hvm.vmx.msr_bitmap);
 }
 
 void vmx_vmentry_failure(void)
@@ -1783,7 +1783,7 @@ void vmx_vmentry_failure(void)
 
     __vmread(VM_INSTRUCTION_ERROR, &error);
     gprintk(XENLOG_ERR, "VM%s error: %#lx\n",
-            curr->arch.hvm_vmx.launched ? "RESUME" : "LAUNCH", error);
+            curr->arch.hvm.vmx.launched ? "RESUME" : "LAUNCH", error);
 
     if ( error == VMX_INSN_INVALID_CONTROL_STATE ||
          error == VMX_INSN_INVALID_HOST_STATE )
@@ -1797,7 +1797,7 @@ void vmx_do_resume(struct vcpu *v)
     bool_t debug_state;
     unsigned long host_cr4;
 
-    if ( v->arch.hvm_vmx.active_cpu == smp_processor_id() )
+    if ( v->arch.hvm.vmx.active_cpu == smp_processor_id() )
         vmx_vmcs_reload(v);
     else
     {
@@ -1814,7 +1814,7 @@ void vmx_do_resume(struct vcpu *v)
         if ( has_arch_pdevs(v->domain) && !iommu_snoop
                 && !cpu_has_wbinvd_exiting )
         {
-            int cpu = v->arch.hvm_vmx.active_cpu;
+            int cpu = v->arch.hvm.vmx.active_cpu;
             if ( cpu != -1 )
                 flush_mask(cpumask_of(cpu), FLUSH_CACHE);
         }
@@ -1829,7 +1829,7 @@ void vmx_do_resume(struct vcpu *v)
          * VCPU migration. The environment of current VMCS is updated in place,
          * but the action of another VMCS is deferred till it is switched in.
          */
-        v->arch.hvm_vmx.hostenv_migrated = 1;
+        v->arch.hvm.vmx.hostenv_migrated = 1;
 
         hvm_asid_flush_vcpu(v);
     }
@@ -1925,7 +1925,7 @@ void vmcs_dump_vcpu(struct vcpu *v)
     printk("CR4: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n",
            cr4, vmr(CR4_READ_SHADOW), vmr(CR4_GUEST_HOST_MASK));
     printk("CR3 = 0x%016lx\n", vmr(GUEST_CR3));
-    if ( (v->arch.hvm_vmx.secondary_exec_control &
+    if ( (v->arch.hvm.vmx.secondary_exec_control &
           SECONDARY_EXEC_ENABLE_EPT) &&
          (cr4 & X86_CR4_PAE) && !(vmentry_ctl & VM_ENTRY_IA32E_MODE) )
     {
@@ -1965,7 +1965,7 @@ void vmcs_dump_vcpu(struct vcpu *v)
                vmr(GUEST_PERF_GLOBAL_CTRL), vmr(GUEST_BNDCFGS));
     printk("Interruptibility = %08x  ActivityState = %08x\n",
            vmr32(GUEST_INTERRUPTIBILITY_INFO), vmr32(GUEST_ACTIVITY_STATE));
-    if ( v->arch.hvm_vmx.secondary_exec_control &
+    if ( v->arch.hvm.vmx.secondary_exec_control &
          SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY )
         printk("InterruptStatus = %04x\n", vmr16(GUEST_INTR_STATUS));
 
@@ -2016,11 +2016,11 @@ void vmcs_dump_vcpu(struct vcpu *v)
            vmr32(IDT_VECTORING_INFO), vmr32(IDT_VECTORING_ERROR_CODE));
     printk("TSC Offset = 0x%016lx  TSC Multiplier = 0x%016lx\n",
            vmr(TSC_OFFSET), vmr(TSC_MULTIPLIER));
-    if ( (v->arch.hvm_vmx.exec_control & CPU_BASED_TPR_SHADOW) ||
+    if ( (v->arch.hvm.vmx.exec_control & CPU_BASED_TPR_SHADOW) ||
          (vmx_pin_based_exec_control & PIN_BASED_POSTED_INTERRUPT) )
         printk("TPR Threshold = 0x%02x  PostedIntrVec = 0x%02x\n",
                vmr32(TPR_THRESHOLD), vmr16(POSTED_INTR_NOTIFICATION_VECTOR));
-    if ( (v->arch.hvm_vmx.secondary_exec_control &
+    if ( (v->arch.hvm.vmx.secondary_exec_control &
           SECONDARY_EXEC_ENABLE_EPT) )
         printk("EPT pointer = 0x%016lx  EPTP index = 0x%04x\n",
                vmr(EPT_POINTER), vmr16(EPTP_INDEX));
@@ -2031,11 +2031,11 @@ void vmcs_dump_vcpu(struct vcpu *v)
                i + 1, vmr(CR3_TARGET_VALUE(i + 1)));
     if ( i < n )
         printk("CR3 target%u=%016lx\n", i, vmr(CR3_TARGET_VALUE(i)));
-    if ( v->arch.hvm_vmx.secondary_exec_control &
+    if ( v->arch.hvm.vmx.secondary_exec_control &
          SECONDARY_EXEC_PAUSE_LOOP_EXITING )
         printk("PLE Gap=%08x Window=%08x\n",
                vmr32(PLE_GAP), vmr32(PLE_WINDOW));
-    if ( v->arch.hvm_vmx.secondary_exec_control &
+    if ( v->arch.hvm.vmx.secondary_exec_control &
          (SECONDARY_EXEC_ENABLE_VPID | SECONDARY_EXEC_ENABLE_VM_FUNCTIONS) )
         printk("Virtual processor ID = 0x%04x VMfunc controls = %016lx\n",
                vmr16(VIRTUAL_PROCESSOR_ID), vmr(VM_FUNCTION_CONTROL));
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 95dec46..def40d6 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -104,20 +104,20 @@ static void vmx_vcpu_block(struct vcpu *v)
     spinlock_t *old_lock;
     spinlock_t *pi_blocking_list_lock =
 		&per_cpu(vmx_pi_blocking, v->processor).lock;
-    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
+    struct pi_desc *pi_desc = &v->arch.hvm.vmx.pi_desc;
 
     spin_lock_irqsave(pi_blocking_list_lock, flags);
-    old_lock = cmpxchg(&v->arch.hvm_vmx.pi_blocking.lock, NULL,
+    old_lock = cmpxchg(&v->arch.hvm.vmx.pi_blocking.lock, NULL,
                        pi_blocking_list_lock);
 
     /*
-     * 'v->arch.hvm_vmx.pi_blocking.lock' should be NULL before
+     * 'v->arch.hvm.vmx.pi_blocking.lock' should be NULL before
      * being assigned to a new value, since the vCPU is currently
      * running and it cannot be on any blocking list.
      */
     ASSERT(old_lock == NULL);
 
-    list_add_tail(&v->arch.hvm_vmx.pi_blocking.list,
+    list_add_tail(&v->arch.hvm.vmx.pi_blocking.list,
                   &per_cpu(vmx_pi_blocking, v->processor).list);
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
 
@@ -133,7 +133,7 @@ static void vmx_vcpu_block(struct vcpu *v)
 
 static void vmx_pi_switch_from(struct vcpu *v)
 {
-    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
+    struct pi_desc *pi_desc = &v->arch.hvm.vmx.pi_desc;
 
     if ( test_bit(_VPF_blocked, &v->pause_flags) )
         return;
@@ -143,7 +143,7 @@ static void vmx_pi_switch_from(struct vcpu *v)
 
 static void vmx_pi_switch_to(struct vcpu *v)
 {
-    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
+    struct pi_desc *pi_desc = &v->arch.hvm.vmx.pi_desc;
     unsigned int dest = cpu_physical_id(v->processor);
 
     write_atomic(&pi_desc->ndst,
@@ -156,7 +156,7 @@ static void vmx_pi_unblock_vcpu(struct vcpu *v)
 {
     unsigned long flags;
     spinlock_t *pi_blocking_list_lock;
-    struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
+    struct pi_desc *pi_desc = &v->arch.hvm.vmx.pi_desc;
 
     /*
      * Set 'NV' field back to posted_intr_vector, so the
@@ -165,7 +165,7 @@ static void vmx_pi_unblock_vcpu(struct vcpu *v)
      */
     write_atomic(&pi_desc->nv, posted_intr_vector);
 
-    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
+    pi_blocking_list_lock = v->arch.hvm.vmx.pi_blocking.lock;
 
     /* Prevent the compiler from eliminating the local variable.*/
     smp_rmb();
@@ -177,14 +177,14 @@ static void vmx_pi_unblock_vcpu(struct vcpu *v)
     spin_lock_irqsave(pi_blocking_list_lock, flags);
 
     /*
-     * v->arch.hvm_vmx.pi_blocking.lock == NULL here means the vCPU
+     * v->arch.hvm.vmx.pi_blocking.lock == NULL here means the vCPU
      * was removed from the blocking list while we are acquiring the lock.
      */
-    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
+    if ( v->arch.hvm.vmx.pi_blocking.lock != NULL )
     {
-        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
-        list_del(&v->arch.hvm_vmx.pi_blocking.list);
-        v->arch.hvm_vmx.pi_blocking.lock = NULL;
+        ASSERT(v->arch.hvm.vmx.pi_blocking.lock == pi_blocking_list_lock);
+        list_del(&v->arch.hvm.vmx.pi_blocking.list);
+        v->arch.hvm.vmx.pi_blocking.lock = NULL;
     }
 
     spin_unlock_irqrestore(pi_blocking_list_lock, flags);
@@ -233,7 +233,7 @@ void vmx_pi_desc_fixup(unsigned int cpu)
         {
             list_del(&vmx->pi_blocking.list);
             vmx->pi_blocking.lock = NULL;
-            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
+            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm.vmx));
         }
         else
         {
@@ -335,7 +335,7 @@ void vmx_pi_hooks_assign(struct domain *d)
     for_each_vcpu ( d, v )
     {
         unsigned int dest = cpu_physical_id(v->processor);
-        struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
+        struct pi_desc *pi_desc = &v->arch.hvm.vmx.pi_desc;
 
         /*
          * We don't need to update NDST if vmx_pi_switch_to()
@@ -424,9 +424,9 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 {
     int rc;
 
-    spin_lock_init(&v->arch.hvm_vmx.vmcs_lock);
+    spin_lock_init(&v->arch.hvm.vmx.vmcs_lock);
 
-    INIT_LIST_HEAD(&v->arch.hvm_vmx.pi_blocking.list);
+    INIT_LIST_HEAD(&v->arch.hvm.vmx.pi_blocking.list);
 
     if ( (rc = vmx_create_vmcs(v)) != 0 )
     {
@@ -498,15 +498,15 @@ static void vmx_save_guest_msrs(struct vcpu *v)
      * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
      * be updated at any time via SWAPGS, which we cannot trap.
      */
-    v->arch.hvm_vmx.shadow_gs = rdgsshadow();
+    v->arch.hvm.vmx.shadow_gs = rdgsshadow();
 }
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
 {
-    wrgsshadow(v->arch.hvm_vmx.shadow_gs);
-    wrmsrl(MSR_STAR,           v->arch.hvm_vmx.star);
-    wrmsrl(MSR_LSTAR,          v->arch.hvm_vmx.lstar);
-    wrmsrl(MSR_SYSCALL_MASK,   v->arch.hvm_vmx.sfmask);
+    wrgsshadow(v->arch.hvm.vmx.shadow_gs);
+    wrmsrl(MSR_STAR,           v->arch.hvm.vmx.star);
+    wrmsrl(MSR_LSTAR,          v->arch.hvm.vmx.lstar);
+    wrmsrl(MSR_SYSCALL_MASK,   v->arch.hvm.vmx.sfmask);
 
     if ( cpu_has_rdtscp )
         wrmsr_tsc_aux(hvm_msr_tsc_aux(v));
@@ -515,25 +515,25 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
 void vmx_update_cpu_exec_control(struct vcpu *v)
 {
     if ( nestedhvm_vcpu_in_guestmode(v) )
-        nvmx_update_exec_control(v, v->arch.hvm_vmx.exec_control);
+        nvmx_update_exec_control(v, v->arch.hvm.vmx.exec_control);
     else
-        __vmwrite(CPU_BASED_VM_EXEC_CONTROL, v->arch.hvm_vmx.exec_control);
+        __vmwrite(CPU_BASED_VM_EXEC_CONTROL, v->arch.hvm.vmx.exec_control);
 }
 
 void vmx_update_secondary_exec_control(struct vcpu *v)
 {
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_update_secondary_exec_control(v,
-            v->arch.hvm_vmx.secondary_exec_control);
+            v->arch.hvm.vmx.secondary_exec_control);
     else
         __vmwrite(SECONDARY_VM_EXEC_CONTROL,
-                  v->arch.hvm_vmx.secondary_exec_control);
+                  v->arch.hvm.vmx.secondary_exec_control);
 }
 
 void vmx_update_exception_bitmap(struct vcpu *v)
 {
-    u32 bitmap = unlikely(v->arch.hvm_vmx.vmx_realmode)
-        ? 0xffffffffu : v->arch.hvm_vmx.exception_bitmap;
+    u32 bitmap = unlikely(v->arch.hvm.vmx.vmx_realmode)
+        ? 0xffffffffu : v->arch.hvm.vmx.exception_bitmap;
 
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_update_exception_bitmap(v, bitmap);
@@ -547,9 +547,9 @@ static void vmx_cpuid_policy_changed(struct vcpu *v)
 
     if ( opt_hvm_fep ||
          (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
-        v->arch.hvm_vmx.exception_bitmap |= (1U << TRAP_invalid_op);
+        v->arch.hvm.vmx.exception_bitmap |= (1U << TRAP_invalid_op);
     else
-        v->arch.hvm_vmx.exception_bitmap &= ~(1U << TRAP_invalid_op);
+        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_invalid_op);
 
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
@@ -599,7 +599,7 @@ static void vmx_save_dr(struct vcpu *v)
 
     /* Clear the DR dirty flag and re-enable intercepts for DR accesses. */
     v->arch.hvm.flag_dr_dirty = 0;
-    v->arch.hvm_vmx.exec_control |= CPU_BASED_MOV_DR_EXITING;
+    v->arch.hvm.vmx.exec_control |= CPU_BASED_MOV_DR_EXITING;
     vmx_update_cpu_exec_control(v);
 
     v->arch.debugreg[0] = read_debugreg(0);
@@ -768,21 +768,21 @@ static int vmx_vmcs_restore(struct vcpu *v, struct hvm_hw_cpu *c)
 
 static void vmx_save_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
-    data->shadow_gs        = v->arch.hvm_vmx.shadow_gs;
+    data->shadow_gs        = v->arch.hvm.vmx.shadow_gs;
     data->msr_flags        = 0;
-    data->msr_lstar        = v->arch.hvm_vmx.lstar;
-    data->msr_star         = v->arch.hvm_vmx.star;
-    data->msr_cstar        = v->arch.hvm_vmx.cstar;
-    data->msr_syscall_mask = v->arch.hvm_vmx.sfmask;
+    data->msr_lstar        = v->arch.hvm.vmx.lstar;
+    data->msr_star         = v->arch.hvm.vmx.star;
+    data->msr_cstar        = v->arch.hvm.vmx.cstar;
+    data->msr_syscall_mask = v->arch.hvm.vmx.sfmask;
 }
 
 static void vmx_load_cpu_state(struct vcpu *v, struct hvm_hw_cpu *data)
 {
-    v->arch.hvm_vmx.shadow_gs = data->shadow_gs;
-    v->arch.hvm_vmx.star      = data->msr_star;
-    v->arch.hvm_vmx.lstar     = data->msr_lstar;
-    v->arch.hvm_vmx.cstar     = data->msr_cstar;
-    v->arch.hvm_vmx.sfmask    = data->msr_syscall_mask;
+    v->arch.hvm.vmx.shadow_gs = data->shadow_gs;
+    v->arch.hvm.vmx.star      = data->msr_star;
+    v->arch.hvm.vmx.lstar     = data->msr_lstar;
+    v->arch.hvm.vmx.cstar     = data->msr_cstar;
+    v->arch.hvm.vmx.sfmask    = data->msr_syscall_mask;
 }
 
 
@@ -874,10 +874,10 @@ static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
 static void vmx_fpu_enter(struct vcpu *v)
 {
     vcpu_restore_fpu_lazy(v);
-    v->arch.hvm_vmx.exception_bitmap &= ~(1u << TRAP_no_device);
+    v->arch.hvm.vmx.exception_bitmap &= ~(1u << TRAP_no_device);
     vmx_update_exception_bitmap(v);
-    v->arch.hvm_vmx.host_cr0 &= ~X86_CR0_TS;
-    __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
+    v->arch.hvm.vmx.host_cr0 &= ~X86_CR0_TS;
+    __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
 }
 
 static void vmx_fpu_leave(struct vcpu *v)
@@ -885,10 +885,10 @@ static void vmx_fpu_leave(struct vcpu *v)
     ASSERT(!v->fpu_dirtied);
     ASSERT(read_cr0() & X86_CR0_TS);
 
-    if ( !(v->arch.hvm_vmx.host_cr0 & X86_CR0_TS) )
+    if ( !(v->arch.hvm.vmx.host_cr0 & X86_CR0_TS) )
     {
-        v->arch.hvm_vmx.host_cr0 |= X86_CR0_TS;
-        __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
+        v->arch.hvm.vmx.host_cr0 |= X86_CR0_TS;
+        __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
     }
 
     /*
@@ -901,7 +901,7 @@ static void vmx_fpu_leave(struct vcpu *v)
     {
         v->arch.hvm.hw_cr[0] |= X86_CR0_TS;
         __vmwrite(GUEST_CR0, v->arch.hvm.hw_cr[0]);
-        v->arch.hvm_vmx.exception_bitmap |= (1u << TRAP_no_device);
+        v->arch.hvm.vmx.exception_bitmap |= (1u << TRAP_no_device);
         vmx_update_exception_bitmap(v);
     }
 }
@@ -1056,10 +1056,10 @@ static void vmx_get_segment_register(struct vcpu *v, enum x86_segment seg,
         (!(attr & (1u << 16)) << 7) | (attr & 0x7f) | ((attr >> 4) & 0xf00);
 
     /* Adjust for virtual 8086 mode */
-    if ( v->arch.hvm_vmx.vmx_realmode && seg <= x86_seg_tr 
-         && !(v->arch.hvm_vmx.vm86_segment_mask & (1u << seg)) )
+    if ( v->arch.hvm.vmx.vmx_realmode && seg <= x86_seg_tr
+         && !(v->arch.hvm.vmx.vm86_segment_mask & (1u << seg)) )
     {
-        struct segment_register *sreg = &v->arch.hvm_vmx.vm86_saved_seg[seg];
+        struct segment_register *sreg = &v->arch.hvm.vmx.vm86_saved_seg[seg];
         if ( seg == x86_seg_tr ) 
             *reg = *sreg;
         else if ( reg->base != sreg->base || seg == x86_seg_ss )
@@ -1096,10 +1096,10 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
     base = reg->base;
 
     /* Adjust CS/SS/DS/ES/FS/GS/TR for virtual 8086 mode */
-    if ( v->arch.hvm_vmx.vmx_realmode && seg <= x86_seg_tr )
+    if ( v->arch.hvm.vmx.vmx_realmode && seg <= x86_seg_tr )
     {
         /* Remember the proper contents */
-        v->arch.hvm_vmx.vm86_saved_seg[seg] = *reg;
+        v->arch.hvm.vmx.vm86_saved_seg[seg] = *reg;
         
         if ( seg == x86_seg_tr ) 
         {
@@ -1118,10 +1118,10 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
                     cmpxchg(&d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED],
                             val, val & ~VM86_TSS_UPDATED);
                 }
-                v->arch.hvm_vmx.vm86_segment_mask &= ~(1u << seg);
+                v->arch.hvm.vmx.vm86_segment_mask &= ~(1u << seg);
             }
             else
-                v->arch.hvm_vmx.vm86_segment_mask |= (1u << seg);
+                v->arch.hvm.vmx.vm86_segment_mask |= (1u << seg);
         }
         else
         {
@@ -1134,10 +1134,10 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
                 sel = base >> 4;
                 attr = vm86_ds_attr;
                 limit = 0xffff;
-                v->arch.hvm_vmx.vm86_segment_mask &= ~(1u << seg);
+                v->arch.hvm.vmx.vm86_segment_mask &= ~(1u << seg);
             }
             else 
-                v->arch.hvm_vmx.vm86_segment_mask |= (1u << seg);
+                v->arch.hvm.vmx.vm86_segment_mask |= (1u << seg);
         }
     }
 
@@ -1186,7 +1186,7 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
 
 static unsigned long vmx_get_shadow_gs_base(struct vcpu *v)
 {
-    return v->arch.hvm_vmx.shadow_gs;
+    return v->arch.hvm.vmx.shadow_gs;
 }
 
 static int vmx_set_guest_pat(struct vcpu *v, u64 gpat)
@@ -1309,9 +1309,9 @@ static void vmx_set_tsc_offset(struct vcpu *v, u64 offset, u64 at_tsc)
 static void vmx_set_rdtsc_exiting(struct vcpu *v, bool_t enable)
 {
     vmx_vmcs_enter(v);
-    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_RDTSC_EXITING;
+    v->arch.hvm.vmx.exec_control &= ~CPU_BASED_RDTSC_EXITING;
     if ( enable )
-        v->arch.hvm_vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
+        v->arch.hvm.vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
     vmx_update_cpu_exec_control(v);
     vmx_vmcs_exit(v);
 }
@@ -1319,10 +1319,10 @@ static void vmx_set_rdtsc_exiting(struct vcpu *v, bool_t enable)
 static void vmx_set_descriptor_access_exiting(struct vcpu *v, bool enable)
 {
     if ( enable )
-        v->arch.hvm_vmx.secondary_exec_control |=
+        v->arch.hvm.vmx.secondary_exec_control |=
             SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
     else
-        v->arch.hvm_vmx.secondary_exec_control &=
+        v->arch.hvm.vmx.secondary_exec_control &=
             ~SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITING;
 
     vmx_vmcs_enter(v);
@@ -1431,9 +1431,9 @@ static void vmx_update_host_cr3(struct vcpu *v)
 void vmx_update_debug_state(struct vcpu *v)
 {
     if ( v->arch.hvm.debug_state_latch )
-        v->arch.hvm_vmx.exception_bitmap |= 1U << TRAP_int3;
+        v->arch.hvm.vmx.exception_bitmap |= 1U << TRAP_int3;
     else
-        v->arch.hvm_vmx.exception_bitmap &= ~(1U << TRAP_int3);
+        v->arch.hvm.vmx.exception_bitmap &= ~(1U << TRAP_int3);
 
     vmx_vmcs_enter(v);
     vmx_update_exception_bitmap(v);
@@ -1461,20 +1461,20 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
         if ( paging_mode_hap(v->domain) )
         {
             /* Manage GUEST_CR3 when CR0.PE=0. */
-            uint32_t old_ctls = v->arch.hvm_vmx.exec_control;
+            uint32_t old_ctls = v->arch.hvm.vmx.exec_control;
             uint32_t cr3_ctls = (CPU_BASED_CR3_LOAD_EXITING |
                                  CPU_BASED_CR3_STORE_EXITING);
 
-            v->arch.hvm_vmx.exec_control &= ~cr3_ctls;
+            v->arch.hvm.vmx.exec_control &= ~cr3_ctls;
             if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
-                v->arch.hvm_vmx.exec_control |= cr3_ctls;
+                v->arch.hvm.vmx.exec_control |= cr3_ctls;
 
             /* Trap CR3 updates if CR3 memory events are enabled. */
             if ( v->domain->arch.monitor.write_ctrlreg_enabled &
                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR3) )
-                v->arch.hvm_vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
+                v->arch.hvm.vmx.exec_control |= CPU_BASED_CR3_LOAD_EXITING;
 
-            if ( old_ctls != v->arch.hvm_vmx.exec_control )
+            if ( old_ctls != v->arch.hvm.vmx.exec_control )
                 vmx_update_cpu_exec_control(v);
         }
 
@@ -1497,7 +1497,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
         realmode = !(v->arch.hvm.guest_cr[0] & X86_CR0_PE);
 
         if ( !vmx_unrestricted_guest(v) &&
-             (realmode != v->arch.hvm_vmx.vmx_realmode) )
+             (realmode != v->arch.hvm.vmx.vmx_realmode) )
         {
             enum x86_segment s;
             struct segment_register reg[x86_seg_tr + 1];
@@ -1509,7 +1509,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
              * the saved values we'll use when returning to prot mode. */
             for ( s = 0; s < ARRAY_SIZE(reg); s++ )
                 hvm_get_segment_register(v, s, &reg[s]);
-            v->arch.hvm_vmx.vmx_realmode = realmode;
+            v->arch.hvm.vmx.vmx_realmode = realmode;
 
             if ( realmode )
             {
@@ -1519,9 +1519,9 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
             else
             {
                 for ( s = 0; s < ARRAY_SIZE(reg); s++ )
-                    if ( !(v->arch.hvm_vmx.vm86_segment_mask & (1<<s)) )
+                    if ( !(v->arch.hvm.vmx.vm86_segment_mask & (1<<s)) )
                         hvm_set_segment_register(
-                            v, s, &v->arch.hvm_vmx.vm86_saved_seg[s]);
+                            v, s, &v->arch.hvm.vmx.vm86_saved_seg[s]);
             }
 
             vmx_update_exception_bitmap(v);
@@ -1543,7 +1543,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
             nvmx_set_cr_read_shadow(v, 4);
 
         v->arch.hvm.hw_cr[4] |= v->arch.hvm.guest_cr[4];
-        if ( v->arch.hvm_vmx.vmx_realmode )
+        if ( v->arch.hvm.vmx.vmx_realmode )
             v->arch.hvm.hw_cr[4] |= X86_CR4_VME;
 
         if ( !hvm_paging_enabled(v) )
@@ -1592,27 +1592,27 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
              * Update CR4 host mask to only trap when the guest tries to set
              * bits that are controlled by the hypervisor.
              */
-            v->arch.hvm_vmx.cr4_host_mask =
+            v->arch.hvm.vmx.cr4_host_mask =
                 (HVM_CR4_HOST_MASK | X86_CR4_PKE |
                  ~hvm_cr4_guest_valid_bits(v->domain, false));
 
-            v->arch.hvm_vmx.cr4_host_mask |= v->arch.hvm_vmx.vmx_realmode ?
+            v->arch.hvm.vmx.cr4_host_mask |= v->arch.hvm.vmx.vmx_realmode ?
                                              X86_CR4_VME : 0;
-            v->arch.hvm_vmx.cr4_host_mask |= !hvm_paging_enabled(v) ?
+            v->arch.hvm.vmx.cr4_host_mask |= !hvm_paging_enabled(v) ?
                                              (X86_CR4_PSE | X86_CR4_SMEP |
                                               X86_CR4_SMAP)
                                              : 0;
             if ( v->domain->arch.monitor.write_ctrlreg_enabled &
                  monitor_ctrlreg_bitmask(VM_EVENT_X86_CR4) )
-                v->arch.hvm_vmx.cr4_host_mask |=
+                v->arch.hvm.vmx.cr4_host_mask |=
                 ~v->domain->arch.monitor.write_ctrlreg_mask[VM_EVENT_X86_CR4];
 
             if ( nestedhvm_vcpu_in_guestmode(v) )
                 /* Add the nested host mask to get the more restrictive one. */
-                v->arch.hvm_vmx.cr4_host_mask |= get_vvmcs(v,
+                v->arch.hvm.vmx.cr4_host_mask |= get_vvmcs(v,
                                                            CR4_GUEST_HOST_MASK);
 
-            __vmwrite(CR4_GUEST_HOST_MASK, v->arch.hvm_vmx.cr4_host_mask);
+            __vmwrite(CR4_GUEST_HOST_MASK, v->arch.hvm.vmx.cr4_host_mask);
         }
 
         break;
@@ -1773,8 +1773,8 @@ static void __vmx_inject_exception(int trap, int type, int error_code)
 
     /* Can't inject exceptions in virtual 8086 mode because they would 
      * use the protected-mode IDT.  Emulate at the next vmenter instead. */
-    if ( curr->arch.hvm_vmx.vmx_realmode ) 
-        curr->arch.hvm_vmx.vmx_emulate = 1;
+    if ( curr->arch.hvm.vmx.vmx_realmode )
+        curr->arch.hvm.vmx.vmx_emulate = 1;
 }
 
 void vmx_inject_extint(int trap, uint8_t source)
@@ -1988,10 +1988,10 @@ static void vmx_process_isr(int isr, struct vcpu *v)
     for ( i = 0x10; i < NR_VECTORS; ++i )
         if ( vlapic_test_vector(i, &vlapic->regs->data[APIC_IRR]) ||
              vlapic_test_vector(i, &vlapic->regs->data[APIC_ISR]) )
-            set_bit(i, v->arch.hvm_vmx.eoi_exit_bitmap);
+            set_bit(i, v->arch.hvm.vmx.eoi_exit_bitmap);
 
-    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm_vmx.eoi_exit_bitmap); ++i )
-        __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm_vmx.eoi_exit_bitmap[i]);
+    for ( i = 0; i < ARRAY_SIZE(v->arch.hvm.vmx.eoi_exit_bitmap); ++i )
+        __vmwrite(EOI_EXIT_BITMAP(i), v->arch.hvm.vmx.eoi_exit_bitmap[i]);
 
     vmx_vmcs_exit(v);
 }
@@ -2053,23 +2053,23 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
 
 static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
 {
-    if ( pi_test_and_set_pir(vector, &v->arch.hvm_vmx.pi_desc) )
+    if ( pi_test_and_set_pir(vector, &v->arch.hvm.vmx.pi_desc) )
         return;
 
-    if ( unlikely(v->arch.hvm_vmx.eoi_exitmap_changed) )
+    if ( unlikely(v->arch.hvm.vmx.eoi_exitmap_changed) )
     {
         /*
          * If EOI exitbitmap needs to changed or notification vector
          * can't be allocated, interrupt will not be injected till
          * VMEntry as it used to be.
          */
-        pi_set_on(&v->arch.hvm_vmx.pi_desc);
+        pi_set_on(&v->arch.hvm.vmx.pi_desc);
     }
     else
     {
         struct pi_desc old, new, prev;
 
-        prev.control = v->arch.hvm_vmx.pi_desc.control;
+        prev.control = v->arch.hvm.vmx.pi_desc.control;
 
         do {
             /*
@@ -2085,12 +2085,12 @@ static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
                 return;
             }
 
-            old.control = v->arch.hvm_vmx.pi_desc.control &
+            old.control = v->arch.hvm.vmx.pi_desc.control &
                           ~((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN));
-            new.control = v->arch.hvm_vmx.pi_desc.control |
+            new.control = v->arch.hvm.vmx.pi_desc.control |
                           (1 << POSTED_INTR_ON);
 
-            prev.control = cmpxchg(&v->arch.hvm_vmx.pi_desc.control,
+            prev.control = cmpxchg(&v->arch.hvm.vmx.pi_desc.control,
                                    old.control, new.control);
         } while ( prev.control != old.control );
 
@@ -2107,11 +2107,11 @@ static void vmx_sync_pir_to_irr(struct vcpu *v)
     unsigned int group, i;
     DECLARE_BITMAP(pending_intr, NR_VECTORS);
 
-    if ( !pi_test_and_clear_on(&v->arch.hvm_vmx.pi_desc) )
+    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
         return;
 
     for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
-        pending_intr[group] = pi_get_pir(&v->arch.hvm_vmx.pi_desc, group);
+        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
 
     for_each_set_bit(i, pending_intr, NR_VECTORS)
         vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
@@ -2119,7 +2119,7 @@ static void vmx_sync_pir_to_irr(struct vcpu *v)
 
 static bool vmx_test_pir(const struct vcpu *v, uint8_t vec)
 {
-    return pi_test_pir(vec, &v->arch.hvm_vmx.pi_desc);
+    return pi_test_pir(vec, &v->arch.hvm.vmx.pi_desc);
 }
 
 static void vmx_handle_eoi(u8 vector)
@@ -2163,7 +2163,7 @@ static void vmx_vcpu_update_eptp(struct vcpu *v)
 
     __vmwrite(EPT_POINTER, ept->eptp);
 
-    if ( v->arch.hvm_vmx.secondary_exec_control &
+    if ( v->arch.hvm.vmx.secondary_exec_control &
          SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS )
         __vmwrite(EPTP_INDEX, vcpu_altp2m(v).p2midx);
 
@@ -2185,7 +2185,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
 
     if ( !d->is_dying && altp2m_active(d) )
     {
-        v->arch.hvm_vmx.secondary_exec_control |= mask;
+        v->arch.hvm.vmx.secondary_exec_control |= mask;
         __vmwrite(VM_FUNCTION_CONTROL, VMX_VMFUNC_EPTP_SWITCHING);
         __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m_eptp));
 
@@ -2206,12 +2206,12 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
                 __vmwrite(EPTP_INDEX, vcpu_altp2m(v).p2midx);
             }
             else
-                v->arch.hvm_vmx.secondary_exec_control &=
+                v->arch.hvm.vmx.secondary_exec_control &=
                     ~SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS;
         }
     }
     else
-        v->arch.hvm_vmx.secondary_exec_control &= ~mask;
+        v->arch.hvm.vmx.secondary_exec_control &= ~mask;
 
     vmx_update_secondary_exec_control(v);
     vmx_vmcs_exit(v);
@@ -2378,7 +2378,7 @@ static void pi_wakeup_interrupt(struct cpu_user_regs *regs)
             list_del(&vmx->pi_blocking.list);
             ASSERT(vmx->pi_blocking.lock == lock);
             vmx->pi_blocking.lock = NULL;
-            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm_vmx));
+            vcpu_unblock(container_of(vmx, struct vcpu, arch.hvm.vmx));
         }
     }
 
@@ -2590,7 +2590,7 @@ static void vmx_dr_access(unsigned long exit_qualification,
         __restore_debug_registers(v);
 
     /* Allow guest direct access to DR registers */
-    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
+    v->arch.hvm.vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
     vmx_update_cpu_exec_control(v);
 }
 
@@ -2905,19 +2905,19 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     case MSR_STAR:
-        *msr_content = curr->arch.hvm_vmx.star;
+        *msr_content = curr->arch.hvm.vmx.star;
         break;
 
     case MSR_LSTAR:
-        *msr_content = curr->arch.hvm_vmx.lstar;
+        *msr_content = curr->arch.hvm.vmx.lstar;
         break;
 
     case MSR_CSTAR:
-        *msr_content = curr->arch.hvm_vmx.cstar;
+        *msr_content = curr->arch.hvm.vmx.cstar;
         break;
 
     case MSR_SYSCALL_MASK:
-        *msr_content = curr->arch.hvm_vmx.sfmask;
+        *msr_content = curr->arch.hvm.vmx.sfmask;
         break;
 
     case MSR_IA32_DEBUGCTLMSR:
@@ -3046,7 +3046,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
         return;
 
     vmx_vmcs_enter(v);
-    v->arch.hvm_vmx.secondary_exec_control &=
+    v->arch.hvm.vmx.secondary_exec_control &=
         ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
           SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE);
     if ( !vlapic_hw_disabled(vlapic) &&
@@ -3054,7 +3054,7 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
     {
         if ( virtualize_x2apic_mode && vlapic_x2apic_mode(vlapic) )
         {
-            v->arch.hvm_vmx.secondary_exec_control |=
+            v->arch.hvm.vmx.secondary_exec_control |=
                 SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE;
             if ( cpu_has_vmx_apic_reg_virt )
             {
@@ -3074,10 +3074,10 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
             }
         }
         else
-            v->arch.hvm_vmx.secondary_exec_control |=
+            v->arch.hvm.vmx.secondary_exec_control |=
                 SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
     }
-    if ( !(v->arch.hvm_vmx.secondary_exec_control &
+    if ( !(v->arch.hvm.vmx.secondary_exec_control &
            SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE) )
         for ( msr = MSR_IA32_APICBASE_MSR;
               msr <= MSR_IA32_APICBASE_MSR + 0xff; msr++ )
@@ -3128,25 +3128,25 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     case MSR_STAR:
-        v->arch.hvm_vmx.star = msr_content;
+        v->arch.hvm.vmx.star = msr_content;
         wrmsrl(MSR_STAR, msr_content);
         break;
 
     case MSR_LSTAR:
         if ( !is_canonical_address(msr_content) )
             goto gp_fault;
-        v->arch.hvm_vmx.lstar = msr_content;
+        v->arch.hvm.vmx.lstar = msr_content;
         wrmsrl(MSR_LSTAR, msr_content);
         break;
 
     case MSR_CSTAR:
         if ( !is_canonical_address(msr_content) )
             goto gp_fault;
-        v->arch.hvm_vmx.cstar = msr_content;
+        v->arch.hvm.vmx.cstar = msr_content;
         break;
 
     case MSR_SYSCALL_MASK:
-        v->arch.hvm_vmx.sfmask = msr_content;
+        v->arch.hvm.vmx.sfmask = msr_content;
         wrmsrl(MSR_SYSCALL_MASK, msr_content);
         break;
 
@@ -3187,7 +3187,7 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
          * the guest won't execute correctly either.  Simply crash the domain
          * to make the failure obvious.
          */
-        if ( !(v->arch.hvm_vmx.lbr_flags & LBR_MSRS_INSERTED) &&
+        if ( !(v->arch.hvm.vmx.lbr_flags & LBR_MSRS_INSERTED) &&
              (msr_content & IA32_DEBUGCTLMSR_LBR) )
         {
             const struct lbr_info *lbr = last_branch_msr_get();
@@ -3219,11 +3219,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
                 }
             }
 
-            v->arch.hvm_vmx.lbr_flags |= LBR_MSRS_INSERTED;
+            v->arch.hvm.vmx.lbr_flags |= LBR_MSRS_INSERTED;
             if ( lbr_tsx_fixup_needed )
-                v->arch.hvm_vmx.lbr_flags |= LBR_FIXUP_TSX;
+                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_TSX;
             if ( bdw_erratum_bdf14_fixup_needed )
-                v->arch.hvm_vmx.lbr_flags |= LBR_FIXUP_BDF14;
+                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_BDF14;
         }
 
         __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
@@ -3419,7 +3419,7 @@ static void vmx_failed_vmentry(unsigned int exit_reason,
             printk("  Entry out of range\n");
         else
         {
-            msr = &curr->arch.hvm_vmx.msr_area[idx];
+            msr = &curr->arch.hvm.vmx.msr_area[idx];
 
             printk("  msr %08x val %016"PRIx64" (mbz %#x)\n",
                    msr->index, msr->data, msr->mbz);
@@ -3452,7 +3452,7 @@ void vmx_enter_realmode(struct cpu_user_regs *regs)
     /* Adjust RFLAGS to enter virtual 8086 mode with IOPL == 3.  Since
      * we have CR4.VME == 1 and our own TSS with an empty interrupt
      * redirection bitmap, all software INTs will be handled by vm86 */
-    v->arch.hvm_vmx.vm86_saved_eflags = regs->eflags;
+    v->arch.hvm.vmx.vm86_saved_eflags = regs->eflags;
     regs->eflags |= (X86_EFLAGS_VM | X86_EFLAGS_IOPL);
 }
 
@@ -3618,9 +3618,9 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
          * values to match.
          */
         __vmread(GUEST_CR4, &v->arch.hvm.hw_cr[4]);
-        v->arch.hvm.guest_cr[4] &= v->arch.hvm_vmx.cr4_host_mask;
+        v->arch.hvm.guest_cr[4] &= v->arch.hvm.vmx.cr4_host_mask;
         v->arch.hvm.guest_cr[4] |= (v->arch.hvm.hw_cr[4] &
-                                    ~v->arch.hvm_vmx.cr4_host_mask);
+                                    ~v->arch.hvm.vmx.cr4_host_mask);
 
         __vmread(GUEST_CR3, &v->arch.hvm.hw_cr[3]);
         if ( vmx_unrestricted_guest(v) || hvm_paging_enabled(v) )
@@ -3671,12 +3671,12 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
      * figure out whether it has done so and update the altp2m data.
      */
     if ( altp2m_active(v->domain) &&
-        (v->arch.hvm_vmx.secondary_exec_control &
+        (v->arch.hvm.vmx.secondary_exec_control &
         SECONDARY_EXEC_ENABLE_VM_FUNCTIONS) )
     {
         unsigned long idx;
 
-        if ( v->arch.hvm_vmx.secondary_exec_control &
+        if ( v->arch.hvm.vmx.secondary_exec_control &
             SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS )
             __vmread(EPTP_INDEX, &idx);
         else
@@ -3718,11 +3718,11 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
     if ( unlikely(exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY) )
         return vmx_failed_vmentry(exit_reason, regs);
 
-    if ( v->arch.hvm_vmx.vmx_realmode )
+    if ( v->arch.hvm.vmx.vmx_realmode )
     {
         /* Put RFLAGS back the way the guest wants it */
         regs->eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IOPL);
-        regs->eflags |= (v->arch.hvm_vmx.vm86_saved_eflags & X86_EFLAGS_IOPL);
+        regs->eflags |= (v->arch.hvm.vmx.vm86_saved_eflags & X86_EFLAGS_IOPL);
 
         /* Unless this exit was for an interrupt, we've hit something
          * vm86 can't handle.  Try again, using the emulator. */
@@ -3735,7 +3735,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             {
         default:
                 perfc_incr(realmode_exits);
-                v->arch.hvm_vmx.vmx_emulate = 1;
+                v->arch.hvm.vmx.vmx_emulate = 1;
                 HVMTRACE_0D(REALMODE_EMULATE);
                 return;
             }
@@ -3911,12 +3911,12 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         break;
     case EXIT_REASON_PENDING_VIRT_INTR:
         /* Disable the interrupt window. */
-        v->arch.hvm_vmx.exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING;
+        v->arch.hvm.vmx.exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING;
         vmx_update_cpu_exec_control(v);
         break;
     case EXIT_REASON_PENDING_VIRT_NMI:
         /* Disable the NMI window. */
-        v->arch.hvm_vmx.exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
+        v->arch.hvm.vmx.exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
         vmx_update_cpu_exec_control(v);
         break;
     case EXIT_REASON_TASK_SWITCH: {
@@ -4165,7 +4165,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
     }
 
     case EXIT_REASON_MONITOR_TRAP_FLAG:
-        v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
+        v->arch.hvm.vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
         if ( v->arch.hvm.single_step )
         {
@@ -4265,8 +4265,8 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 static void lbr_tsx_fixup(void)
 {
     struct vcpu *curr = current;
-    unsigned int msr_count = curr->arch.hvm_vmx.msr_save_count;
-    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+    unsigned int msr_count = curr->arch.hvm.vmx.msr_save_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm.vmx.msr_area;
     struct vmx_msr_entry *msr;
 
     if ( (msr = vmx_find_msr(curr, lbr_from_start, VMX_MSR_GUEST)) != NULL )
@@ -4312,9 +4312,9 @@ static void lbr_fixup(void)
 {
     struct vcpu *curr = current;
 
-    if ( curr->arch.hvm_vmx.lbr_flags & LBR_FIXUP_TSX )
+    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_TSX )
         lbr_tsx_fixup();
-    if ( curr->arch.hvm_vmx.lbr_flags & LBR_FIXUP_BDF14 )
+    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_BDF14 )
         bdw_erratum_bdf14_fixup();
 }
 
@@ -4350,14 +4350,14 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
         if ( !old_asid && new_asid )
         {
             /* VPID was disabled: now enabled. */
-            curr->arch.hvm_vmx.secondary_exec_control |=
+            curr->arch.hvm.vmx.secondary_exec_control |=
                 SECONDARY_EXEC_ENABLE_VPID;
             vmx_update_secondary_exec_control(curr);
         }
         else if ( old_asid && !new_asid )
         {
             /* VPID was enabled: now disabled. */
-            curr->arch.hvm_vmx.secondary_exec_control &=
+            curr->arch.hvm.vmx.secondary_exec_control &=
                 ~SECONDARY_EXEC_ENABLE_VPID;
             vmx_update_secondary_exec_control(curr);
         }
@@ -4382,7 +4382,7 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
     }
 
  out:
-    if ( unlikely(curr->arch.hvm_vmx.lbr_flags & LBR_FIXUP_MASK) )
+    if ( unlikely(curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_MASK) )
         lbr_fixup();
 
     HVMTRACE_ND(VMENTRY, 0, 1/*cycles*/, 0, 0, 0, 0, 0, 0, 0);
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 5cdea47..0e45db8 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -82,7 +82,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
             gdprintk(XENLOG_ERR, "nest: allocation for vmread bitmap failed\n");
             return -ENOMEM;
         }
-        v->arch.hvm_vmx.vmread_bitmap = vmread_bitmap;
+        v->arch.hvm.vmx.vmread_bitmap = vmread_bitmap;
 
         clear_domain_page(page_to_mfn(vmread_bitmap));
 
@@ -92,7 +92,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
             gdprintk(XENLOG_ERR, "nest: allocation for vmwrite bitmap failed\n");
             return -ENOMEM;
         }
-        v->arch.hvm_vmx.vmwrite_bitmap = vmwrite_bitmap;
+        v->arch.hvm.vmx.vmwrite_bitmap = vmwrite_bitmap;
 
         vw = __map_domain_page(vmwrite_bitmap);
         clear_page(vw);
@@ -138,7 +138,7 @@ void nvmx_vcpu_destroy(struct vcpu *v)
      * leak of L1 VMCS page.
      */
     if ( nvcpu->nv_n1vmcx_pa )
-        v->arch.hvm_vmx.vmcs_pa = nvcpu->nv_n1vmcx_pa;
+        v->arch.hvm.vmx.vmcs_pa = nvcpu->nv_n1vmcx_pa;
 
     if ( nvcpu->nv_n2vmcx_pa )
     {
@@ -155,15 +155,15 @@ void nvmx_vcpu_destroy(struct vcpu *v)
             xfree(item);
         }
 
-    if ( v->arch.hvm_vmx.vmread_bitmap )
+    if ( v->arch.hvm.vmx.vmread_bitmap )
     {
-        free_domheap_page(v->arch.hvm_vmx.vmread_bitmap);
-        v->arch.hvm_vmx.vmread_bitmap = NULL;
+        free_domheap_page(v->arch.hvm.vmx.vmread_bitmap);
+        v->arch.hvm.vmx.vmread_bitmap = NULL;
     }
-    if ( v->arch.hvm_vmx.vmwrite_bitmap )
+    if ( v->arch.hvm.vmx.vmwrite_bitmap )
     {
-        free_domheap_page(v->arch.hvm_vmx.vmwrite_bitmap);
-        v->arch.hvm_vmx.vmwrite_bitmap = NULL;
+        free_domheap_page(v->arch.hvm.vmx.vmwrite_bitmap);
+        v->arch.hvm.vmx.vmwrite_bitmap = NULL;
     }
 }
  
@@ -809,7 +809,7 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
         hvm_unmap_guest_frame(nvcpu->nv_vvmcx, 1);
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = INVALID_PADDR;
-    v->arch.hvm_vmx.vmcs_shadow_maddr = 0;
+    v->arch.hvm.vmx.vmcs_shadow_maddr = 0;
     for (i=0; i<2; i++) {
         if ( nvmx->iobitmap[i] ) {
             hvm_unmap_guest_frame(nvmx->iobitmap[i], 1);
@@ -1101,8 +1101,8 @@ static void load_shadow_guest_state(struct vcpu *v)
                      (get_vvmcs(v, CR4_READ_SHADOW) & cr_gh_mask);
     __vmwrite(CR4_READ_SHADOW, cr_read_shadow);
     /* Add the nested host mask to the one set by vmx_update_guest_cr. */
-    v->arch.hvm_vmx.cr4_host_mask |= cr_gh_mask;
-    __vmwrite(CR4_GUEST_HOST_MASK, v->arch.hvm_vmx.cr4_host_mask);
+    v->arch.hvm.vmx.cr4_host_mask |= cr_gh_mask;
+    __vmwrite(CR4_GUEST_HOST_MASK, v->arch.hvm.vmx.cr4_host_mask);
 
     /* TODO: CR3 target control */
 }
@@ -1133,18 +1133,18 @@ static bool_t nvmx_vpid_enabled(const struct vcpu *v)
 
 static void nvmx_set_vmcs_pointer(struct vcpu *v, struct vmcs_struct *vvmcs)
 {
-    paddr_t vvmcs_maddr = v->arch.hvm_vmx.vmcs_shadow_maddr;
+    paddr_t vvmcs_maddr = v->arch.hvm.vmx.vmcs_shadow_maddr;
 
     __vmpclear(vvmcs_maddr);
     vvmcs->vmcs_revision_id |= VMCS_RID_TYPE_MASK;
     __vmwrite(VMCS_LINK_POINTER, vvmcs_maddr);
-    __vmwrite(VMREAD_BITMAP, page_to_maddr(v->arch.hvm_vmx.vmread_bitmap));
-    __vmwrite(VMWRITE_BITMAP, page_to_maddr(v->arch.hvm_vmx.vmwrite_bitmap));
+    __vmwrite(VMREAD_BITMAP, page_to_maddr(v->arch.hvm.vmx.vmread_bitmap));
+    __vmwrite(VMWRITE_BITMAP, page_to_maddr(v->arch.hvm.vmx.vmwrite_bitmap));
 }
 
 static void nvmx_clear_vmcs_pointer(struct vcpu *v, struct vmcs_struct *vvmcs)
 {
-    paddr_t vvmcs_maddr = v->arch.hvm_vmx.vmcs_shadow_maddr;
+    paddr_t vvmcs_maddr = v->arch.hvm.vmx.vmcs_shadow_maddr;
 
     __vmpclear(vvmcs_maddr);
     vvmcs->vmcs_revision_id &= ~VMCS_RID_TYPE_MASK;
@@ -1159,7 +1159,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     unsigned long lm_l1, lm_l2;
 
-    vmx_vmcs_switch(v->arch.hvm_vmx.vmcs_pa, nvcpu->nv_n2vmcx_pa);
+    vmx_vmcs_switch(v->arch.hvm.vmx.vmcs_pa, nvcpu->nv_n2vmcx_pa);
 
     nestedhvm_vcpu_enter_guestmode(v);
     nvcpu->nv_vmentry_pending = 0;
@@ -1197,7 +1197,7 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     regs->rflags = get_vvmcs(v, GUEST_RFLAGS);
 
     /* updating host cr0 to sync TS bit */
-    __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
+    __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
 
     /* Setup virtual ETP for L2 guest*/
     if ( nestedhvm_paging_mode_hap(v) )
@@ -1234,7 +1234,7 @@ static void sync_vvmcs_guest_state(struct vcpu *v, struct cpu_user_regs *regs)
     if ( !(__n2_exec_control(v) & CPU_BASED_CR3_LOAD_EXITING) )
         shadow_to_vvmcs(v, GUEST_CR3);
 
-    if ( v->arch.hvm_vmx.cr4_host_mask != ~0UL )
+    if ( v->arch.hvm.vmx.cr4_host_mask != ~0UL )
         /* Only need to update nested GUEST_CR4 if not all bits are trapped. */
         set_vvmcs(v, GUEST_CR4, v->arch.hvm.guest_cr[4]);
 }
@@ -1375,7 +1375,7 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     /* This will clear current pCPU bit in p2m->dirty_cpumask */
     np2m_schedule(NP2M_SCHEDLE_OUT);
 
-    vmx_vmcs_switch(v->arch.hvm_vmx.vmcs_pa, nvcpu->nv_n1vmcx_pa);
+    vmx_vmcs_switch(v->arch.hvm.vmx.vmcs_pa, nvcpu->nv_n1vmcx_pa);
 
     nestedhvm_vcpu_exit_guestmode(v);
     nvcpu->nv_vmexit_pending = 0;
@@ -1404,7 +1404,7 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     regs->rflags = X86_EFLAGS_MBS;
 
     /* updating host cr0 to sync TS bit */
-    __vmwrite(HOST_CR0, v->arch.hvm_vmx.host_cr0);
+    __vmwrite(HOST_CR0, v->arch.hvm.vmx.host_cr0);
 
     if ( cpu_has_vmx_virtual_intr_delivery )
         nvmx_update_apicv(v);
@@ -1511,12 +1511,12 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
      * `fork' the host vmcs to shadow_vmcs
      * vmcs_lock is not needed since we are on current
      */
-    nvcpu->nv_n1vmcx_pa = v->arch.hvm_vmx.vmcs_pa;
-    __vmpclear(v->arch.hvm_vmx.vmcs_pa);
+    nvcpu->nv_n1vmcx_pa = v->arch.hvm.vmx.vmcs_pa;
+    __vmpclear(v->arch.hvm.vmx.vmcs_pa);
     copy_domain_page(_mfn(PFN_DOWN(nvcpu->nv_n2vmcx_pa)),
-                     _mfn(PFN_DOWN(v->arch.hvm_vmx.vmcs_pa)));
-    __vmptrld(v->arch.hvm_vmx.vmcs_pa);
-    v->arch.hvm_vmx.launched = 0;
+                     _mfn(PFN_DOWN(v->arch.hvm.vmx.vmcs_pa)));
+    __vmptrld(v->arch.hvm.vmx.vmcs_pa);
+    v->arch.hvm.vmx.launched = 0;
     vmsucceed(regs);
 
     return X86EMUL_OKAY;
@@ -1636,7 +1636,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs)
     }
 
     launched = vvmcs_launched(&nvmx->launched_list,
-                              PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr));
+                              PFN_DOWN(v->arch.hvm.vmx.vmcs_shadow_maddr));
     if ( !launched )
     {
         vmfail_valid(regs, VMX_INSN_VMRESUME_NONLAUNCHED_VMCS);
@@ -1670,7 +1670,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     }
 
     launched = vvmcs_launched(&nvmx->launched_list,
-                              PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr));
+                              PFN_DOWN(v->arch.hvm.vmx.vmcs_shadow_maddr));
     if ( launched )
     {
         vmfail_valid(regs, VMX_INSN_VMLAUNCH_NONCLEAR_VMCS);
@@ -1681,7 +1681,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
         if ( rc == X86EMUL_OKAY )
         {
             if ( set_vvmcs_launched(&nvmx->launched_list,
-                                    PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr)) < 0 )
+                                    PFN_DOWN(v->arch.hvm.vmx.vmcs_shadow_maddr)) < 0 )
                 return X86EMUL_UNHANDLEABLE;
         }
     }
@@ -1732,7 +1732,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
                 }
                 nvcpu->nv_vvmcx = vvmcx;
                 nvcpu->nv_vvmcxaddr = gpa;
-                v->arch.hvm_vmx.vmcs_shadow_maddr =
+                v->arch.hvm.vmx.vmcs_shadow_maddr =
                     mfn_to_maddr(domain_page_map_to_mfn(vvmcx));
             }
             else
@@ -1806,7 +1806,7 @@ int nvmx_handle_vmclear(struct cpu_user_regs *regs)
         if ( cpu_has_vmx_vmcs_shadowing )
             nvmx_clear_vmcs_pointer(v, nvcpu->nv_vvmcx);
         clear_vvmcs_launched(&nvmx->launched_list,
-                             PFN_DOWN(v->arch.hvm_vmx.vmcs_shadow_maddr));
+                             PFN_DOWN(v->arch.hvm.vmx.vmcs_shadow_maddr));
         nvmx_purge_vvmcs(v);
     }
     else 
@@ -2041,7 +2041,7 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
     case MSR_IA32_VMX_BASIC:
     {
         const struct vmcs_struct *vmcs =
-            map_domain_page(_mfn(PFN_DOWN(v->arch.hvm_vmx.vmcs_pa)));
+            map_domain_page(_mfn(PFN_DOWN(v->arch.hvm.vmx.vmcs_pa)));
 
         data = (host_data & (~0ul << 32)) |
                (vmcs->vmcs_revision_id & 0x7fffffff);
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 14b5939..1ff4f14 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -643,7 +643,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
         struct vcpu *v;
 
         for_each_vcpu ( p2m->domain, v )
-            v->arch.hvm_vmx.ept_spurious_misconfig = 1;
+            v->arch.hvm.vmx.ept_spurious_misconfig = 1;
     }
 
     return rc;
@@ -658,9 +658,9 @@ bool_t ept_handle_misconfig(uint64_t gpa)
 
     p2m_lock(p2m);
 
-    spurious = curr->arch.hvm_vmx.ept_spurious_misconfig;
+    spurious = curr->arch.hvm.vmx.ept_spurious_misconfig;
     rc = resolve_misconfig(p2m, PFN_DOWN(gpa));
-    curr->arch.hvm_vmx.ept_spurious_misconfig = 0;
+    curr->arch.hvm.vmx.ept_spurious_misconfig = 0;
 
     p2m_unlock(p2m);
 
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index 555804c..0a214e1 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -87,14 +87,14 @@ void __dummy__(void)
     DEFINE(_VGCF_syscall_disables_events,  _VGCF_syscall_disables_events);
     BLANK();
 
-    OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm_svm.vmcb_pa);
-    OFFSET(VCPU_svm_vmcb, struct vcpu, arch.hvm_svm.vmcb);
+    OFFSET(VCPU_svm_vmcb_pa, struct vcpu, arch.hvm.svm.vmcb_pa);
+    OFFSET(VCPU_svm_vmcb, struct vcpu, arch.hvm.svm.vmcb);
     BLANK();
 
-    OFFSET(VCPU_vmx_launched, struct vcpu, arch.hvm_vmx.launched);
-    OFFSET(VCPU_vmx_realmode, struct vcpu, arch.hvm_vmx.vmx_realmode);
-    OFFSET(VCPU_vmx_emulate, struct vcpu, arch.hvm_vmx.vmx_emulate);
-    OFFSET(VCPU_vm86_seg_mask, struct vcpu, arch.hvm_vmx.vm86_segment_mask);
+    OFFSET(VCPU_vmx_launched, struct vcpu, arch.hvm.vmx.launched);
+    OFFSET(VCPU_vmx_realmode, struct vcpu, arch.hvm.vmx.vmx_realmode);
+    OFFSET(VCPU_vmx_emulate, struct vcpu, arch.hvm.vmx.vmx_emulate);
+    OFFSET(VCPU_vm86_seg_mask, struct vcpu, arch.hvm.vmx.vm86_segment_mask);
     OFFSET(VCPU_hvm_guest_cr2, struct vcpu, arch.hvm.guest_cr[2]);
     BLANK();
 
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index bab3aa3..a5668e6 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -439,7 +439,7 @@ int pt_irq_create_bind(
 
         /* Use interrupt posting if it is supported. */
         if ( iommu_intpost )
-            pi_update_irte(vcpu ? &vcpu->arch.hvm_vmx.pi_desc : NULL,
+            pi_update_irte(vcpu ? &vcpu->arch.hvm.vmx.pi_desc : NULL,
                            info, pirq_dpci->gmsi.gvec);
 
         if ( pt_irq_bind->u.msi.gflags & XEN_DOMCTL_VMSI_X86_UNMASKED )
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 0ea3742..c216f37 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -604,10 +604,6 @@ struct guest_memory_policy
 void update_guest_memory_policy(struct vcpu *v,
                                 struct guest_memory_policy *policy);
 
-/* Shorthands to improve code legibility. */
-#define hvm_vmx         hvm.u.vmx
-#define hvm_svm         hvm.u.svm
-
 bool update_runstate_area(struct vcpu *);
 bool update_secondary_system_time(struct vcpu *,
                                   struct vcpu_time_info *);
diff --git a/xen/include/asm-x86/hvm/svm/asid.h b/xen/include/asm-x86/hvm/svm/asid.h
index d3a144c..60cbb7b 100644
--- a/xen/include/asm-x86/hvm/svm/asid.h
+++ b/xen/include/asm-x86/hvm/svm/asid.h
@@ -29,7 +29,7 @@ static inline void svm_asid_g_invlpg(struct vcpu *v, unsigned long g_vaddr)
 {
 #if 0
     /* Optimization? */
-    svm_invlpga(g_vaddr, v->arch.hvm_svm.vmcb->guest_asid);
+    svm_invlpga(g_vaddr, v->arch.hvm.svm.vmcb->guest_asid);
 #endif
 
     /* Safe fallback. Take a new ASID. */
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index c8d0a4e..c663155 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -178,7 +178,7 @@ struct hvm_vcpu {
     union {
         struct vmx_vcpu vmx;
         struct svm_vcpu svm;
-    } u;
+    };
 
     struct tasklet      assert_evtchn_irq_tasklet;
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index f964a95..76dd04a 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -311,7 +311,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_unrestricted_guest \
     (vmx_secondary_exec_control & SECONDARY_EXEC_UNRESTRICTED_GUEST)
 #define vmx_unrestricted_guest(v)               \
-    ((v)->arch.hvm_vmx.secondary_exec_control & \
+    ((v)->arch.hvm.vmx.secondary_exec_control & \
      SECONDARY_EXEC_UNRESTRICTED_GUEST)
 #define cpu_has_vmx_ple \
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
-- 
2.1.4



* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
@ 2018-08-28 18:56   ` Razvan Cojocaru
  2018-08-29  7:56   ` Wei Liu
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Razvan Cojocaru @ 2018-08-28 18:56 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	George Dunlap, Tim Deegan, Julien Grall, Paul Durrant,
	Stefano Stabellini, Jan Beulich, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

On 8/28/18 8:39 PM, Andrew Cooper wrote:
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
@ 2018-08-28 18:59   ` Razvan Cojocaru
  2018-08-29  7:57   ` Wei Liu
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 39+ messages in thread
From: Razvan Cojocaru @ 2018-08-28 18:59 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	George Dunlap, Tim Deegan, Julien Grall, Paul Durrant,
	Stefano Stabellini, Jan Beulich, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

On 8/28/18 8:39 PM, Andrew Cooper wrote:
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Razvan Cojocaru <rcojocaru@bitdefender.com>


Thanks,
Razvan


* Re: [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv
  2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
@ 2018-08-29  7:45   ` Wei Liu
  2018-08-29 15:55   ` Jan Beulich
  1 sibling, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29  7:45 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, George Dunlap, Tim Deegan, Xen-devel, Jan Beulich,
	Roger Pau Monné

On Tue, Aug 28, 2018 at 06:39:00PM +0100, Andrew Cooper wrote:
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv
  2018-08-28 17:39 ` [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv Andrew Cooper
@ 2018-08-29  7:53   ` Wei Liu
  2018-08-29 16:01   ` Jan Beulich
  1 sibling, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29  7:53 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Aug 28, 2018 at 06:39:01PM +0100, Andrew Cooper wrote:
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
  2018-08-28 18:56   ` Razvan Cojocaru
@ 2018-08-29  7:56   ` Wei Liu
  2018-08-30  1:34   ` Tian, Kevin
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29  7:56 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Xen-devel,
	Julien Grall, Paul Durrant, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monné

On Tue, Aug 28, 2018 at 06:39:02PM +0100, Andrew Cooper wrote:
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>

> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 4cdcd5d..3dcd7f9 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -505,7 +505,7 @@ int arch_domain_create(struct domain *d,
>  
>      /* Need to determine if HAP is enabled before initialising paging */
>      if ( is_hvm_domain(d) )
> -        d->arch.hvm_domain.hap_enabled =
> +        d->arch.hvm.hap_enabled =
>              hvm_hap_supported() && (config->flags & XEN_DOMCTL_CDF_hap);
>  
>      if ( (rc = paging_domain_init(d, config->flags)) != 0 )
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index fdbcce0..f306614 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -745,7 +745,7 @@ long arch_do_domctl(
>          unsigned int fmp = domctl->u.ioport_mapping.first_mport;
>          unsigned int np = domctl->u.ioport_mapping.nr_ports;
>          unsigned int add = domctl->u.ioport_mapping.add_mapping;
> -        struct hvm_domain *hvm_domain;
> +        struct hvm_domain *hvm;

Yet I think this hvm local is clearer if it stays hvm_domain.  I don't feel
too strongly about this, though.

Wei.


* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
  2018-08-28 18:59   ` Razvan Cojocaru
@ 2018-08-29  7:57   ` Wei Liu
  2018-08-30  1:34   ` Tian, Kevin
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29  7:57 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Xen-devel,
	Julien Grall, Paul Durrant, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monné

On Tue, Aug 28, 2018 at 06:39:03PM +0100, Andrew Cooper wrote:
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
@ 2018-08-29  8:03   ` Wei Liu
  2018-08-29 11:17     ` Andrew Cooper
  2018-08-30  1:36   ` Tian, Kevin
  2018-08-30 14:54   ` Jan Beulich
  2 siblings, 1 reply; 39+ messages in thread
From: Wei Liu @ 2018-08-29  8:03 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Xen-devel, Jun Nakajima,
	Roger Pau Monné

On Tue, Aug 28, 2018 at 06:39:04PM +0100, Andrew Cooper wrote:
> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
> to vmx_vcpu to be consistent with all the other similar structures.

What other similar structures do you have in mind?

The code changes look okay to me.

Wei.


* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
@ 2018-08-29  8:03   ` Wei Liu
  2018-08-30  1:39   ` Tian, Kevin
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29  8:03 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Wei Liu, Jan Beulich, George Dunlap, Xen-devel,
	Jun Nakajima, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

On Tue, Aug 28, 2018 at 06:39:06PM +0100, Andrew Cooper wrote:
> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with
> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
> refer to the correctly-named fields.  This means that the data hierarchy is no
> longer obscured from grep/cscope/tags/etc.
> 
> Reformat one comment and switch one bool_t to bool while making changes.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-29  8:03   ` Wei Liu
@ 2018-08-29 11:17     ` Andrew Cooper
  2018-08-29 13:16       ` Wei Liu
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-08-29 11:17 UTC (permalink / raw)
  To: Wei Liu
  Cc: Kevin Tian, Roger Pau Monné, Jun Nakajima, Jan Beulich, Xen-devel

On 29/08/18 09:03, Wei Liu wrote:
> On Tue, Aug 28, 2018 at 06:39:04PM +0100, Andrew Cooper wrote:
>> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
>> to vmx_vcpu to be consistent with all the other similar structures.
> What other similar structures do you have in mind?

Most relevantly, {vmx,svm}_domain, but {pv,hvm}_{domain,vcpu} as well.

~Andrew


* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-29 11:17     ` Andrew Cooper
@ 2018-08-29 13:16       ` Wei Liu
  0 siblings, 0 replies; 39+ messages in thread
From: Wei Liu @ 2018-08-29 13:16 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Xen-devel, Jun Nakajima,
	Roger Pau Monné

On Wed, Aug 29, 2018 at 12:17:46PM +0100, Andrew Cooper wrote:
> On 29/08/18 09:03, Wei Liu wrote:
> > On Tue, Aug 28, 2018 at 06:39:04PM +0100, Andrew Cooper wrote:
> >> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
> >> to vmx_vcpu to be consistent with all the other similar structures.
> > What other similar structures do you have in mind?
> 
> Most relevantly, {vmx,svm}_domain, but {pv,hvm}_{domain,vcpu} as well.

Reviewed-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv
  2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
  2018-08-29  7:45   ` Wei Liu
@ 2018-08-29 15:55   ` Jan Beulich
  1 sibling, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-08-29 15:55 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: George Dunlap, Tim Deegan, Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I've long wanted to do this, but didn't dare to, given the discussions
we had during PVHv1 development, where some suggestions of
mine to shorten names like this weren't really liked.  So - thanks!

Jan




* Re: [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv
  2018-08-28 17:39 ` [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv Andrew Cooper
  2018-08-29  7:53   ` Wei Liu
@ 2018-08-29 16:01   ` Jan Beulich
  1 sibling, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-08-29 16:01 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> @@ -76,11 +76,11 @@ static long register_guest_callback(struct callback_register *reg)
>      switch ( reg->type )
>      {
>      case CALLBACKTYPE_event:
> -        curr->arch.pv_vcpu.event_callback_eip    = reg->address;
> +        curr->arch.pv.event_callback_eip    = reg->address;

Would you mind dropping the stray extra blanks? Also further down
in this same file.
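
For illustration, dropping the leftover alignment blanks would leave the
assignment as just (sketch):

    curr->arch.pv.event_callback_eip = reg->address;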

In any event
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
  2018-08-28 18:56   ` Razvan Cojocaru
  2018-08-29  7:56   ` Wei Liu
@ 2018-08-30  1:34   ` Tian, Kevin
  2018-08-30 14:39   ` Jan Beulich
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 39+ messages in thread
From: Tian, Kevin @ 2018-08-30  1:34 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Tamas K Lengyel, Wei Liu, Nakajima, Jun, Razvan Cojocaru,
	George Dunlap, Tim Deegan, Julien Grall, Paul Durrant,
	Stefano Stabellini, Jan Beulich, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, August 29, 2018 1:39 AM
> 
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc
> wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
  2018-08-28 18:59   ` Razvan Cojocaru
  2018-08-29  7:57   ` Wei Liu
@ 2018-08-30  1:34   ` Tian, Kevin
  2018-08-30 14:52   ` Jan Beulich
  2018-09-03  8:13   ` Paul Durrant
  4 siblings, 0 replies; 39+ messages in thread
From: Tian, Kevin @ 2018-08-30  1:34 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Tamas K Lengyel, Wei Liu, Nakajima, Jun, Razvan Cojocaru,
	George Dunlap, Tim Deegan, Julien Grall, Paul Durrant,
	Stefano Stabellini, Jan Beulich, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, August 29, 2018 1:39 AM
> 
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc
> wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
@ 2018-08-30  1:36   ` Tian, Kevin
  2018-08-30 14:54   ` Jan Beulich
  2 siblings, 0 replies; 39+ messages in thread
From: Tian, Kevin @ 2018-08-30  1:36 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, August 29, 2018 1:39 AM
> 
> The suffix and prefix are redundant, and the name is curiously odd.
> Rename it
> to vmx_vcpu to be consistent with all the other similar structures.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
@ 2018-08-30  1:39   ` Tian, Kevin
  2018-08-30 16:08     ` Andrew Cooper
  2018-08-30 15:03   ` Jan Beulich
  2018-08-30 23:11   ` Boris Ostrovsky
  3 siblings, 1 reply; 39+ messages in thread
From: Tian, Kevin @ 2018-08-30  1:39 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, George Dunlap, Jan Beulich,
	Suravee Suthikulpanit, Boris Ostrovsky, Brian Woods,
	Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Wednesday, August 29, 2018 1:39 AM
> 
> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent
> with
> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all
> code
> refer to the correctly-named fields.  This means that the data hierarchy is no
> longer obscured from grep/cscope/tags/etc.
> 
> Reformat one comment and switch one bool_t to bool while making
> changes.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

One small note. In the cover letter:

	" This series started by trying to address the bug in patch 7, 
and ballooned somewhat."

there is no bug per se in this patch, right?

Thanks
Kevin

* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-08-30  1:34   ` Tian, Kevin
@ 2018-08-30 14:39   ` Jan Beulich
  2018-08-30 16:44   ` Julien Grall
  2018-09-03  8:11   ` Paul Durrant
  5 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-08-30 14:39 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Xen-devel,
	Julien Grall, Paul Durrant, Tamas K Lengyel,
	Suravee Suthikulpanit, Boris Ostrovsky, Brian Woods,
	Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -173,7 +173,7 @@ static DEFINE_RCU_READ_LOCK(msixtbl_rcu_lock);
>   */
>  static bool msixtbl_initialised(const struct domain *d)
>  {
> -    return !!d->arch.hvm_domain.msixtbl_list.next;
> +    return !!d->arch.hvm.msixtbl_list.next;

How about dropping the !! here at the same time?
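
For illustration, with the bool return type the conversion is implicit, so a
minimal sketch of the suggested change would be:

    static bool msixtbl_initialised(const struct domain *d)
    {
        /* A non-NULL list head converts to true; no explicit !! needed. */
        return d->arch.hvm.msixtbl_list.next;
    }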

> @@ -1643,7 +1643,7 @@ void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
>  
>  bool_t vmx_domain_pml_enabled(const struct domain *d)
>  {
> -    return !!(d->arch.hvm_domain.vmx.status & VMX_DOMAIN_PML_ENABLED);
> +    return !!(d->arch.hvm.vmx.status & VMX_DOMAIN_PML_ENABLED);
>  }

And here.

> @@ -112,7 +112,7 @@ static void vpic_update_int_output(struct hvm_hw_vpic *vpic)
>          if ( vpic->is_master )
>          {
>              /* Master INT line is connected in Virtual Wire Mode. */
> -            struct vcpu *v = vpic_domain(vpic)->arch.hvm_domain.i8259_target;
> +            struct vcpu *v = vpic_domain(vpic)->arch.hvm.i8259_target;
>              if ( v != NULL )

And adding a blank line here?


In any event
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-08-30  1:34   ` Tian, Kevin
@ 2018-08-30 14:52   ` Jan Beulich
  2018-08-30 16:03     ` Andrew Cooper
  2018-09-03  8:13   ` Paul Durrant
  4 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2018-08-30 14:52 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Xen-devel,
	Julien Grall, Paul Durrant, Tamas K Lengyel,
	Suravee Suthikulpanit, Boris Ostrovsky, Brian Woods,
	Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.

I couldn't find any such conversion, so perhaps better to delete that
part of the description.

> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> @@ -3888,19 +3886,19 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
>      v->arch.user_regs.rip = ip;
>      memset(&v->arch.debugreg, 0, sizeof(v->arch.debugreg));
>  
> -    v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_ET;
> +    v->arch.hvm.guest_cr[0] = X86_CR0_ET;
>      hvm_update_guest_cr(v, 0);
>  
> -    v->arch.hvm_vcpu.guest_cr[2] = 0;
> +    v->arch.hvm.guest_cr[2] = 0;
>      hvm_update_guest_cr(v, 2);
>  
> -    v->arch.hvm_vcpu.guest_cr[3] = 0;
> +    v->arch.hvm.guest_cr[3] = 0;
>      hvm_update_guest_cr(v, 3);
>  
> -    v->arch.hvm_vcpu.guest_cr[4] = 0;
> +    v->arch.hvm.guest_cr[4] = 0;
>      hvm_update_guest_cr(v, 4);
>  
> -    v->arch.hvm_vcpu.guest_efer = 0;
> +    v->arch.hvm.guest_efer = 0;
>      hvm_update_guest_efer(v);

Noticing this, a thought unrelated to this series: wouldn't we be better off
setting all the structure fields first, and only then invoking all the
functions?  Just like arch_set_info_hvm_guest() does?
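
For illustration, the reordering being suggested would look roughly like the
sketch below (illustrative only, not part of this series):

    /* Populate all of the reset state first... */
    v->arch.hvm.guest_cr[0] = X86_CR0_ET;
    v->arch.hvm.guest_cr[2] = 0;
    v->arch.hvm.guest_cr[3] = 0;
    v->arch.hvm.guest_cr[4] = 0;
    v->arch.hvm.guest_efer  = 0;

    /* ...then let the update hooks propagate everything to the VMCS/VMCB. */
    hvm_update_guest_cr(v, 0);
    hvm_update_guest_cr(v, 2);
    hvm_update_guest_cr(v, 3);
    hvm_update_guest_cr(v, 4);
    hvm_update_guest_efer(v);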

Jan




* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
  2018-08-30  1:36   ` Tian, Kevin
@ 2018-08-30 14:54   ` Jan Beulich
  2018-08-30 15:47     ` Andrew Cooper
  2 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2018-08-30 14:54 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
> to vmx_vcpu to be consistent with all the other similar structures.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> 
> Some of the local pointers are named arch_vmx.  I'm open to renaming them to
> just vmx (like all the other local pointers) if people are happy with the
> additional patch delta.

I'd be fine with that. With or without
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu
  2018-08-28 17:39 ` [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu Andrew Cooper
@ 2018-08-30 14:54   ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-08-30 14:54 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Liu, Xen-devel, Suravee Suthikulpanit, Boris Ostrovsky,
	Brian Woods, Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
> to svm_vcpu to be consistent with all the other similar structures.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> CC: Brian Woods <brian.woods@amd.com>
> 
> All of the local pointers are named arch_svm.  I'm open to renaming them to
> just svm if people are happy with the additional patch delta.

Same here - with or without
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
  2018-08-29  8:03   ` Wei Liu
  2018-08-30  1:39   ` Tian, Kevin
@ 2018-08-30 15:03   ` Jan Beulich
  2018-08-30 23:11   ` Boris Ostrovsky
  3 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2018-08-30 15:03 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Kevin Tian, Wei Liu, Suravee Suthikulpanit, George Dunlap,
	Xen-devel, Jun Nakajima, Boris Ostrovsky, Brian Woods,
	Roger Pau Monne

>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with
> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
> refer to the correctly-named fields.  This means that the data hierarchy is no
> longer obscured from grep/cscope/tags/etc.
> 
> Reformat one comment and switch one bool_t to bool while making changes.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-30 14:54   ` Jan Beulich
@ 2018-08-30 15:47     ` Andrew Cooper
  2018-08-31  1:35       ` Tian, Kevin
  0 siblings, 1 reply; 39+ messages in thread
From: Andrew Cooper @ 2018-08-30 15:47 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Kevin Tian, Xen-devel, Wei Liu, Jun Nakajima, Roger Pau Monne

On 30/08/18 15:54, Jan Beulich wrote:
>>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
>> The suffix and prefix are redundant, and the name is curiously odd.  Rename it
>> to vmx_vcpu to be consistent with all the other similar structures.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wei.liu2@citrix.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Jun Nakajima <jun.nakajima@intel.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>>
>> Some of the local pointers are named arch_vmx.  I'm open to renaming them to
>> just vmx (like all the other local pointers) if people are happy with the
>> additional patch delta.
> I'd be fine with that. With or without
> Acked-by: Jan Beulich <jbeulich@suse.com>

TBH, I was hoping for a comment from Kevin on this question.

Given that the net diffstat including the pointer renames is:

andrewcoop@andrewcoop:/local/xen.git/xen$ git d HEAD^ --stat
 xen/arch/x86/hvm/vmx/vmcs.c        | 44 ++++++++++++++++++++++----------------------
 xen/arch/x86/hvm/vmx/vmx.c         |  4 ++--
 xen/include/asm-x86/hvm/vcpu.h     |  2 +-
 xen/include/asm-x86/hvm/vmx/vmcs.h |  2 +-
 4 files changed, 26 insertions(+), 26 deletions(-)

I've decided to go ahead and do them, to improve the eventual code
consistency.
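
For a purely hypothetical example of the shape of that extra delta (the real
sites are the vmcs.c/vmx.c hunks in the diffstat above):

    -    struct vmx_vcpu *arch_vmx = &v->arch.hvm.vmx;
    +    struct vmx_vcpu *vmx = &v->arch.hvm.vmx;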

~Andrew


* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-30 14:52   ` Jan Beulich
@ 2018-08-30 16:03     ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-30 16:03 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Stefano Stabellini, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Xen-devel,
	Julien Grall, Paul Durrant, Tamas K Lengyel,
	Suravee Suthikulpanit, Boris Ostrovsky, Brian Woods,
	Roger Pau Monne

On 30/08/18 15:52, Jan Beulich wrote:
>>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
>> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
>>
>> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
>> where applicable.
> I couldn't find any such conversion, so perhaps better to delete that
> part of the description.

Fixed.

>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
>> @@ -3888,19 +3886,19 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
>>      v->arch.user_regs.rip = ip;
>>      memset(&v->arch.debugreg, 0, sizeof(v->arch.debugreg));
>>  
>> -    v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_ET;
>> +    v->arch.hvm.guest_cr[0] = X86_CR0_ET;
>>      hvm_update_guest_cr(v, 0);
>>  
>> -    v->arch.hvm_vcpu.guest_cr[2] = 0;
>> +    v->arch.hvm.guest_cr[2] = 0;
>>      hvm_update_guest_cr(v, 2);
>>  
>> -    v->arch.hvm_vcpu.guest_cr[3] = 0;
>> +    v->arch.hvm.guest_cr[3] = 0;
>>      hvm_update_guest_cr(v, 3);
>>  
>> -    v->arch.hvm_vcpu.guest_cr[4] = 0;
>> +    v->arch.hvm.guest_cr[4] = 0;
>>      hvm_update_guest_cr(v, 4);
>>  
>> -    v->arch.hvm_vcpu.guest_efer = 0;
>> +    v->arch.hvm.guest_efer = 0;
>>      hvm_update_guest_efer(v);
> Noticing this, a thought unrelated to this series: Wouldn't we be better off
> setting all the structure fields first, and only then invoke all the functions?
> Just like arch_set_info_hvm_guest() does?

Actually, arch_set_info_hvm_guest() is broken in a related way.  When
nested virt is in the mix, the usual rules for which control bits can be
changed are relaxed, and you can get into a number of corner cases where
these helpers don't do the correct thing.  (e.g. when a 32bit PAE guest
tries vmexiting to a PCID hypervisor).

Resolving this mess is on the todo list, which includes this function,
and others.

~Andrew


* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-30  1:39   ` Tian, Kevin
@ 2018-08-30 16:08     ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-30 16:08 UTC (permalink / raw)
  To: Tian, Kevin, Xen-devel
  Cc: Wei Liu, Nakajima, Jun, George Dunlap, Jan Beulich,
	Suravee Suthikulpanit, Boris Ostrovsky, Brian Woods,
	Roger Pau Monné

On 30/08/18 02:39, Tian, Kevin wrote:
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> Sent: Wednesday, August 29, 2018 1:39 AM
>>
>> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent
>> with
>> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all
>> code
>> refer to the correctly-named fields.  This means that the data hierarchy is no
>> longer obscured from grep/cscope/tags/etc.
>>
>> Reformat one comment and switch one bool_t to bool while making
>> changes.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
>
> One small note. In the cover letter:
>
> 	" This series started by trying to address the bug in patch 7, 
> and ballooned somewhat."
>
> there is no bug per se in this patch, right?

I should probably have said issue rather than bug.  I was referring to
the obscuring of data from grep/cscope/tags/etc.
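
For illustration (a sketch built from the fields touched in this series): with
the shorthand, the real path never appears at the use sites, so grep/cscope
only ever see the #define name; with the anonymous union the path is literal.

    /* Before: the shorthand hides the hierarchy from grep/cscope/tags. */
    #define hvm_vmx hvm.u.vmx
    v->arch.hvm_vmx.launched = true;    /* really v->arch.hvm.u.vmx.launched */

    /* After: {vmx,svm} live in an anonymous union, so the path is spelled out. */
    v->arch.hvm.vmx.launched = true;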

~Andrew


* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
                     ` (3 preceding siblings ...)
  2018-08-30 14:39   ` Jan Beulich
@ 2018-08-30 16:44   ` Julien Grall
  2018-09-03  8:11   ` Paul Durrant
  5 siblings, 0 replies; 39+ messages in thread
From: Julien Grall @ 2018-08-30 16:44 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, George Dunlap, Tim Deegan, Paul Durrant,
	Stefano Stabellini, Jan Beulich, Boris Ostrovsky, Brian Woods,
	Suravee Suthikulpanit, Roger Pau Monné

Hi Andrew,

On 28/08/18 18:39, Andrew Cooper wrote:
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wei.liu2@citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Tim Deegan <tim@xen.org>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Razvan Cojocaru <rcojocaru@bitdefender.com>
> CC: Tamas K Lengyel <tamas@tklengyel.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien.grall@arm.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> CC: Brian Woods <brian.woods@amd.com>
> ---
>   xen/arch/arm/domain_build.c         |   2 +-
>   xen/arch/arm/hvm.c                  |   4 +-
>   xen/arch/x86/domain.c               |   2 +-
>   xen/arch/x86/domctl.c               |  10 +--
>   xen/arch/x86/hvm/dom0_build.c       |   4 +-
>   xen/arch/x86/hvm/domain.c           |   2 +-
>   xen/arch/x86/hvm/hpet.c             |   8 +-
>   xen/arch/x86/hvm/hvm.c              | 145 +++++++++++++++++-------------------
>   xen/arch/x86/hvm/hypercall.c        |   6 +-
>   xen/arch/x86/hvm/intercept.c        |  14 ++--
>   xen/arch/x86/hvm/io.c               |  48 ++++++------
>   xen/arch/x86/hvm/ioreq.c            |  80 ++++++++++----------
>   xen/arch/x86/hvm/irq.c              |  50 ++++++-------
>   xen/arch/x86/hvm/mtrr.c             |  14 ++--
>   xen/arch/x86/hvm/pmtimer.c          |  40 +++++-----
>   xen/arch/x86/hvm/rtc.c              |   4 +-
>   xen/arch/x86/hvm/save.c             |   6 +-
>   xen/arch/x86/hvm/stdvga.c           |  18 ++---
>   xen/arch/x86/hvm/svm/svm.c          |   5 +-
>   xen/arch/x86/hvm/svm/vmcb.c         |   2 +-
>   xen/arch/x86/hvm/vioapic.c          |  44 +++++------
>   xen/arch/x86/hvm/viridian.c         |  56 +++++++-------
>   xen/arch/x86/hvm/vlapic.c           |   8 +-
>   xen/arch/x86/hvm/vmsi.c             |  14 ++--
>   xen/arch/x86/hvm/vmx/vmcs.c         |  12 +--
>   xen/arch/x86/hvm/vmx/vmx.c          |  46 ++++++------
>   xen/arch/x86/hvm/vpic.c             |  20 ++---
>   xen/arch/x86/hvm/vpt.c              |  20 ++---
>   xen/arch/x86/irq.c                  |  10 +--
>   xen/arch/x86/mm/hap/hap.c           |  11 ++-
>   xen/arch/x86/mm/mem_sharing.c       |   6 +-
>   xen/arch/x86/mm/shadow/common.c     |  18 ++---
>   xen/arch/x86/mm/shadow/multi.c      |   8 +-
>   xen/arch/x86/physdev.c              |   2 +-
>   xen/arch/x86/setup.c                |  10 +--
>   xen/arch/x86/time.c                 |   8 +-
>   xen/common/vm_event.c               |   2 +-
>   xen/drivers/passthrough/pci.c       |   2 +-
>   xen/drivers/vpci/msix.c             |   6 +-
>   xen/include/asm-arm/domain.h        |   2 +-
>   xen/include/asm-x86/domain.h        |   4 +-
>   xen/include/asm-x86/hvm/domain.h    |   2 +-
>   xen/include/asm-x86/hvm/hvm.h       |  11 ++-
>   xen/include/asm-x86/hvm/irq.h       |   2 +-
>   xen/include/asm-x86/hvm/nestedhvm.h |   4 +-
>   xen/include/asm-x86/hvm/vioapic.h   |   2 +-
>   xen/include/asm-x86/hvm/vpt.h       |   4 +-
>   xen/include/asm-x86/irq.h           |   3 +-
>   48 files changed, 395 insertions(+), 406 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index e1c79b2..72fd2ae 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2075,7 +2075,7 @@ static void __init evtchn_allocate(struct domain *d)
>       val |= MASK_INSR(HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL,
>                        HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_MASK);
>       val |= d->arch.evtchn_irq;
> -    d->arch.hvm_domain.params[HVM_PARAM_CALLBACK_IRQ] = val;
> +    d->arch.hvm.params[HVM_PARAM_CALLBACK_IRQ] = val;
>   }
>   
>   static void __init find_gnttab_region(struct domain *d,
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index a56b3fe..76b27c9 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -59,11 +59,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>   
>           if ( op == HVMOP_set_param )
>           {
> -            d->arch.hvm_domain.params[a.index] = a.value;
> +            d->arch.hvm.params[a.index] = a.value;
>           }
>           else
>           {
> -            a.value = d->arch.hvm_domain.params[a.index];
> +            a.value = d->arch.hvm.params[a.index];
>               rc = copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
>           }
>   
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 4cdcd5d..3dcd7f9 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -505,7 +505,7 @@ int arch_domain_create(struct domain *d,
>   
>       /* Need to determine if HAP is enabled before initialising paging */
>       if ( is_hvm_domain(d) )
> -        d->arch.hvm_domain.hap_enabled =
> +        d->arch.hvm.hap_enabled =
>               hvm_hap_supported() && (config->flags & XEN_DOMCTL_CDF_hap);
>   
>       if ( (rc = paging_domain_init(d, config->flags)) != 0 )
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index fdbcce0..f306614 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -745,7 +745,7 @@ long arch_do_domctl(
>           unsigned int fmp = domctl->u.ioport_mapping.first_mport;
>           unsigned int np = domctl->u.ioport_mapping.nr_ports;
>           unsigned int add = domctl->u.ioport_mapping.add_mapping;
> -        struct hvm_domain *hvm_domain;
> +        struct hvm_domain *hvm;
>           struct g2m_ioport *g2m_ioport;
>           int found = 0;
>   
> @@ -774,14 +774,14 @@ long arch_do_domctl(
>           if ( ret )
>               break;
>   
> -        hvm_domain = &d->arch.hvm_domain;
> +        hvm = &d->arch.hvm;
>           if ( add )
>           {
>               printk(XENLOG_G_INFO
>                      "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
>                      d->domain_id, fgp, fmp, np);
>   
> -            list_for_each_entry(g2m_ioport, &hvm_domain->g2m_ioport_list, list)
> +            list_for_each_entry(g2m_ioport, &hvm->g2m_ioport_list, list)
>                   if (g2m_ioport->mport == fmp )
>                   {
>                       g2m_ioport->gport = fgp;
> @@ -800,7 +800,7 @@ long arch_do_domctl(
>                   g2m_ioport->gport = fgp;
>                   g2m_ioport->mport = fmp;
>                   g2m_ioport->np = np;
> -                list_add_tail(&g2m_ioport->list, &hvm_domain->g2m_ioport_list);
> +                list_add_tail(&g2m_ioport->list, &hvm->g2m_ioport_list);
>               }
>               if ( !ret )
>                   ret = ioports_permit_access(d, fmp, fmp + np - 1);
> @@ -815,7 +815,7 @@ long arch_do_domctl(
>               printk(XENLOG_G_INFO
>                      "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
>                      d->domain_id, fgp, fmp, np);
> -            list_for_each_entry(g2m_ioport, &hvm_domain->g2m_ioport_list, list)
> +            list_for_each_entry(g2m_ioport, &hvm->g2m_ioport_list, list)
>                   if ( g2m_ioport->mport == fmp )
>                   {
>                       list_del(&g2m_ioport->list);
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index 5065729..22e335f 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -240,7 +240,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
>           if ( hvm_copy_to_guest_phys(gaddr, NULL, HVM_VM86_TSS_SIZE, v) !=
>                HVMTRANS_okay )
>               printk("Unable to zero VM86 TSS area\n");
> -        d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED] =
> +        d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] =
>               VM86_TSS_UPDATED | ((uint64_t)HVM_VM86_TSS_SIZE << 32) | gaddr;
>           if ( pvh_add_mem_range(d, gaddr, gaddr + HVM_VM86_TSS_SIZE,
>                                  E820_RESERVED) )
> @@ -271,7 +271,7 @@ static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
>       write_32bit_pse_identmap(ident_pt);
>       unmap_domain_page(ident_pt);
>       put_page(mfn_to_page(mfn));
> -    d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] = gaddr;
> +    d->arch.hvm.params[HVM_PARAM_IDENT_PT] = gaddr;
>       if ( pvh_add_mem_range(d, gaddr, gaddr + PAGE_SIZE, E820_RESERVED) )
>               printk("Unable to set identity page tables as reserved in the memory map\n");
>   
> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
> index ae70aaf..8a2c83e 100644
> --- a/xen/arch/x86/hvm/domain.c
> +++ b/xen/arch/x86/hvm/domain.c
> @@ -319,7 +319,7 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>       v->arch.hvm_vcpu.cache_tsc_offset =
>           d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
>       hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
> -                       d->arch.hvm_domain.sync_tsc);
> +                       d->arch.hvm.sync_tsc);
>   
>       paging_update_paging_modes(v);
>   
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index a594254..8090699 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -26,7 +26,7 @@
>   #include <xen/event.h>
>   #include <xen/trace.h>
>   
> -#define domain_vhpet(x) (&(x)->arch.hvm_domain.pl_time->vhpet)
> +#define domain_vhpet(x) (&(x)->arch.hvm.pl_time->vhpet)
>   #define vcpu_vhpet(x)   (domain_vhpet((x)->domain))
>   #define vhpet_domain(x) (container_of(x, struct pl_time, vhpet)->domain)
>   #define vhpet_vcpu(x)   (pt_global_vcpu_target(vhpet_domain(x)))
> @@ -164,7 +164,7 @@ static int hpet_read(
>       unsigned long result;
>       uint64_t val;
>   
> -    if ( !v->domain->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] )
> +    if ( !v->domain->arch.hvm.params[HVM_PARAM_HPET_ENABLED] )
>       {
>           result = ~0ul;
>           goto out;
> @@ -354,7 +354,7 @@ static int hpet_write(
>   #define set_start_timer(n)   (__set_bit((n), &start_timers))
>   #define set_restart_timer(n) (set_stop_timer(n),set_start_timer(n))
>   
> -    if ( !v->domain->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] )
> +    if ( !v->domain->arch.hvm.params[HVM_PARAM_HPET_ENABLED] )
>           goto out;
>   
>       addr &= HPET_MMAP_SIZE-1;
> @@ -735,7 +735,7 @@ void hpet_init(struct domain *d)
>   
>       hpet_set(domain_vhpet(d));
>       register_mmio_handler(d, &hpet_mmio_ops);
> -    d->arch.hvm_domain.params[HVM_PARAM_HPET_ENABLED] = 1;
> +    d->arch.hvm.params[HVM_PARAM_HPET_ENABLED] = 1;
>   }
>   
>   void hpet_deinit(struct domain *d)
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 72c51fa..f895339 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -382,7 +382,7 @@ u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
>   
>   u64 hvm_scale_tsc(const struct domain *d, u64 tsc)
>   {
> -    u64 ratio = d->arch.hvm_domain.tsc_scaling_ratio;
> +    u64 ratio = d->arch.hvm.tsc_scaling_ratio;
>       u64 dummy;
>   
>       if ( ratio == hvm_default_tsc_scaling_ratio )
> @@ -583,14 +583,14 @@ int hvm_domain_initialise(struct domain *d)
>           return -EINVAL;
>       }
>   
> -    spin_lock_init(&d->arch.hvm_domain.irq_lock);
> -    spin_lock_init(&d->arch.hvm_domain.uc_lock);
> -    spin_lock_init(&d->arch.hvm_domain.write_map.lock);
> -    rwlock_init(&d->arch.hvm_domain.mmcfg_lock);
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.write_map.list);
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.g2m_ioport_list);
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.mmcfg_regions);
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.msix_tables);
> +    spin_lock_init(&d->arch.hvm.irq_lock);
> +    spin_lock_init(&d->arch.hvm.uc_lock);
> +    spin_lock_init(&d->arch.hvm.write_map.lock);
> +    rwlock_init(&d->arch.hvm.mmcfg_lock);
> +    INIT_LIST_HEAD(&d->arch.hvm.write_map.list);
> +    INIT_LIST_HEAD(&d->arch.hvm.g2m_ioport_list);
> +    INIT_LIST_HEAD(&d->arch.hvm.mmcfg_regions);
> +    INIT_LIST_HEAD(&d->arch.hvm.msix_tables);
>   
>       rc = create_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0, NULL, NULL);
>       if ( rc )
> @@ -603,15 +603,15 @@ int hvm_domain_initialise(struct domain *d)
>           goto fail0;
>   
>       nr_gsis = is_hardware_domain(d) ? nr_irqs_gsi : NR_HVM_DOMU_IRQS;
> -    d->arch.hvm_domain.pl_time = xzalloc(struct pl_time);
> -    d->arch.hvm_domain.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
> -    d->arch.hvm_domain.io_handler = xzalloc_array(struct hvm_io_handler,
> -                                                  NR_IO_HANDLERS);
> -    d->arch.hvm_domain.irq = xzalloc_bytes(hvm_irq_size(nr_gsis));
> +    d->arch.hvm.pl_time = xzalloc(struct pl_time);
> +    d->arch.hvm.params = xzalloc_array(uint64_t, HVM_NR_PARAMS);
> +    d->arch.hvm.io_handler = xzalloc_array(struct hvm_io_handler,
> +                                           NR_IO_HANDLERS);
> +    d->arch.hvm.irq = xzalloc_bytes(hvm_irq_size(nr_gsis));
>   
>       rc = -ENOMEM;
> -    if ( !d->arch.hvm_domain.pl_time || !d->arch.hvm_domain.irq ||
> -         !d->arch.hvm_domain.params  || !d->arch.hvm_domain.io_handler )
> +    if ( !d->arch.hvm.pl_time || !d->arch.hvm.irq ||
> +         !d->arch.hvm.params  || !d->arch.hvm.io_handler )
>           goto fail1;
>   
>       /* Set the number of GSIs */
> @@ -621,21 +621,21 @@ int hvm_domain_initialise(struct domain *d)
>       ASSERT(hvm_domain_irq(d)->nr_gsis >= NR_ISAIRQS);
>   
>       /* need link to containing domain */
> -    d->arch.hvm_domain.pl_time->domain = d;
> +    d->arch.hvm.pl_time->domain = d;
>   
>       /* Set the default IO Bitmap. */
>       if ( is_hardware_domain(d) )
>       {
> -        d->arch.hvm_domain.io_bitmap = _xmalloc(HVM_IOBITMAP_SIZE, PAGE_SIZE);
> -        if ( d->arch.hvm_domain.io_bitmap == NULL )
> +        d->arch.hvm.io_bitmap = _xmalloc(HVM_IOBITMAP_SIZE, PAGE_SIZE);
> +        if ( d->arch.hvm.io_bitmap == NULL )
>           {
>               rc = -ENOMEM;
>               goto fail1;
>           }
> -        memset(d->arch.hvm_domain.io_bitmap, ~0, HVM_IOBITMAP_SIZE);
> +        memset(d->arch.hvm.io_bitmap, ~0, HVM_IOBITMAP_SIZE);
>       }
>       else
> -        d->arch.hvm_domain.io_bitmap = hvm_io_bitmap;
> +        d->arch.hvm.io_bitmap = hvm_io_bitmap;
>   
>       register_g2m_portio_handler(d);
>       register_vpci_portio_handler(d);
> @@ -644,7 +644,7 @@ int hvm_domain_initialise(struct domain *d)
>   
>       hvm_init_guest_time(d);
>   
> -    d->arch.hvm_domain.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
> +    d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
>   
>       vpic_init(d);
>   
> @@ -659,7 +659,7 @@ int hvm_domain_initialise(struct domain *d)
>       register_portio_handler(d, 0xe9, 1, hvm_print_line);
>   
>       if ( hvm_tsc_scaling_supported )
> -        d->arch.hvm_domain.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
> +        d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
>   
>       rc = hvm_funcs.domain_initialise(d);
>       if ( rc != 0 )
> @@ -673,11 +673,11 @@ int hvm_domain_initialise(struct domain *d)
>       vioapic_deinit(d);
>    fail1:
>       if ( is_hardware_domain(d) )
> -        xfree(d->arch.hvm_domain.io_bitmap);
> -    xfree(d->arch.hvm_domain.io_handler);
> -    xfree(d->arch.hvm_domain.params);
> -    xfree(d->arch.hvm_domain.pl_time);
> -    xfree(d->arch.hvm_domain.irq);
> +        xfree(d->arch.hvm.io_bitmap);
> +    xfree(d->arch.hvm.io_handler);
> +    xfree(d->arch.hvm.params);
> +    xfree(d->arch.hvm.pl_time);
> +    xfree(d->arch.hvm.irq);
>    fail0:
>       hvm_destroy_cacheattr_region_list(d);
>       destroy_perdomain_mapping(d, PERDOMAIN_VIRT_START, 0);
> @@ -710,11 +710,8 @@ void hvm_domain_destroy(struct domain *d)
>       struct list_head *ioport_list, *tmp;
>       struct g2m_ioport *ioport;
>   
> -    xfree(d->arch.hvm_domain.io_handler);
> -    d->arch.hvm_domain.io_handler = NULL;
> -
> -    xfree(d->arch.hvm_domain.params);
> -    d->arch.hvm_domain.params = NULL;
> +    XFREE(d->arch.hvm.io_handler);
> +    XFREE(d->arch.hvm.params);
>   
>       hvm_destroy_cacheattr_region_list(d);
>   
> @@ -723,14 +720,10 @@ void hvm_domain_destroy(struct domain *d)
>       stdvga_deinit(d);
>       vioapic_deinit(d);
>   
> -    xfree(d->arch.hvm_domain.pl_time);
> -    d->arch.hvm_domain.pl_time = NULL;
> -
> -    xfree(d->arch.hvm_domain.irq);
> -    d->arch.hvm_domain.irq = NULL;
> +    XFREE(d->arch.hvm.pl_time);
> +    XFREE(d->arch.hvm.irq);
>   
> -    list_for_each_safe ( ioport_list, tmp,
> -                         &d->arch.hvm_domain.g2m_ioport_list )
> +    list_for_each_safe ( ioport_list, tmp, &d->arch.hvm.g2m_ioport_list )
>       {
>           ioport = list_entry(ioport_list, struct g2m_ioport, list);
>           list_del(&ioport->list);
> @@ -798,7 +791,7 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>           /* Architecture-specific vmcs/vmcb bits */
>           hvm_funcs.save_cpu_ctxt(v, &ctxt);
>   
> -        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm_domain.sync_tsc);
> +        ctxt.tsc = hvm_get_guest_tsc_fixed(v, d->arch.hvm.sync_tsc);
>   
>           ctxt.msr_tsc_aux = hvm_msr_tsc_aux(v);
>   
> @@ -1053,7 +1046,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>   
>       v->arch.hvm_vcpu.msr_tsc_aux = ctxt.msr_tsc_aux;
>   
> -    hvm_set_guest_tsc_fixed(v, ctxt.tsc, d->arch.hvm_domain.sync_tsc);
> +    hvm_set_guest_tsc_fixed(v, ctxt.tsc, d->arch.hvm.sync_tsc);
>   
>       seg.limit = ctxt.idtr_limit;
>       seg.base = ctxt.idtr_base;
> @@ -1637,7 +1630,7 @@ void hvm_triple_fault(void)
>   {
>       struct vcpu *v = current;
>       struct domain *d = v->domain;
> -    u8 reason = d->arch.hvm_domain.params[HVM_PARAM_TRIPLE_FAULT_REASON];
> +    u8 reason = d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON];
>   
>       gprintk(XENLOG_INFO,
>               "Triple fault - invoking HVM shutdown action %d\n",
> @@ -2046,7 +2039,7 @@ static bool_t domain_exit_uc_mode(struct vcpu *v)
>   
>   static void hvm_set_uc_mode(struct vcpu *v, bool_t is_in_uc_mode)
>   {
> -    v->domain->arch.hvm_domain.is_in_uc_mode = is_in_uc_mode;
> +    v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
>       shadow_blow_tables_per_domain(v->domain);
>   }
>   
> @@ -2130,10 +2123,10 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
>       if ( value & X86_CR0_CD )
>       {
>           /* Entering no fill cache mode. */
> -        spin_lock(&v->domain->arch.hvm_domain.uc_lock);
> +        spin_lock(&v->domain->arch.hvm.uc_lock);
>           v->arch.hvm_vcpu.cache_mode = NO_FILL_CACHE_MODE;
>   
> -        if ( !v->domain->arch.hvm_domain.is_in_uc_mode )
> +        if ( !v->domain->arch.hvm.is_in_uc_mode )
>           {
>               domain_pause_nosync(v->domain);
>   
> @@ -2143,19 +2136,19 @@ void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
>   
>               domain_unpause(v->domain);
>           }
> -        spin_unlock(&v->domain->arch.hvm_domain.uc_lock);
> +        spin_unlock(&v->domain->arch.hvm.uc_lock);
>       }
>       else if ( !(value & X86_CR0_CD) &&
>                 (v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
>       {
>           /* Exit from no fill cache mode. */
> -        spin_lock(&v->domain->arch.hvm_domain.uc_lock);
> +        spin_lock(&v->domain->arch.hvm.uc_lock);
>           v->arch.hvm_vcpu.cache_mode = NORMAL_CACHE_MODE;
>   
>           if ( domain_exit_uc_mode(v) )
>               hvm_set_uc_mode(v, 0);
>   
> -        spin_unlock(&v->domain->arch.hvm_domain.uc_lock);
> +        spin_unlock(&v->domain->arch.hvm.uc_lock);
>       }
>   }
>   
> @@ -2597,9 +2590,9 @@ static void *_hvm_map_guest_frame(unsigned long gfn, bool_t permanent,
>               return NULL;
>           }
>           track->page = page;
> -        spin_lock(&d->arch.hvm_domain.write_map.lock);
> -        list_add_tail(&track->list, &d->arch.hvm_domain.write_map.list);
> -        spin_unlock(&d->arch.hvm_domain.write_map.lock);
> +        spin_lock(&d->arch.hvm.write_map.lock);
> +        list_add_tail(&track->list, &d->arch.hvm.write_map.list);
> +        spin_unlock(&d->arch.hvm.write_map.lock);
>       }
>   
>       map = __map_domain_page_global(page);
> @@ -2640,8 +2633,8 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
>           struct hvm_write_map *track;
>   
>           unmap_domain_page_global(p);
> -        spin_lock(&d->arch.hvm_domain.write_map.lock);
> -        list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
> +        spin_lock(&d->arch.hvm.write_map.lock);
> +        list_for_each_entry(track, &d->arch.hvm.write_map.list, list)
>               if ( track->page == page )
>               {
>                   paging_mark_dirty(d, mfn);
> @@ -2649,7 +2642,7 @@ void hvm_unmap_guest_frame(void *p, bool_t permanent)
>                   xfree(track);
>                   break;
>               }
> -        spin_unlock(&d->arch.hvm_domain.write_map.lock);
> +        spin_unlock(&d->arch.hvm.write_map.lock);
>       }
>   
>       put_page(page);
> @@ -2659,10 +2652,10 @@ void hvm_mapped_guest_frames_mark_dirty(struct domain *d)
>   {
>       struct hvm_write_map *track;
>   
> -    spin_lock(&d->arch.hvm_domain.write_map.lock);
> -    list_for_each_entry(track, &d->arch.hvm_domain.write_map.list, list)
> +    spin_lock(&d->arch.hvm.write_map.lock);
> +    list_for_each_entry(track, &d->arch.hvm.write_map.list, list)
>           paging_mark_dirty(d, page_to_mfn(track->page));
> -    spin_unlock(&d->arch.hvm_domain.write_map.lock);
> +    spin_unlock(&d->arch.hvm.write_map.lock);
>   }
>   
>   static void *hvm_map_entry(unsigned long va, bool_t *writable)
> @@ -3942,7 +3935,7 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
>       v->arch.hvm_vcpu.cache_tsc_offset =
>           v->domain->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
>       hvm_set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
> -                       d->arch.hvm_domain.sync_tsc);
> +                       d->arch.hvm.sync_tsc);
>   
>       v->arch.hvm_vcpu.msr_tsc_adjust = 0;
>   
> @@ -3964,7 +3957,7 @@ static void hvm_s3_suspend(struct domain *d)
>       domain_lock(d);
>   
>       if ( d->is_dying || (d->vcpu == NULL) || (d->vcpu[0] == NULL) ||
> -         test_and_set_bool(d->arch.hvm_domain.is_s3_suspended) )
> +         test_and_set_bool(d->arch.hvm.is_s3_suspended) )
>       {
>           domain_unlock(d);
>           domain_unpause(d);
> @@ -3994,7 +3987,7 @@ static void hvm_s3_suspend(struct domain *d)
>   
>   static void hvm_s3_resume(struct domain *d)
>   {
> -    if ( test_and_clear_bool(d->arch.hvm_domain.is_s3_suspended) )
> +    if ( test_and_clear_bool(d->arch.hvm.is_s3_suspended) )
>       {
>           struct vcpu *v;
>   
> @@ -4074,7 +4067,7 @@ static int hvmop_set_evtchn_upcall_vector(
>   static int hvm_allow_set_param(struct domain *d,
>                                  const struct xen_hvm_param *a)
>   {
> -    uint64_t value = d->arch.hvm_domain.params[a->index];
> +    uint64_t value = d->arch.hvm.params[a->index];
>       int rc;
>   
>       rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
> @@ -4177,7 +4170,7 @@ static int hvmop_set_param(
>            */
>           if ( !paging_mode_hap(d) || !cpu_has_vmx )
>           {
> -            d->arch.hvm_domain.params[a.index] = a.value;
> +            d->arch.hvm.params[a.index] = a.value;
>               break;
>           }
>   
> @@ -4192,7 +4185,7 @@ static int hvmop_set_param(
>   
>           rc = 0;
>           domain_pause(d);
> -        d->arch.hvm_domain.params[a.index] = a.value;
> +        d->arch.hvm.params[a.index] = a.value;
>           for_each_vcpu ( d, v )
>               paging_update_cr3(v, false);
>           domain_unpause(d);
> @@ -4241,11 +4234,11 @@ static int hvmop_set_param(
>           if ( !paging_mode_hap(d) && a.value )
>               rc = -EINVAL;
>           if ( a.value &&
> -             d->arch.hvm_domain.params[HVM_PARAM_ALTP2M] )
> +             d->arch.hvm.params[HVM_PARAM_ALTP2M] )
>               rc = -EINVAL;
>           /* Set up NHVM state for any vcpus that are already up. */
>           if ( a.value &&
> -             !d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM] )
> +             !d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
>               for_each_vcpu(d, v)
>                   if ( rc == 0 )
>                       rc = nestedhvm_vcpu_initialise(v);
> @@ -4260,7 +4253,7 @@ static int hvmop_set_param(
>           if ( a.value > XEN_ALTP2M_limited )
>               rc = -EINVAL;
>           if ( a.value &&
> -             d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM] )
> +             d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
>               rc = -EINVAL;
>           break;
>       case HVM_PARAM_BUFIOREQ_EVTCHN:
> @@ -4271,20 +4264,20 @@ static int hvmop_set_param(
>               rc = -EINVAL;
>           break;
>       case HVM_PARAM_IOREQ_SERVER_PFN:
> -        d->arch.hvm_domain.ioreq_gfn.base = a.value;
> +        d->arch.hvm.ioreq_gfn.base = a.value;
>           break;
>       case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
>       {
>           unsigned int i;
>   
>           if ( a.value == 0 ||
> -             a.value > sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8 )
> +             a.value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 )
>           {
>               rc = -EINVAL;
>               break;
>           }
>           for ( i = 0; i < a.value; i++ )
> -            set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
> +            set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
>   
>           break;
>       }
> @@ -4339,7 +4332,7 @@ static int hvmop_set_param(
>       if ( rc != 0 )
>           goto out;
>   
> -    d->arch.hvm_domain.params[a.index] = a.value;
> +    d->arch.hvm.params[a.index] = a.value;
>   
>       HVM_DBG_LOG(DBG_LEVEL_HCALL, "set param %u = %"PRIx64,
>                   a.index, a.value);
> @@ -4418,15 +4411,15 @@ static int hvmop_get_param(
>       switch ( a.index )
>       {
>       case HVM_PARAM_ACPI_S_STATE:
> -        a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> +        a.value = d->arch.hvm.is_s3_suspended ? 3 : 0;
>           break;
>   
>       case HVM_PARAM_VM86_TSS:
> -        a.value = (uint32_t)d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED];
> +        a.value = (uint32_t)d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
>           break;
>   
>       case HVM_PARAM_VM86_TSS_SIZED:
> -        a.value = d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED] &
> +        a.value = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] &
>                     ~VM86_TSS_UPDATED;
>           break;
>   
> @@ -4453,7 +4446,7 @@ static int hvmop_get_param(
>   
>       /*FALLTHRU*/
>       default:
> -        a.value = d->arch.hvm_domain.params[a.index];
> +        a.value = d->arch.hvm.params[a.index];
>           break;
>       }
>   
> @@ -4553,7 +4546,7 @@ static int do_altp2m_op(
>           goto out;
>       }
>   
> -    mode = d->arch.hvm_domain.params[HVM_PARAM_ALTP2M];
> +    mode = d->arch.hvm.params[HVM_PARAM_ALTP2M];
>   
>       if ( XEN_ALTP2M_disabled == mode )
>       {
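
Side note for anyone skimming the hvm.c hunks: the one change above that is
more than a field rename is folding the xfree()+NULL pairs into XFREE().  As I
understand it, XFREE() is simply free-and-nullify in one statement; a
user-space stand-in, with made-up structure fields and not Xen's actual macro,
would look like:

    #include <stdlib.h>

    /* Illustrative stand-in only, not the real definition. */
    #define XFREE(p) do {                                     \
        free(p);        /* release the allocation        */   \
        (p) = NULL;     /* and leave no dangling pointer  */   \
    } while ( 0 )

    struct io_state {
        int *handler;
        int *params;
    };

    static void io_state_destroy(struct io_state *s)
    {
        /* One statement per field; a repeated XFREE()/free(NULL) is a no-op. */
        XFREE(s->handler);
        XFREE(s->params);
    }

i.e. destruction paths get shorter and can't leave stale pointers behind.
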
> diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> index 85eacd7..3d7ac49 100644
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -41,7 +41,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>           rc = compat_memory_op(cmd, arg);
>   
>       if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
> -        curr->domain->arch.hvm_domain.qemu_mapcache_invalidate = true;
> +        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
>   
>       return rc;
>   }
> @@ -286,8 +286,8 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>       if ( curr->hcall_preempted )
>           return HVM_HCALL_preempted;
>   
> -    if ( unlikely(currd->arch.hvm_domain.qemu_mapcache_invalidate) &&
> -         test_and_clear_bool(currd->arch.hvm_domain.qemu_mapcache_invalidate) )
> +    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
> +         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
>           send_invalidate_req();
>   
>       return HVM_HCALL_completed;
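
Nothing to add on the hypercall.c rename itself, but the
qemu_mapcache_invalidate pattern it touches (an unlikely() peek followed by
test_and_clear_bool()) is worth spelling out: the cheap read keeps the common
path free of atomic traffic, and only a flag that looks set pays for the
atomic clear.  A rough C11 equivalent, with invented names:

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool need_invalidate;

    static void maybe_send_invalidate(void (*send)(void))
    {
        /* Cheap relaxed peek first; only a set-looking flag pays for the
         * atomic exchange, and only the winner of that exchange sends. */
        if ( atomic_load_explicit(&need_invalidate, memory_order_relaxed) &&
             atomic_exchange(&need_invalidate, false) )
            send();
    }
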
> diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
> index 2bc156d..aac22c5 100644
> --- a/xen/arch/x86/hvm/intercept.c
> +++ b/xen/arch/x86/hvm/intercept.c
> @@ -219,10 +219,10 @@ static const struct hvm_io_handler *hvm_find_io_handler(const ioreq_t *p)
>       BUG_ON((p->type != IOREQ_TYPE_PIO) &&
>              (p->type != IOREQ_TYPE_COPY));
>   
> -    for ( i = 0; i < curr_d->arch.hvm_domain.io_handler_count; i++ )
> +    for ( i = 0; i < curr_d->arch.hvm.io_handler_count; i++ )
>       {
>           const struct hvm_io_handler *handler =
> -            &curr_d->arch.hvm_domain.io_handler[i];
> +            &curr_d->arch.hvm.io_handler[i];
>           const struct hvm_io_ops *ops = handler->ops;
>   
>           if ( handler->type != p->type )
> @@ -257,9 +257,9 @@ int hvm_io_intercept(ioreq_t *p)
>   
>   struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
>   {
> -    unsigned int i = d->arch.hvm_domain.io_handler_count++;
> +    unsigned int i = d->arch.hvm.io_handler_count++;
>   
> -    ASSERT(d->arch.hvm_domain.io_handler);
> +    ASSERT(d->arch.hvm.io_handler);
>   
>       if ( i == NR_IO_HANDLERS )
>       {
> @@ -267,7 +267,7 @@ struct hvm_io_handler *hvm_next_io_handler(struct domain *d)
>           return NULL;
>       }
>   
> -    return &d->arch.hvm_domain.io_handler[i];
> +    return &d->arch.hvm.io_handler[i];
>   }
>   
>   void register_mmio_handler(struct domain *d,
> @@ -303,10 +303,10 @@ void relocate_portio_handler(struct domain *d, unsigned int old_port,
>   {
>       unsigned int i;
>   
> -    for ( i = 0; i < d->arch.hvm_domain.io_handler_count; i++ )
> +    for ( i = 0; i < d->arch.hvm.io_handler_count; i++ )
>       {
>           struct hvm_io_handler *handler =
> -            &d->arch.hvm_domain.io_handler[i];
> +            &d->arch.hvm.io_handler[i];
>   
>           if ( handler->type != IOREQ_TYPE_PIO )
>               continue;
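
intercept.c is a pure rename, but since io_handler/io_handler_count show up in
several hunks here: it is a fixed-size array plus a cursor,
hvm_next_io_handler() hands out the next free slot, and the dispatch loops
only ever walk initialised entries.  A stripped-down sketch of that registry
shape (names invented, not Xen's):

    #include <stddef.h>

    #define NR_HANDLERS 32

    struct handler {
        int type;                    /* kind of access this entry matches   */
        int (*accept)(int type);     /* predicate used by the dispatch loop */
    };

    static struct handler handlers[NR_HANDLERS];
    static unsigned int handler_count;

    /* Hand out the next unused slot, or NULL once the table is full. */
    static struct handler *next_handler(void)
    {
        if ( handler_count == NR_HANDLERS )
            return NULL;

        return &handlers[handler_count++];
    }

    /* Dispatch only ever walks slots that have been handed out. */
    static const struct handler *find_handler(int type)
    {
        for ( unsigned int i = 0; i < handler_count; i++ )
            if ( handlers[i].type == type && handlers[i].accept(type) )
                return &handlers[i];

        return NULL;
    }
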
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index bf4d874..f1ea7d7 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -179,12 +179,12 @@ static bool_t g2m_portio_accept(const struct hvm_io_handler *handler,
>                                   const ioreq_t *p)
>   {
>       struct vcpu *curr = current;
> -    const struct hvm_domain *hvm_domain = &curr->domain->arch.hvm_domain;
> +    const struct hvm_domain *hvm = &curr->domain->arch.hvm;
>       struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
>       struct g2m_ioport *g2m_ioport;
>       unsigned int start, end;
>   
> -    list_for_each_entry( g2m_ioport, &hvm_domain->g2m_ioport_list, list )
> +    list_for_each_entry( g2m_ioport, &hvm->g2m_ioport_list, list )
>       {
>           start = g2m_ioport->gport;
>           end = start + g2m_ioport->np;
> @@ -313,12 +313,12 @@ static int vpci_portio_read(const struct hvm_io_handler *handler,
>       if ( addr == 0xcf8 )
>       {
>           ASSERT(size == 4);
> -        *data = d->arch.hvm_domain.pci_cf8;
> +        *data = d->arch.hvm.pci_cf8;
>           return X86EMUL_OKAY;
>       }
>   
>       ASSERT((addr & ~3) == 0xcfc);
> -    cf8 = ACCESS_ONCE(d->arch.hvm_domain.pci_cf8);
> +    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
>       if ( !CF8_ENABLED(cf8) )
>           return X86EMUL_UNHANDLEABLE;
>   
> @@ -343,12 +343,12 @@ static int vpci_portio_write(const struct hvm_io_handler *handler,
>       if ( addr == 0xcf8 )
>       {
>           ASSERT(size == 4);
> -        d->arch.hvm_domain.pci_cf8 = data;
> +        d->arch.hvm.pci_cf8 = data;
>           return X86EMUL_OKAY;
>       }
>   
>       ASSERT((addr & ~3) == 0xcfc);
> -    cf8 = ACCESS_ONCE(d->arch.hvm_domain.pci_cf8);
> +    cf8 = ACCESS_ONCE(d->arch.hvm.pci_cf8);
>       if ( !CF8_ENABLED(cf8) )
>           return X86EMUL_UNHANDLEABLE;
>   
> @@ -397,7 +397,7 @@ static const struct hvm_mmcfg *vpci_mmcfg_find(const struct domain *d,
>   {
>       const struct hvm_mmcfg *mmcfg;
>   
> -    list_for_each_entry ( mmcfg, &d->arch.hvm_domain.mmcfg_regions, next )
> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
>           if ( addr >= mmcfg->addr && addr < mmcfg->addr + mmcfg->size )
>               return mmcfg;
>   
> @@ -420,9 +420,9 @@ static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
>       struct domain *d = v->domain;
>       bool found;
>   
> -    read_lock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_lock(&d->arch.hvm.mmcfg_lock);
>       found = vpci_mmcfg_find(d, addr);
> -    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_unlock(&d->arch.hvm.mmcfg_lock);
>   
>       return found;
>   }
> @@ -437,16 +437,16 @@ static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
>   
>       *data = ~0ul;
>   
> -    read_lock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_lock(&d->arch.hvm.mmcfg_lock);
>       mmcfg = vpci_mmcfg_find(d, addr);
>       if ( !mmcfg )
>       {
> -        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +        read_unlock(&d->arch.hvm.mmcfg_lock);
>           return X86EMUL_RETRY;
>       }
>   
>       reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
> -    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_unlock(&d->arch.hvm.mmcfg_lock);
>   
>       if ( !vpci_access_allowed(reg, len) ||
>            (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
> @@ -479,16 +479,16 @@ static int vpci_mmcfg_write(struct vcpu *v, unsigned long addr,
>       unsigned int reg;
>       pci_sbdf_t sbdf;
>   
> -    read_lock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_lock(&d->arch.hvm.mmcfg_lock);
>       mmcfg = vpci_mmcfg_find(d, addr);
>       if ( !mmcfg )
>       {
> -        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +        read_unlock(&d->arch.hvm.mmcfg_lock);
>           return X86EMUL_RETRY;
>       }
>   
>       reg = vpci_mmcfg_decode_addr(mmcfg, addr, &sbdf);
> -    read_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +    read_unlock(&d->arch.hvm.mmcfg_lock);
>   
>       if ( !vpci_access_allowed(reg, len) ||
>            (reg + len) > PCI_CFG_SPACE_EXP_SIZE )
> @@ -527,8 +527,8 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
>       new->segment = seg;
>       new->size = (end_bus - start_bus + 1) << 20;
>   
> -    write_lock(&d->arch.hvm_domain.mmcfg_lock);
> -    list_for_each_entry ( mmcfg, &d->arch.hvm_domain.mmcfg_regions, next )
> +    write_lock(&d->arch.hvm.mmcfg_lock);
> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
>           if ( new->addr < mmcfg->addr + mmcfg->size &&
>                mmcfg->addr < new->addr + new->size )
>           {
> @@ -539,25 +539,25 @@ int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
>                    new->segment == mmcfg->segment &&
>                    new->size == mmcfg->size )
>                   ret = 0;
> -            write_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +            write_unlock(&d->arch.hvm.mmcfg_lock);
>               xfree(new);
>               return ret;
>           }
>   
> -    if ( list_empty(&d->arch.hvm_domain.mmcfg_regions) )
> +    if ( list_empty(&d->arch.hvm.mmcfg_regions) )
>           register_mmio_handler(d, &vpci_mmcfg_ops);
>   
> -    list_add(&new->next, &d->arch.hvm_domain.mmcfg_regions);
> -    write_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +    list_add(&new->next, &d->arch.hvm.mmcfg_regions);
> +    write_unlock(&d->arch.hvm.mmcfg_lock);
>   
>       return 0;
>   }
>   
>   void destroy_vpci_mmcfg(struct domain *d)
>   {
> -    struct list_head *mmcfg_regions = &d->arch.hvm_domain.mmcfg_regions;
> +    struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
>   
> -    write_lock(&d->arch.hvm_domain.mmcfg_lock);
> +    write_lock(&d->arch.hvm.mmcfg_lock);
>       while ( !list_empty(mmcfg_regions) )
>       {
>           struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
> @@ -566,7 +566,7 @@ void destroy_vpci_mmcfg(struct domain *d)
>           list_del(&mmcfg->next);
>           xfree(mmcfg);
>       }
> -    write_unlock(&d->arch.hvm_domain.mmcfg_lock);
> +    write_unlock(&d->arch.hvm.mmcfg_lock);
>   }
>   
>   /*
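
The io.c hunks are mechanical too, but the pci_cf8 handling they keep touching
deserves one line of context: a write to port 0xcf8 latches a config-space
address, port 0xcfc then accesses the dword it selects, and CF8_ENABLED() is
just bit 31.  The layout of that address register is architectural; the helper
names below are made up for illustration:

    #include <stdint.h>
    #include <stdbool.h>

    /* Standard PCI configuration address register (port 0xcf8). */
    static inline bool cf8_enabled(uint32_t cf8)
    {
        return cf8 & (1u << 31);             /* bit 31: config cycles enabled */
    }

    static inline unsigned int cf8_bus(uint32_t cf8)  { return (cf8 >> 16) & 0xff; }
    static inline unsigned int cf8_dev(uint32_t cf8)  { return (cf8 >> 11) & 0x1f; }
    static inline unsigned int cf8_func(uint32_t cf8) { return (cf8 >>  8) & 0x07; }

    /* Register offset: bits 7:2 of CF8 plus the low bits of the 0xcfc-0xcff
     * port actually hit, giving byte granularity within the selected dword. */
    static inline unsigned int cf8_reg(uint32_t cf8, uint16_t port)
    {
        return (cf8 & 0xfc) | (port & 3);
    }

The ACCESS_ONCE() on d->arch.hvm.pci_cf8 is there, I assume, because another
vCPU can rewrite 0xcf8 between the two halves of a config access, so the value
has to be sampled exactly once per 0xcfc access.
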
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 940a2c9..8d60b02 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -38,13 +38,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>                                struct hvm_ioreq_server *s)
>   {
>       ASSERT(id < MAX_NR_IOREQ_SERVERS);
> -    ASSERT(!s || !d->arch.hvm_domain.ioreq_server.server[id]);
> +    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
>   
> -    d->arch.hvm_domain.ioreq_server.server[id] = s;
> +    d->arch.hvm.ioreq_server.server[id] = s;
>   }
>   
>   #define GET_IOREQ_SERVER(d, id) \
> -    (d)->arch.hvm_domain.ioreq_server.server[id]
> +    (d)->arch.hvm.ioreq_server.server[id]
>   
>   static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
>                                                    unsigned int id)
> @@ -247,10 +247,10 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
>   
>       ASSERT(!IS_DEFAULT(s));
>   
> -    for ( i = 0; i < sizeof(d->arch.hvm_domain.ioreq_gfn.mask) * 8; i++ )
> +    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
>       {
> -        if ( test_and_clear_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask) )
> -            return _gfn(d->arch.hvm_domain.ioreq_gfn.base + i);
> +        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
> +            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
>       }
>   
>       return INVALID_GFN;
> @@ -259,12 +259,12 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
>   static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
>   {
>       struct domain *d = s->target;
> -    unsigned int i = gfn_x(gfn) - d->arch.hvm_domain.ioreq_gfn.base;
> +    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
>   
>       ASSERT(!IS_DEFAULT(s));
>       ASSERT(!gfn_eq(gfn, INVALID_GFN));
>   
> -    set_bit(i, &d->arch.hvm_domain.ioreq_gfn.mask);
> +    set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
>   }
>   
>   static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> @@ -307,8 +307,8 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>   
>       if ( IS_DEFAULT(s) )
>           iorp->gfn = _gfn(buf ?
> -                         d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_PFN] :
> -                         d->arch.hvm_domain.params[HVM_PARAM_IOREQ_PFN]);
> +                         d->arch.hvm.params[HVM_PARAM_BUFIOREQ_PFN] :
> +                         d->arch.hvm.params[HVM_PARAM_IOREQ_PFN]);
>       else
>           iorp->gfn = hvm_alloc_ioreq_gfn(s);
>   
> @@ -394,7 +394,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
>       unsigned int id;
>       bool found = false;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       FOR_EACH_IOREQ_SERVER(d, id, s)
>       {
> @@ -405,7 +405,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
>           }
>       }
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return found;
>   }
> @@ -492,7 +492,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>   
>           s->bufioreq_evtchn = rc;
>           if ( IS_DEFAULT(s) )
> -            d->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
> +            d->arch.hvm.params[HVM_PARAM_BUFIOREQ_EVTCHN] =
>                   s->bufioreq_evtchn;
>       }
>   
> @@ -797,7 +797,7 @@ int hvm_create_ioreq_server(struct domain *d, bool is_default,
>           return -ENOMEM;
>   
>       domain_pause(d);
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       if ( is_default )
>       {
> @@ -841,13 +841,13 @@ int hvm_create_ioreq_server(struct domain *d, bool is_default,
>       if ( id )
>           *id = i;
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>       domain_unpause(d);
>   
>       return 0;
>   
>    fail:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>       domain_unpause(d);
>   
>       xfree(s);
> @@ -862,7 +862,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>       if ( id == DEFAULT_IOSERVID )
>           return -EPERM;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -898,7 +898,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
>       rc = 0;
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -914,7 +914,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>       if ( id == DEFAULT_IOSERVID )
>           return -EOPNOTSUPP;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -950,7 +950,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
>       rc = 0;
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -967,7 +967,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
>       if ( !is_hvm_domain(d) )
>           return -EINVAL;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -1007,7 +1007,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
>       }
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -1026,7 +1026,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
>       if ( id == DEFAULT_IOSERVID )
>           return -EOPNOTSUPP;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -1064,7 +1064,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
>       rc = rangeset_add_range(r, start, end);
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -1083,7 +1083,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>       if ( id == DEFAULT_IOSERVID )
>           return -EOPNOTSUPP;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -1121,7 +1121,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>       rc = rangeset_remove_range(r, start, end);
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -1149,7 +1149,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>       if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
>           return -EINVAL;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -1166,7 +1166,7 @@ int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
>       rc = p2m_set_ioreq_server(d, flags, s);
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       if ( rc == 0 && flags == 0 )
>       {
> @@ -1188,7 +1188,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
>       if ( id == DEFAULT_IOSERVID )
>           return -EOPNOTSUPP;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       s = get_ioreq_server(d, id);
>   
> @@ -1214,7 +1214,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
>       rc = 0;
>   
>    out:
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>       return rc;
>   }
>   
> @@ -1224,7 +1224,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
>       unsigned int id;
>       int rc;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       FOR_EACH_IOREQ_SERVER(d, id, s)
>       {
> @@ -1233,7 +1233,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
>               goto fail;
>       }
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return 0;
>   
> @@ -1248,7 +1248,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
>           hvm_ioreq_server_remove_vcpu(s, v);
>       }
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       return rc;
>   }
> @@ -1258,12 +1258,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
>       struct hvm_ioreq_server *s;
>       unsigned int id;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       FOR_EACH_IOREQ_SERVER(d, id, s)
>           hvm_ioreq_server_remove_vcpu(s, v);
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   }
>   
>   void hvm_destroy_all_ioreq_servers(struct domain *d)
> @@ -1271,7 +1271,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>       struct hvm_ioreq_server *s;
>       unsigned int id;
>   
> -    spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
>   
>       /* No need to domain_pause() as the domain is being torn down */
>   
> @@ -1291,7 +1291,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>           xfree(s);
>       }
>   
> -    spin_unlock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
>   }
>   
>   struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> @@ -1306,7 +1306,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>       if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>           return GET_IOREQ_SERVER(d, DEFAULT_IOSERVID);
>   
> -    cf8 = d->arch.hvm_domain.pci_cf8;
> +    cf8 = d->arch.hvm.pci_cf8;
>   
>       if ( p->type == IOREQ_TYPE_PIO &&
>            (p->addr & ~3) == 0xcfc &&
> @@ -1564,7 +1564,7 @@ static int hvm_access_cf8(
>       struct domain *d = current->domain;
>   
>       if ( dir == IOREQ_WRITE && bytes == 4 )
> -        d->arch.hvm_domain.pci_cf8 = *val;
> +        d->arch.hvm.pci_cf8 = *val;
>   
>       /* We always need to fall through to the catch all emulator */
>       return X86EMUL_UNHANDLEABLE;
> @@ -1572,7 +1572,7 @@ static int hvm_access_cf8(
>   
>   void hvm_ioreq_init(struct domain *d)
>   {
> -    spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
> +    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
>   
>       register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>   }
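
ioreq.c: same story, but the alloc/free pair in the hunks above is a compact
idiom worth naming: a bitmap allocator over a window of guest frames whose
base and width come from HVM_PARAM_IOREQ_SERVER_PFN and
HVM_PARAM_NR_IOREQ_SERVER_PAGES.  A minimal single-threaded sketch of the idea
(the real code uses the atomic set_bit/test_and_clear_bit helpers):

    #include <stdint.h>

    #define INVALID_GFN (~0ull)

    struct gfn_pool {
        uint64_t base;              /* first GFN of the window            */
        unsigned long mask;         /* bit i set => base + i is available */
    };

    /* Claim the lowest available frame, or INVALID_GFN if none is left. */
    static uint64_t alloc_gfn(struct gfn_pool *p)
    {
        for ( unsigned int i = 0; i < sizeof(p->mask) * 8; i++ )
            if ( p->mask & (1ul << i) )
            {
                p->mask &= ~(1ul << i);
                return p->base + i;
            }

        return INVALID_GFN;
    }

    /* Return a frame to the pool. */
    static void free_gfn(struct gfn_pool *p, uint64_t gfn)
    {
        p->mask |= 1ul << (gfn - p->base);
    }
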
> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> index dfe8ed6..1ded2c2 100644
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -52,11 +52,11 @@ int hvm_ioapic_assert(struct domain *d, unsigned int gsi, bool level)
>           return -1;
>       }
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       if ( !level || hvm_irq->gsi_assert_count[gsi]++ == 0 )
>           assert_gsi(d, gsi);
>       vector = vioapic_get_vector(d, gsi);
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       return vector;
>   }
> @@ -71,9 +71,9 @@ void hvm_ioapic_deassert(struct domain *d, unsigned int gsi)
>           return;
>       }
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       hvm_irq->gsi_assert_count[gsi]--;
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   static void assert_irq(struct domain *d, unsigned ioapic_gsi, unsigned pic_irq)
> @@ -122,9 +122,9 @@ static void __hvm_pci_intx_assert(
>   void hvm_pci_intx_assert(
>       struct domain *d, unsigned int device, unsigned int intx)
>   {
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       __hvm_pci_intx_assert(d, device, intx);
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   static void __hvm_pci_intx_deassert(
> @@ -156,9 +156,9 @@ static void __hvm_pci_intx_deassert(
>   void hvm_pci_intx_deassert(
>       struct domain *d, unsigned int device, unsigned int intx)
>   {
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       __hvm_pci_intx_deassert(d, device, intx);
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   void hvm_gsi_assert(struct domain *d, unsigned int gsi)
> @@ -179,13 +179,13 @@ void hvm_gsi_assert(struct domain *d, unsigned int gsi)
>        * for the hardware domain, Xen needs to rely on gsi_assert_count in order
>        * to know if the GSI is pending or not.
>        */
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       if ( !hvm_irq->gsi_assert_count[gsi] )
>       {
>           hvm_irq->gsi_assert_count[gsi] = 1;
>           assert_gsi(d, gsi);
>       }
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   void hvm_gsi_deassert(struct domain *d, unsigned int gsi)
> @@ -198,9 +198,9 @@ void hvm_gsi_deassert(struct domain *d, unsigned int gsi)
>           return;
>       }
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>       hvm_irq->gsi_assert_count[gsi] = 0;
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
> @@ -213,7 +213,7 @@ int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
>   
>       ASSERT(isa_irq <= 15);
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       if ( !__test_and_set_bit(isa_irq, &hvm_irq->isa_irq.i) &&
>            (hvm_irq->gsi_assert_count[gsi]++ == 0) )
> @@ -222,7 +222,7 @@ int hvm_isa_irq_assert(struct domain *d, unsigned int isa_irq,
>       if ( get_vector )
>           vector = get_vector(d, gsi);
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       return vector;
>   }
> @@ -235,13 +235,13 @@ void hvm_isa_irq_deassert(
>   
>       ASSERT(isa_irq <= 15);
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       if ( __test_and_clear_bit(isa_irq, &hvm_irq->isa_irq.i) &&
>            (--hvm_irq->gsi_assert_count[gsi] == 0) )
>           deassert_irq(d, isa_irq);
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   static void hvm_set_callback_irq_level(struct vcpu *v)
> @@ -252,7 +252,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
>   
>       ASSERT(v->vcpu_id == 0);
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       /* NB. Do not check the evtchn_upcall_mask. It is not used in HVM mode. */
>       asserted = !!vcpu_info(v, evtchn_upcall_pending);
> @@ -289,7 +289,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
>       }
>   
>    out:
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   void hvm_maybe_deassert_evtchn_irq(void)
> @@ -331,7 +331,7 @@ int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
>       if ( (link > 3) || (isa_irq > 15) )
>           return -EINVAL;
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       old_isa_irq = hvm_irq->pci_link.route[link];
>       if ( old_isa_irq == isa_irq )
> @@ -363,7 +363,7 @@ int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
>       }
>   
>    out:
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       dprintk(XENLOG_G_INFO, "Dom%u PCI link %u changed %u -> %u\n",
>               d->domain_id, link, old_isa_irq, isa_irq);
> @@ -431,7 +431,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>            (!has_vlapic(d) || !has_vioapic(d) || !has_vpic(d)) )
>           return;
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       /* Tear down old callback via. */
>       if ( hvm_irq->callback_via_asserted )
> @@ -481,7 +481,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>           break;
>       }
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       for_each_vcpu ( d, v )
>           if ( is_vcpu_online(v) )
> @@ -509,7 +509,7 @@ void hvm_set_callback_via(struct domain *d, uint64_t via)
>   
>   struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
>   {
> -    struct hvm_domain *plat = &v->domain->arch.hvm_domain;
> +    struct hvm_domain *plat = &v->domain->arch.hvm;
>       int vector;
>   
>       if ( unlikely(v->nmi_pending) )
> @@ -645,7 +645,7 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
>       unsigned int asserted, pdev, pintx;
>       int rc;
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       pdev  = hvm_irq->callback_via.pci.dev;
>       pintx = hvm_irq->callback_via.pci.intx;
> @@ -666,7 +666,7 @@ static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
>       if ( asserted )
>           __hvm_pci_intx_assert(d, pdev, pintx);
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       return rc;
>   }
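
irq.c carries the bulk of the lock-name churn.  The invariant irq_lock
protects is easy to state from the code above: for a level-triggered GSI,
gsi_assert_count is the number of sources currently asserting it, and the
virtual line is only raised on the 0 to 1 transition and lowered on the
1 to 0 transition.  A toy version of that refcount, with a pthread mutex
standing in for the spinlock:

    #include <pthread.h>

    static pthread_mutex_t irq_lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned int gsi_assert_count[64];

    /* Called by each source (e.g. a PCI INTx line routed to this GSI). */
    static void gsi_assert(unsigned int gsi, void (*raise)(unsigned int))
    {
        pthread_mutex_lock(&irq_lock);
        if ( gsi_assert_count[gsi]++ == 0 )
            raise(gsi);                 /* first asserter raises the line */
        pthread_mutex_unlock(&irq_lock);
    }

    static void gsi_deassert(unsigned int gsi, void (*lower)(unsigned int))
    {
        pthread_mutex_lock(&irq_lock);
        if ( --gsi_assert_count[gsi] == 0 )
            lower(gsi);                 /* last deasserter lowers it */
        pthread_mutex_unlock(&irq_lock);
    }
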
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index edfe5cd..8a772bc 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -539,12 +539,12 @@ static DEFINE_RCU_READ_LOCK(pinned_cacheattr_rcu_lock);
>   
>   void hvm_init_cacheattr_region_list(struct domain *d)
>   {
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.pinned_cacheattr_ranges);
> +    INIT_LIST_HEAD(&d->arch.hvm.pinned_cacheattr_ranges);
>   }
>   
>   void hvm_destroy_cacheattr_region_list(struct domain *d)
>   {
> -    struct list_head *head = &d->arch.hvm_domain.pinned_cacheattr_ranges;
> +    struct list_head *head = &d->arch.hvm.pinned_cacheattr_ranges;
>       struct hvm_mem_pinned_cacheattr_range *range;
>   
>       while ( !list_empty(head) )
> @@ -568,7 +568,7 @@ int hvm_get_mem_pinned_cacheattr(struct domain *d, gfn_t gfn,
>   
>       rcu_read_lock(&pinned_cacheattr_rcu_lock);
>       list_for_each_entry_rcu ( range,
> -                              &d->arch.hvm_domain.pinned_cacheattr_ranges,
> +                              &d->arch.hvm.pinned_cacheattr_ranges,
>                                 list )
>       {
>           if ( ((gfn_x(gfn) & mask) >= range->start) &&
> @@ -612,7 +612,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
>           /* Remove the requested range. */
>           rcu_read_lock(&pinned_cacheattr_rcu_lock);
>           list_for_each_entry_rcu ( range,
> -                                  &d->arch.hvm_domain.pinned_cacheattr_ranges,
> +                                  &d->arch.hvm.pinned_cacheattr_ranges,
>                                     list )
>               if ( range->start == gfn_start && range->end == gfn_end )
>               {
> @@ -655,7 +655,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
>   
>       rcu_read_lock(&pinned_cacheattr_rcu_lock);
>       list_for_each_entry_rcu ( range,
> -                              &d->arch.hvm_domain.pinned_cacheattr_ranges,
> +                              &d->arch.hvm.pinned_cacheattr_ranges,
>                                 list )
>       {
>           if ( range->start == gfn_start && range->end == gfn_end )
> @@ -682,7 +682,7 @@ int hvm_set_mem_pinned_cacheattr(struct domain *d, uint64_t gfn_start,
>       range->end = gfn_end;
>       range->type = type;
>   
> -    list_add_rcu(&range->list, &d->arch.hvm_domain.pinned_cacheattr_ranges);
> +    list_add_rcu(&range->list, &d->arch.hvm.pinned_cacheattr_ranges);
>       p2m_memory_type_changed(d);
>       if ( type != PAT_TYPE_WRBACK )
>           flush_all(FLUSH_CACHE);
> @@ -827,7 +827,7 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
>   
>       if ( direct_mmio )
>       {
> -        if ( (mfn_x(mfn) ^ d->arch.hvm_domain.vmx.apic_access_mfn) >> order )
> +        if ( (mfn_x(mfn) ^ d->arch.hvm.vmx.apic_access_mfn) >> order )
>               return MTRR_TYPE_UNCACHABLE;
>           if ( order )
>               return -1;
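
mtrr.c is again just the field rename, but the expression the last hunk
brushes against, (mfn_x(mfn) ^ d->arch.hvm.vmx.apic_access_mfn) >> order, is a
handy idiom: XOR then shift is zero exactly when the two frame numbers fall in
the same naturally aligned 2^order block.  A quick standalone check of that
property:

    #include <assert.h>
    #include <stdint.h>
    #include <stdbool.h>

    /* True iff a and b lie in the same naturally aligned 2^order block. */
    static bool same_block(uint64_t a, uint64_t b, unsigned int order)
    {
        return ((a ^ b) >> order) == 0;
    }

    int main(void)
    {
        assert(same_block(0x1000, 0x1003, 2));   /* both in [0x1000, 0x1004) */
        assert(!same_block(0x1003, 0x1004, 2));  /* straddles the boundary   */
        assert(same_block(0xabcd, 0xabcd, 0));   /* order 0 reduces to ==    */
        return 0;
    }
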
> diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
> index 435647f..75b9408 100644
> --- a/xen/arch/x86/hvm/pmtimer.c
> +++ b/xen/arch/x86/hvm/pmtimer.c
> @@ -56,7 +56,7 @@
>   /* Dispatch SCIs based on the PM1a_STS and PM1a_EN registers */
>   static void pmt_update_sci(PMTState *s)
>   {
> -    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm_domain.acpi;
> +    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm.acpi;
>   
>       ASSERT(spin_is_locked(&s->lock));
>   
> @@ -68,26 +68,26 @@ static void pmt_update_sci(PMTState *s)
>   
>   void hvm_acpi_power_button(struct domain *d)
>   {
> -    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
> +    PMTState *s = &d->arch.hvm.pl_time->vpmt;
>   
>       if ( !has_vpm(d) )
>           return;
>   
>       spin_lock(&s->lock);
> -    d->arch.hvm_domain.acpi.pm1a_sts |= PWRBTN_STS;
> +    d->arch.hvm.acpi.pm1a_sts |= PWRBTN_STS;
>       pmt_update_sci(s);
>       spin_unlock(&s->lock);
>   }
>   
>   void hvm_acpi_sleep_button(struct domain *d)
>   {
> -    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
> +    PMTState *s = &d->arch.hvm.pl_time->vpmt;
>   
>       if ( !has_vpm(d) )
>           return;
>   
>       spin_lock(&s->lock);
> -    d->arch.hvm_domain.acpi.pm1a_sts |= PWRBTN_STS;
> +    d->arch.hvm.acpi.pm1a_sts |= PWRBTN_STS;
>       pmt_update_sci(s);
>       spin_unlock(&s->lock);
>   }
> @@ -97,7 +97,7 @@ void hvm_acpi_sleep_button(struct domain *d)
>   static void pmt_update_time(PMTState *s)
>   {
>       uint64_t curr_gtime, tmp;
> -    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm_domain.acpi;
> +    struct hvm_hw_acpi *acpi = &s->vcpu->domain->arch.hvm.acpi;
>       uint32_t tmr_val = acpi->tmr_val, msb = tmr_val & TMR_VAL_MSB;
>       
>       ASSERT(spin_is_locked(&s->lock));
> @@ -137,7 +137,7 @@ static void pmt_timer_callback(void *opaque)
>   
>       /* How close are we to the next MSB flip? */
>       pmt_cycles_until_flip = TMR_VAL_MSB -
> -        (s->vcpu->domain->arch.hvm_domain.acpi.tmr_val & (TMR_VAL_MSB - 1));
> +        (s->vcpu->domain->arch.hvm.acpi.tmr_val & (TMR_VAL_MSB - 1));
>   
>       /* Overall time between MSB flips */
>       time_until_flip = (1000000000ULL << 23) / FREQUENCE_PMTIMER;
> @@ -156,13 +156,13 @@ static int handle_evt_io(
>       int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>   {
>       struct vcpu *v = current;
> -    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm_domain.acpi;
> -    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
> +    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm.acpi;
> +    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
>       uint32_t addr, data, byte;
>       int i;
>   
>       addr = port -
> -        ((v->domain->arch.hvm_domain.params[
> +        ((v->domain->arch.hvm.params[
>               HVM_PARAM_ACPI_IOPORTS_LOCATION] == 0) ?
>            PM1a_STS_ADDR_V0 : PM1a_STS_ADDR_V1);
>   
> @@ -220,8 +220,8 @@ static int handle_pmt_io(
>       int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>   {
>       struct vcpu *v = current;
> -    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm_domain.acpi;
> -    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
> +    struct hvm_hw_acpi *acpi = &v->domain->arch.hvm.acpi;
> +    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
>   
>       if ( bytes != 4 || dir != IOREQ_READ )
>       {
> @@ -251,8 +251,8 @@ static int handle_pmt_io(
>   
>   static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>   {
> -    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
> -    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
> +    struct hvm_hw_acpi *acpi = &d->arch.hvm.acpi;
> +    PMTState *s = &d->arch.hvm.pl_time->vpmt;
>       uint32_t x, msb = acpi->tmr_val & TMR_VAL_MSB;
>       int rc;
>   
> @@ -282,8 +282,8 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>   
>   static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>   {
> -    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi;
> -    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
> +    struct hvm_hw_acpi *acpi = &d->arch.hvm.acpi;
> +    PMTState *s = &d->arch.hvm.pl_time->vpmt;
>   
>       if ( !has_vpm(d) )
>           return -ENODEV;
> @@ -320,7 +320,7 @@ int pmtimer_change_ioport(struct domain *d, unsigned int version)
>           return -ENODEV;
>   
>       /* Check that version is changing. */
> -    old_version = d->arch.hvm_domain.params[HVM_PARAM_ACPI_IOPORTS_LOCATION];
> +    old_version = d->arch.hvm.params[HVM_PARAM_ACPI_IOPORTS_LOCATION];
>       if ( version == old_version )
>           return 0;
>   
> @@ -346,7 +346,7 @@ int pmtimer_change_ioport(struct domain *d, unsigned int version)
>   
>   void pmtimer_init(struct vcpu *v)
>   {
> -    PMTState *s = &v->domain->arch.hvm_domain.pl_time->vpmt;
> +    PMTState *s = &v->domain->arch.hvm.pl_time->vpmt;
>   
>       if ( !has_vpm(v->domain) )
>           return;
> @@ -370,7 +370,7 @@ void pmtimer_init(struct vcpu *v)
>   
>   void pmtimer_deinit(struct domain *d)
>   {
> -    PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
> +    PMTState *s = &d->arch.hvm.pl_time->vpmt;
>   
>       if ( !has_vpm(d) )
>           return;
> @@ -384,7 +384,7 @@ void pmtimer_reset(struct domain *d)
>           return;
>   
>       /* Reset the counter. */
> -    d->arch.hvm_domain.acpi.tmr_val = 0;
> +    d->arch.hvm.acpi.tmr_val = 0;
>   }
>   
>   /*
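
pmtimer.c: rename aside, the (1000000000ULL << 23) / FREQUENCE_PMTIMER
expression in pmt_timer_callback() is less cryptic with numbers plugged in.
The ACPI PM timer runs at 3.579545 MHz, TMR_VAL_MSB is bit 23, and the MSB
toggle is what sets TMR_STS and can raise an SCI, so the callback cares about
how long 2^23 ticks take.  A quick worked check (assuming FREQUENCE_PMTIMER is
the usual 3579545 Hz):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t freq = 3579545;                      /* ACPI PM timer, Hz */
        const uint64_t ns_per_flip = (1000000000ULL << 23) / freq;

        /* 2^23 ticks at 3.579545 MHz is roughly 2.343 seconds between flips. */
        printf("MSB flips every %llu ns (~%.3f s)\n",
               (unsigned long long)ns_per_flip, ns_per_flip / 1e9);
        return 0;
    }
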
> diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> index 96921bb..1828587 100644
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -38,7 +38,7 @@
>   #define MIN_PER_HOUR    60
>   #define HOUR_PER_DAY    24
>   
> -#define domain_vrtc(x) (&(x)->arch.hvm_domain.pl_time->vrtc)
> +#define domain_vrtc(x) (&(x)->arch.hvm.pl_time->vrtc)
>   #define vcpu_vrtc(x)   (domain_vrtc((x)->domain))
>   #define vrtc_domain(x) (container_of(x, struct pl_time, vrtc)->domain)
>   #define vrtc_vcpu(x)   (pt_global_vcpu_target(vrtc_domain(x)))
> @@ -148,7 +148,7 @@ static void rtc_timer_update(RTCState *s)
>                   s_time_t now = NOW();
>   
>                   s->period = period;
> -                if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
> +                if ( v->domain->arch.hvm.params[HVM_PARAM_VPT_ALIGN] )
>                       delta = 0;
>                   else
>                       delta = period - ((now - s->start_time) % period);
> diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
> index d2dc430..0ace160 100644
> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -39,7 +39,7 @@ void arch_hvm_save(struct domain *d, struct hvm_save_header *hdr)
>       hdr->gtsc_khz = d->arch.tsc_khz;
>   
>       /* Time when saving started */
> -    d->arch.hvm_domain.sync_tsc = rdtsc();
> +    d->arch.hvm.sync_tsc = rdtsc();
>   }
>   
>   int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
> @@ -74,10 +74,10 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
>           hvm_set_rdtsc_exiting(d, 1);
>   
>       /* Time when restore started  */
> -    d->arch.hvm_domain.sync_tsc = rdtsc();
> +    d->arch.hvm.sync_tsc = rdtsc();
>   
>       /* VGA state is not saved/restored, so we nobble the cache. */
> -    d->arch.hvm_domain.stdvga.cache = STDVGA_CACHE_DISABLED;
> +    d->arch.hvm.stdvga.cache = STDVGA_CACHE_DISABLED;
>   
>       return 0;
>   }
> diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
> index 925bab2..bd398db 100644
> --- a/xen/arch/x86/hvm/stdvga.c
> +++ b/xen/arch/x86/hvm/stdvga.c
> @@ -134,7 +134,7 @@ static bool_t stdvga_cache_is_enabled(const struct hvm_hw_stdvga *s)
>   
>   static int stdvga_outb(uint64_t addr, uint8_t val)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>       int rc = 1, prev_stdvga = s->stdvga;
>   
>       switch ( addr )
> @@ -202,7 +202,7 @@ static void stdvga_out(uint32_t port, uint32_t bytes, uint32_t val)
>   static int stdvga_intercept_pio(
>       int dir, unsigned int port, unsigned int bytes, uint32_t *val)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>   
>       if ( dir == IOREQ_WRITE )
>       {
> @@ -252,7 +252,7 @@ static unsigned int stdvga_mem_offset(
>   
>   static uint8_t stdvga_mem_readb(uint64_t addr)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>       int plane;
>       uint32_t ret, *vram_l;
>       uint8_t *vram_b;
> @@ -347,7 +347,7 @@ static int stdvga_mem_read(const struct hvm_io_handler *handler,
>   
>   static void stdvga_mem_writeb(uint64_t addr, uint32_t val)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>       int plane, write_mode, b, func_select, mask;
>       uint32_t write_mask, bit_mask, set_mask, *vram_l;
>       uint8_t *vram_b;
> @@ -457,7 +457,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
>                               uint64_t addr, uint32_t size,
>                               uint64_t data)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>       ioreq_t p = {
>           .type = IOREQ_TYPE_COPY,
>           .addr = addr,
> @@ -517,7 +517,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
>   static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
>                                   const ioreq_t *p)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>   
>       /*
>        * The range check must be done without taking the lock, to avoid
> @@ -560,7 +560,7 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
>   
>   static void stdvga_mem_complete(const struct hvm_io_handler *handler)
>   {
> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>   
>       spin_unlock(&s->lock);
>   }
> @@ -574,7 +574,7 @@ static const struct hvm_io_ops stdvga_mem_ops = {
>   
>   void stdvga_init(struct domain *d)
>   {
> -    struct hvm_hw_stdvga *s = &d->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga;
>       struct page_info *pg;
>       unsigned int i;
>   
> @@ -615,7 +615,7 @@ void stdvga_init(struct domain *d)
>   
>   void stdvga_deinit(struct domain *d)
>   {
> -    struct hvm_hw_stdvga *s = &d->arch.hvm_domain.stdvga;
> +    struct hvm_hw_stdvga *s = &d->arch.hvm.stdvga;
>       int i;
>   
>       if ( !has_vvga(d) )
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index a16f372..2d52247 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1197,7 +1197,7 @@ void svm_vmenter_helper(const struct cpu_user_regs *regs)
>   
>   static void svm_guest_osvw_init(struct domain *d)
>   {
> -    struct svm_domain *svm = &d->arch.hvm_domain.svm;
> +    struct svm_domain *svm = &d->arch.hvm.svm;
>   
>       spin_lock(&osvw_lock);
>   
> @@ -2006,8 +2006,7 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>       case MSR_AMD_OSVW_STATUS:
>           if ( !d->arch.cpuid->extd.osvw )
>               goto gpf;
> -        *msr_content =
> -            d->arch.hvm_domain.svm.osvw.raw[msr - MSR_AMD_OSVW_ID_LENGTH];
> +        *msr_content = d->arch.hvm.svm.osvw.raw[msr - MSR_AMD_OSVW_ID_LENGTH];
>           break;
>   
>       default:
> diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
> index 04518fd..d31fcfa 100644
> --- a/xen/arch/x86/hvm/svm/vmcb.c
> +++ b/xen/arch/x86/hvm/svm/vmcb.c
> @@ -106,7 +106,7 @@ static int construct_vmcb(struct vcpu *v)
>           svm_disable_intercept_for_msr(v, MSR_AMD64_LWP_CBADDR);
>   
>       vmcb->_msrpm_base_pa = (u64)virt_to_maddr(arch_svm->msrpm);
> -    vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm_domain.io_bitmap);
> +    vmcb->_iopm_base_pa = __pa(v->domain->arch.hvm.io_bitmap);
>   
>       /* Virtualise EFLAGS.IF and LAPIC TPR (CR8). */
>       vmcb->_vintr.fields.intr_masking = 1;
> diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
> index 97b419f..9675424 100644
> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -49,7 +49,7 @@ static struct hvm_vioapic *addr_vioapic(const struct domain *d,
>   {
>       unsigned int i;
>   
> -    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> +    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>       {
>           struct hvm_vioapic *vioapic = domain_vioapic(d, i);
>   
> @@ -66,7 +66,7 @@ static struct hvm_vioapic *gsi_vioapic(const struct domain *d,
>   {
>       unsigned int i;
>   
> -    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> +    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>       {
>           struct hvm_vioapic *vioapic = domain_vioapic(d, i);
>   
> @@ -214,7 +214,7 @@ static void vioapic_write_redirent(
>       int unmasked = 0;
>       unsigned int gsi = vioapic->base_gsi + idx;
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
>       pent = &vioapic->redirtbl[idx];
>       ent  = *pent;
> @@ -264,7 +264,7 @@ static void vioapic_write_redirent(
>           vioapic_deliver(vioapic, idx);
>       }
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   
>       if ( gsi == 0 || unmasked )
>           pt_may_unmask_irq(d, NULL);
> @@ -388,7 +388,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
>       struct vcpu *v;
>       unsigned int irq = vioapic->base_gsi + pin;
>   
> -    ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
> +    ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
>   
>       HVM_DBG_LOG(DBG_LEVEL_IOAPIC,
>                   "dest=%x dest_mode=%x delivery_mode=%x "
> @@ -476,7 +476,7 @@ void vioapic_irq_positive_edge(struct domain *d, unsigned int irq)
>       HVM_DBG_LOG(DBG_LEVEL_IOAPIC, "irq %x", irq);
>   
>       ASSERT(pin < vioapic->nr_pins);
> -    ASSERT(spin_is_locked(&d->arch.hvm_domain.irq_lock));
> +    ASSERT(spin_is_locked(&d->arch.hvm.irq_lock));
>   
>       ent = &vioapic->redirtbl[pin];
>       if ( ent->fields.mask )
> @@ -501,9 +501,9 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
>   
>       ASSERT(has_vioapic(d));
>   
> -    spin_lock(&d->arch.hvm_domain.irq_lock);
> +    spin_lock(&d->arch.hvm.irq_lock);
>   
> -    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> +    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>       {
>           struct hvm_vioapic *vioapic = domain_vioapic(d, i);
>           unsigned int pin;
> @@ -518,9 +518,9 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
>   
>               if ( iommu_enabled )
>               {
> -                spin_unlock(&d->arch.hvm_domain.irq_lock);
> +                spin_unlock(&d->arch.hvm.irq_lock);
>                   hvm_dpci_eoi(d, vioapic->base_gsi + pin, ent);
> -                spin_lock(&d->arch.hvm_domain.irq_lock);
> +                spin_lock(&d->arch.hvm.irq_lock);
>               }
>   
>               if ( (ent->fields.trig_mode == VIOAPIC_LEVEL_TRIG) &&
> @@ -533,7 +533,7 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
>           }
>       }
>   
> -    spin_unlock(&d->arch.hvm_domain.irq_lock);
> +    spin_unlock(&d->arch.hvm.irq_lock);
>   }
>   
>   int vioapic_get_mask(const struct domain *d, unsigned int gsi)
> @@ -579,7 +579,7 @@ static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
>       s = domain_vioapic(d, 0);
>   
>       if ( s->nr_pins != ARRAY_SIZE(s->domU.redirtbl) ||
> -         d->arch.hvm_domain.nr_vioapics != 1 )
> +         d->arch.hvm.nr_vioapics != 1 )
>           return -EOPNOTSUPP;
>   
>       return hvm_save_entry(IOAPIC, 0, h, &s->domU);
> @@ -595,7 +595,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>       s = domain_vioapic(d, 0);
>   
>       if ( s->nr_pins != ARRAY_SIZE(s->domU.redirtbl) ||
> -         d->arch.hvm_domain.nr_vioapics != 1 )
> +         d->arch.hvm.nr_vioapics != 1 )
>           return -EOPNOTSUPP;
>   
>       return hvm_load_entry(IOAPIC, h, &s->domU);
> @@ -609,11 +609,11 @@ void vioapic_reset(struct domain *d)
>   
>       if ( !has_vioapic(d) )
>       {
> -        ASSERT(!d->arch.hvm_domain.nr_vioapics);
> +        ASSERT(!d->arch.hvm.nr_vioapics);
>           return;
>       }
>   
> -    for ( i = 0; i < d->arch.hvm_domain.nr_vioapics; i++ )
> +    for ( i = 0; i < d->arch.hvm.nr_vioapics; i++ )
>       {
>           struct hvm_vioapic *vioapic = domain_vioapic(d, i);
>           unsigned int nr_pins = vioapic->nr_pins, base_gsi = vioapic->base_gsi;
> @@ -646,7 +646,7 @@ static void vioapic_free(const struct domain *d, unsigned int nr_vioapics)
>   
>       for ( i = 0; i < nr_vioapics; i++)
>           xfree(domain_vioapic(d, i));
> -    xfree(d->arch.hvm_domain.vioapic);
> +    xfree(d->arch.hvm.vioapic);
>   }
>   
>   int vioapic_init(struct domain *d)
> @@ -655,14 +655,14 @@ int vioapic_init(struct domain *d)
>   
>       if ( !has_vioapic(d) )
>       {
> -        ASSERT(!d->arch.hvm_domain.nr_vioapics);
> +        ASSERT(!d->arch.hvm.nr_vioapics);
>           return 0;
>       }
>   
>       nr_vioapics = is_hardware_domain(d) ? nr_ioapics : 1;
>   
> -    if ( (d->arch.hvm_domain.vioapic == NULL) &&
> -         ((d->arch.hvm_domain.vioapic =
> +    if ( (d->arch.hvm.vioapic == NULL) &&
> +         ((d->arch.hvm.vioapic =
>              xzalloc_array(struct hvm_vioapic *, nr_vioapics)) == NULL) )
>           return -ENOMEM;
>   
> @@ -699,7 +699,7 @@ int vioapic_init(struct domain *d)
>        */
>       ASSERT(hvm_domain_irq(d)->nr_gsis >= nr_gsis);
>   
> -    d->arch.hvm_domain.nr_vioapics = nr_vioapics;
> +    d->arch.hvm.nr_vioapics = nr_vioapics;
>       vioapic_reset(d);
>   
>       register_mmio_handler(d, &vioapic_mmio_ops);
> @@ -711,9 +711,9 @@ void vioapic_deinit(struct domain *d)
>   {
>       if ( !has_vioapic(d) )
>       {
> -        ASSERT(!d->arch.hvm_domain.nr_vioapics);
> +        ASSERT(!d->arch.hvm.nr_vioapics);
>           return;
>       }
>   
> -    vioapic_free(d, d->arch.hvm_domain.nr_vioapics);
> +    vioapic_free(d, d->arch.hvm.nr_vioapics);
>   }
> diff --git a/xen/arch/x86/hvm/viridian.c b/xen/arch/x86/hvm/viridian.c
> index 4860651..5ddb41b 100644
> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -223,7 +223,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
>       case 2:
>           /* Hypervisor information, but only if the guest has set its
>              own version number. */
> -        if ( d->arch.hvm_domain.viridian.guest_os_id.raw == 0 )
> +        if ( d->arch.hvm.viridian.guest_os_id.raw == 0 )
>               break;
>           res->a = viridian_build;
>           res->b = ((uint32_t)viridian_major << 16) | viridian_minor;
> @@ -268,8 +268,8 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
>   
>       case 4:
>           /* Recommended hypercall usage. */
> -        if ( (d->arch.hvm_domain.viridian.guest_os_id.raw == 0) ||
> -             (d->arch.hvm_domain.viridian.guest_os_id.fields.os < 4) )
> +        if ( (d->arch.hvm.viridian.guest_os_id.raw == 0) ||
> +             (d->arch.hvm.viridian.guest_os_id.fields.os < 4) )
>               break;
>           res->a = CPUID4A_RELAX_TIMER_INT;
>           if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
> @@ -301,7 +301,7 @@ static void dump_guest_os_id(const struct domain *d)
>   {
>       const union viridian_guest_os_id *goi;
>   
> -    goi = &d->arch.hvm_domain.viridian.guest_os_id;
> +    goi = &d->arch.hvm.viridian.guest_os_id;
>   
>       printk(XENLOG_G_INFO
>              "d%d: VIRIDIAN GUEST_OS_ID: vendor: %x os: %x major: %x minor: %x sp: %x build: %x\n",
> @@ -315,7 +315,7 @@ static void dump_hypercall(const struct domain *d)
>   {
>       const union viridian_hypercall_gpa *hg;
>   
> -    hg = &d->arch.hvm_domain.viridian.hypercall_gpa;
> +    hg = &d->arch.hvm.viridian.hypercall_gpa;
>   
>       printk(XENLOG_G_INFO "d%d: VIRIDIAN HYPERCALL: enabled: %x pfn: %lx\n",
>              d->domain_id,
> @@ -336,7 +336,7 @@ static void dump_reference_tsc(const struct domain *d)
>   {
>       const union viridian_reference_tsc *rt;
>   
> -    rt = &d->arch.hvm_domain.viridian.reference_tsc;
> +    rt = &d->arch.hvm.viridian.reference_tsc;
>       
>       printk(XENLOG_G_INFO "d%d: VIRIDIAN REFERENCE_TSC: enabled: %x pfn: %lx\n",
>              d->domain_id,
> @@ -345,7 +345,7 @@ static void dump_reference_tsc(const struct domain *d)
>   
>   static void enable_hypercall_page(struct domain *d)
>   {
> -    unsigned long gmfn = d->arch.hvm_domain.viridian.hypercall_gpa.fields.pfn;
> +    unsigned long gmfn = d->arch.hvm.viridian.hypercall_gpa.fields.pfn;
>       struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
>       uint8_t *p;
>   
> @@ -483,7 +483,7 @@ void viridian_apic_assist_clear(struct vcpu *v)
>   
>   static void update_reference_tsc(struct domain *d, bool_t initialize)
>   {
> -    unsigned long gmfn = d->arch.hvm_domain.viridian.reference_tsc.fields.pfn;
> +    unsigned long gmfn = d->arch.hvm.viridian.reference_tsc.fields.pfn;
>       struct page_info *page = get_page_from_gfn(d, gmfn, NULL, P2M_ALLOC);
>       HV_REFERENCE_TSC_PAGE *p;
>   
> @@ -566,15 +566,15 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
>       {
>       case HV_X64_MSR_GUEST_OS_ID:
>           perfc_incr(mshv_wrmsr_osid);
> -        d->arch.hvm_domain.viridian.guest_os_id.raw = val;
> +        d->arch.hvm.viridian.guest_os_id.raw = val;
>           dump_guest_os_id(d);
>           break;
>   
>       case HV_X64_MSR_HYPERCALL:
>           perfc_incr(mshv_wrmsr_hc_page);
> -        d->arch.hvm_domain.viridian.hypercall_gpa.raw = val;
> +        d->arch.hvm.viridian.hypercall_gpa.raw = val;
>           dump_hypercall(d);
> -        if ( d->arch.hvm_domain.viridian.hypercall_gpa.fields.enabled )
> +        if ( d->arch.hvm.viridian.hypercall_gpa.fields.enabled )
>               enable_hypercall_page(d);
>           break;
>   
> @@ -618,9 +618,9 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
>               return 0;
>   
>           perfc_incr(mshv_wrmsr_tsc_msr);
> -        d->arch.hvm_domain.viridian.reference_tsc.raw = val;
> +        d->arch.hvm.viridian.reference_tsc.raw = val;
>           dump_reference_tsc(d);
> -        if ( d->arch.hvm_domain.viridian.reference_tsc.fields.enabled )
> +        if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
>               update_reference_tsc(d, 1);
>           break;
>   
> @@ -681,7 +681,7 @@ void viridian_time_ref_count_freeze(struct domain *d)
>   {
>       struct viridian_time_ref_count *trc;
>   
> -    trc = &d->arch.hvm_domain.viridian.time_ref_count;
> +    trc = &d->arch.hvm.viridian.time_ref_count;
>   
>       if ( test_and_clear_bit(_TRC_running, &trc->flags) )
>           trc->val = raw_trc_val(d) + trc->off;
> @@ -691,7 +691,7 @@ void viridian_time_ref_count_thaw(struct domain *d)
>   {
>       struct viridian_time_ref_count *trc;
>   
> -    trc = &d->arch.hvm_domain.viridian.time_ref_count;
> +    trc = &d->arch.hvm.viridian.time_ref_count;
>   
>       if ( !d->is_shutting_down &&
>            !test_and_set_bit(_TRC_running, &trc->flags) )
> @@ -710,12 +710,12 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
>       {
>       case HV_X64_MSR_GUEST_OS_ID:
>           perfc_incr(mshv_rdmsr_osid);
> -        *val = d->arch.hvm_domain.viridian.guest_os_id.raw;
> +        *val = d->arch.hvm.viridian.guest_os_id.raw;
>           break;
>   
>       case HV_X64_MSR_HYPERCALL:
>           perfc_incr(mshv_rdmsr_hc_page);
> -        *val = d->arch.hvm_domain.viridian.hypercall_gpa.raw;
> +        *val = d->arch.hvm.viridian.hypercall_gpa.raw;
>           break;
>   
>       case HV_X64_MSR_VP_INDEX:
> @@ -760,14 +760,14 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
>               return 0;
>   
>           perfc_incr(mshv_rdmsr_tsc_msr);
> -        *val = d->arch.hvm_domain.viridian.reference_tsc.raw;
> +        *val = d->arch.hvm.viridian.reference_tsc.raw;
>           break;
>   
>       case HV_X64_MSR_TIME_REF_COUNT:
>       {
>           struct viridian_time_ref_count *trc;
>   
> -        trc = &d->arch.hvm_domain.viridian.time_ref_count;
> +        trc = &d->arch.hvm.viridian.time_ref_count;
>   
>           if ( !(viridian_feature_mask(d) & HVMPV_time_ref_count) )
>               return 0;
> @@ -993,10 +993,10 @@ int viridian_hypercall(struct cpu_user_regs *regs)
>   static int viridian_save_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
>   {
>       struct hvm_viridian_domain_context ctxt = {
> -        .time_ref_count = d->arch.hvm_domain.viridian.time_ref_count.val,
> -        .hypercall_gpa  = d->arch.hvm_domain.viridian.hypercall_gpa.raw,
> -        .guest_os_id    = d->arch.hvm_domain.viridian.guest_os_id.raw,
> -        .reference_tsc  = d->arch.hvm_domain.viridian.reference_tsc.raw,
> +        .time_ref_count = d->arch.hvm.viridian.time_ref_count.val,
> +        .hypercall_gpa  = d->arch.hvm.viridian.hypercall_gpa.raw,
> +        .guest_os_id    = d->arch.hvm.viridian.guest_os_id.raw,
> +        .reference_tsc  = d->arch.hvm.viridian.reference_tsc.raw,
>       };
>   
>       if ( !is_viridian_domain(d) )
> @@ -1012,12 +1012,12 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
>       if ( hvm_load_entry_zeroextend(VIRIDIAN_DOMAIN, h, &ctxt) != 0 )
>           return -EINVAL;
>   
> -    d->arch.hvm_domain.viridian.time_ref_count.val = ctxt.time_ref_count;
> -    d->arch.hvm_domain.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
> -    d->arch.hvm_domain.viridian.guest_os_id.raw    = ctxt.guest_os_id;
> -    d->arch.hvm_domain.viridian.reference_tsc.raw  = ctxt.reference_tsc;
> +    d->arch.hvm.viridian.time_ref_count.val = ctxt.time_ref_count;
> +    d->arch.hvm.viridian.hypercall_gpa.raw  = ctxt.hypercall_gpa;
> +    d->arch.hvm.viridian.guest_os_id.raw    = ctxt.guest_os_id;
> +    d->arch.hvm.viridian.reference_tsc.raw  = ctxt.reference_tsc;
>   
> -    if ( d->arch.hvm_domain.viridian.reference_tsc.fields.enabled )
> +    if ( d->arch.hvm.viridian.reference_tsc.fields.enabled )
>           update_reference_tsc(d, 0);
>   
>       return 0;
> diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
> index ec089cc..04702e9 100644
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -1203,10 +1203,10 @@ int vlapic_accept_pic_intr(struct vcpu *v)
>           return 0;
>   
>       TRACE_2D(TRC_HVM_EMUL_LAPIC_PIC_INTR,
> -             (v == v->domain->arch.hvm_domain.i8259_target),
> +             (v == v->domain->arch.hvm.i8259_target),
>                v ? __vlapic_accept_pic_intr(v) : -1);
>   
> -    return ((v == v->domain->arch.hvm_domain.i8259_target) &&
> +    return ((v == v->domain->arch.hvm.i8259_target) &&
>               __vlapic_accept_pic_intr(v));
>   }
>   
> @@ -1224,9 +1224,9 @@ void vlapic_adjust_i8259_target(struct domain *d)
>       v = d->vcpu ? d->vcpu[0] : NULL;
>   
>    found:
> -    if ( d->arch.hvm_domain.i8259_target == v )
> +    if ( d->arch.hvm.i8259_target == v )
>           return;
> -    d->arch.hvm_domain.i8259_target = v;
> +    d->arch.hvm.i8259_target = v;
>       pt_adjust_global_vcpu_target(v);
>   }
>   
> diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
> index 3001d5c..ccbf181 100644
> --- a/xen/arch/x86/hvm/vmsi.c
> +++ b/xen/arch/x86/hvm/vmsi.c
> @@ -173,7 +173,7 @@ static DEFINE_RCU_READ_LOCK(msixtbl_rcu_lock);
>    */
>   static bool msixtbl_initialised(const struct domain *d)
>   {
> -    return !!d->arch.hvm_domain.msixtbl_list.next;
> +    return !!d->arch.hvm.msixtbl_list.next;
>   }
>   
>   static struct msixtbl_entry *msixtbl_find_entry(
> @@ -182,7 +182,7 @@ static struct msixtbl_entry *msixtbl_find_entry(
>       struct msixtbl_entry *entry;
>       struct domain *d = v->domain;
>   
> -    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
> +    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
>           if ( addr >= entry->gtable &&
>                addr < entry->gtable + entry->table_len )
>               return entry;
> @@ -430,7 +430,7 @@ static void add_msixtbl_entry(struct domain *d,
>       entry->pdev = pdev;
>       entry->gtable = (unsigned long) gtable;
>   
> -    list_add_rcu(&entry->list, &d->arch.hvm_domain.msixtbl_list);
> +    list_add_rcu(&entry->list, &d->arch.hvm.msixtbl_list);
>   }
>   
>   static void free_msixtbl_entry(struct rcu_head *rcu)
> @@ -483,7 +483,7 @@ int msixtbl_pt_register(struct domain *d, struct pirq *pirq, uint64_t gtable)
>   
>       pdev = msi_desc->dev;
>   
> -    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
> +    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
>           if ( pdev == entry->pdev )
>               goto found;
>   
> @@ -542,7 +542,7 @@ void msixtbl_pt_unregister(struct domain *d, struct pirq *pirq)
>   
>       pdev = msi_desc->dev;
>   
> -    list_for_each_entry( entry, &d->arch.hvm_domain.msixtbl_list, list )
> +    list_for_each_entry( entry, &d->arch.hvm.msixtbl_list, list )
>           if ( pdev == entry->pdev )
>               goto found;
>   
> @@ -564,7 +564,7 @@ void msixtbl_init(struct domain *d)
>       if ( !is_hvm_domain(d) || !has_vlapic(d) || msixtbl_initialised(d) )
>           return;
>   
> -    INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
> +    INIT_LIST_HEAD(&d->arch.hvm.msixtbl_list);
>   
>       handler = hvm_next_io_handler(d);
>       if ( handler )
> @@ -584,7 +584,7 @@ void msixtbl_pt_cleanup(struct domain *d)
>       spin_lock(&d->event_lock);
>   
>       list_for_each_entry_safe( entry, temp,
> -                              &d->arch.hvm_domain.msixtbl_list, list )
> +                              &d->arch.hvm.msixtbl_list, list )
>           del_msixtbl_entry(entry);
>   
>       spin_unlock(&d->event_lock);
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 6681032..f30850c 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -1108,8 +1108,8 @@ static int construct_vmcs(struct vcpu *v)
>       }
>   
>       /* I/O access bitmap. */
> -    __vmwrite(IO_BITMAP_A, __pa(d->arch.hvm_domain.io_bitmap));
> -    __vmwrite(IO_BITMAP_B, __pa(d->arch.hvm_domain.io_bitmap) + PAGE_SIZE);
> +    __vmwrite(IO_BITMAP_A, __pa(d->arch.hvm.io_bitmap));
> +    __vmwrite(IO_BITMAP_B, __pa(d->arch.hvm.io_bitmap) + PAGE_SIZE);
>   
>       if ( cpu_has_vmx_virtual_intr_delivery )
>       {
> @@ -1263,7 +1263,7 @@ static int construct_vmcs(struct vcpu *v)
>           __vmwrite(XSS_EXIT_BITMAP, 0);
>   
>       if ( cpu_has_vmx_tsc_scaling )
> -        __vmwrite(TSC_MULTIPLIER, d->arch.hvm_domain.tsc_scaling_ratio);
> +        __vmwrite(TSC_MULTIPLIER, d->arch.hvm.tsc_scaling_ratio);
>   
>       /* will update HOST & GUEST_CR3 as reqd */
>       paging_update_paging_modes(v);
> @@ -1643,7 +1643,7 @@ void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
>   
>   bool_t vmx_domain_pml_enabled(const struct domain *d)
>   {
> -    return !!(d->arch.hvm_domain.vmx.status & VMX_DOMAIN_PML_ENABLED);
> +    return !!(d->arch.hvm.vmx.status & VMX_DOMAIN_PML_ENABLED);
>   }
>   
>   /*
> @@ -1668,7 +1668,7 @@ int vmx_domain_enable_pml(struct domain *d)
>           if ( (rc = vmx_vcpu_enable_pml(v)) != 0 )
>               goto error;
>   
> -    d->arch.hvm_domain.vmx.status |= VMX_DOMAIN_PML_ENABLED;
> +    d->arch.hvm.vmx.status |= VMX_DOMAIN_PML_ENABLED;
>   
>       return 0;
>   
> @@ -1697,7 +1697,7 @@ void vmx_domain_disable_pml(struct domain *d)
>       for_each_vcpu ( d, v )
>           vmx_vcpu_disable_pml(v);
>   
> -    d->arch.hvm_domain.vmx.status &= ~VMX_DOMAIN_PML_ENABLED;
> +    d->arch.hvm.vmx.status &= ~VMX_DOMAIN_PML_ENABLED;
>   }
>   
>   /*
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 73f0d52..ccfbacb 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -318,7 +318,7 @@ void vmx_pi_hooks_assign(struct domain *d)
>       if ( !iommu_intpost || !is_hvm_domain(d) )
>           return;
>   
> -    ASSERT(!d->arch.hvm_domain.pi_ops.vcpu_block);
> +    ASSERT(!d->arch.hvm.pi_ops.vcpu_block);
>   
>       /*
>        * We carefully handle the timing here:
> @@ -329,8 +329,8 @@ void vmx_pi_hooks_assign(struct domain *d)
>        * This can make sure the PI (especially the NDST field) is
>        * in proper state when we call vmx_vcpu_block().
>        */
> -    d->arch.hvm_domain.pi_ops.switch_from = vmx_pi_switch_from;
> -    d->arch.hvm_domain.pi_ops.switch_to = vmx_pi_switch_to;
> +    d->arch.hvm.pi_ops.switch_from = vmx_pi_switch_from;
> +    d->arch.hvm.pi_ops.switch_to = vmx_pi_switch_to;
>   
>       for_each_vcpu ( d, v )
>       {
> @@ -345,8 +345,8 @@ void vmx_pi_hooks_assign(struct domain *d)
>                   x2apic_enabled ? dest : MASK_INSR(dest, PI_xAPIC_NDST_MASK));
>       }
>   
> -    d->arch.hvm_domain.pi_ops.vcpu_block = vmx_vcpu_block;
> -    d->arch.hvm_domain.pi_ops.do_resume = vmx_pi_do_resume;
> +    d->arch.hvm.pi_ops.vcpu_block = vmx_vcpu_block;
> +    d->arch.hvm.pi_ops.do_resume = vmx_pi_do_resume;
>   }
>   
>   /* This function is called when pcidevs_lock is held */
> @@ -357,7 +357,7 @@ void vmx_pi_hooks_deassign(struct domain *d)
>       if ( !iommu_intpost || !is_hvm_domain(d) )
>           return;
>   
> -    ASSERT(d->arch.hvm_domain.pi_ops.vcpu_block);
> +    ASSERT(d->arch.hvm.pi_ops.vcpu_block);
>   
>       /*
>        * Pausing the domain can make sure the vCPUs are not
> @@ -369,7 +369,7 @@ void vmx_pi_hooks_deassign(struct domain *d)
>       domain_pause(d);
>   
>       /*
> -     * Note that we don't set 'd->arch.hvm_domain.pi_ops.switch_to' to NULL
> +     * Note that we don't set 'd->arch.hvm.pi_ops.switch_to' to NULL
>        * here. If we deassign the hooks while the vCPU is runnable in the
>        * runqueue with 'SN' set, all the future notification event will be
>        * suppressed since vmx_deliver_posted_intr() also use 'SN' bit
> @@ -382,9 +382,9 @@ void vmx_pi_hooks_deassign(struct domain *d)
>        * system, leave it here until we find a clean solution to deassign the
>        * 'switch_to' hook function.
>        */
> -    d->arch.hvm_domain.pi_ops.vcpu_block = NULL;
> -    d->arch.hvm_domain.pi_ops.switch_from = NULL;
> -    d->arch.hvm_domain.pi_ops.do_resume = NULL;
> +    d->arch.hvm.pi_ops.vcpu_block = NULL;
> +    d->arch.hvm.pi_ops.switch_from = NULL;
> +    d->arch.hvm.pi_ops.do_resume = NULL;
>   
>       for_each_vcpu ( d, v )
>           vmx_pi_unblock_vcpu(v);
> @@ -934,8 +934,8 @@ static void vmx_ctxt_switch_from(struct vcpu *v)
>       vmx_restore_host_msrs();
>       vmx_save_dr(v);
>   
> -    if ( v->domain->arch.hvm_domain.pi_ops.switch_from )
> -        v->domain->arch.hvm_domain.pi_ops.switch_from(v);
> +    if ( v->domain->arch.hvm.pi_ops.switch_from )
> +        v->domain->arch.hvm.pi_ops.switch_from(v);
>   }
>   
>   static void vmx_ctxt_switch_to(struct vcpu *v)
> @@ -943,8 +943,8 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
>       vmx_restore_guest_msrs(v);
>       vmx_restore_dr(v);
>   
> -    if ( v->domain->arch.hvm_domain.pi_ops.switch_to )
> -        v->domain->arch.hvm_domain.pi_ops.switch_to(v);
> +    if ( v->domain->arch.hvm.pi_ops.switch_to )
> +        v->domain->arch.hvm.pi_ops.switch_to(v);
>   }
>   
>   
> @@ -1104,7 +1104,7 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
>           if ( seg == x86_seg_tr )
>           {
>               const struct domain *d = v->domain;
> -            uint64_t val = d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED];
> +            uint64_t val = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
>   
>               if ( val )
>               {
> @@ -1115,7 +1115,7 @@ static void vmx_set_segment_register(struct vcpu *v, enum x86_segment seg,
>                   if ( val & VM86_TSS_UPDATED )
>                   {
>                       hvm_prepare_vm86_tss(v, base, limit);
> -                    cmpxchg(&d->arch.hvm_domain.params[HVM_PARAM_VM86_TSS_SIZED],
> +                    cmpxchg(&d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED],
>                               val, val & ~VM86_TSS_UPDATED);
>                   }
>                   v->arch.hvm_vmx.vm86_segment_mask &= ~(1u << seg);
> @@ -1626,7 +1626,7 @@ static void vmx_update_guest_cr(struct vcpu *v, unsigned int cr,
>           {
>               if ( !hvm_paging_enabled(v) && !vmx_unrestricted_guest(v) )
>                   v->arch.hvm_vcpu.hw_cr[3] =
> -                    v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT];
> +                    v->domain->arch.hvm.params[HVM_PARAM_IDENT_PT];
>               vmx_load_pdptrs(v);
>           }
>   
> @@ -2997,7 +2997,7 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
>       mfn = page_to_mfn(pg);
>       clear_domain_page(mfn);
>       share_xen_page_with_guest(pg, d, SHARE_rw);
> -    d->arch.hvm_domain.vmx.apic_access_mfn = mfn_x(mfn);
> +    d->arch.hvm.vmx.apic_access_mfn = mfn_x(mfn);
>       set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE), mfn,
>                          PAGE_ORDER_4K, p2m_get_hostp2m(d)->default_access);
>   
> @@ -3006,7 +3006,7 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
>   
>   static void vmx_free_vlapic_mapping(struct domain *d)
>   {
> -    unsigned long mfn = d->arch.hvm_domain.vmx.apic_access_mfn;
> +    unsigned long mfn = d->arch.hvm.vmx.apic_access_mfn;
>   
>       if ( mfn != 0 )
>           free_shared_domheap_page(mfn_to_page(_mfn(mfn)));
> @@ -3016,13 +3016,13 @@ static void vmx_install_vlapic_mapping(struct vcpu *v)
>   {
>       paddr_t virt_page_ma, apic_page_ma;
>   
> -    if ( v->domain->arch.hvm_domain.vmx.apic_access_mfn == 0 )
> +    if ( v->domain->arch.hvm.vmx.apic_access_mfn == 0 )
>           return;
>   
>       ASSERT(cpu_has_vmx_virtualize_apic_accesses);
>   
>       virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
> -    apic_page_ma = v->domain->arch.hvm_domain.vmx.apic_access_mfn;
> +    apic_page_ma = v->domain->arch.hvm.vmx.apic_access_mfn;
>       apic_page_ma <<= PAGE_SHIFT;
>   
>       vmx_vmcs_enter(v);
> @@ -4330,8 +4330,8 @@ bool vmx_vmenter_helper(const struct cpu_user_regs *regs)
>        if ( nestedhvm_vcpu_in_guestmode(curr) && vcpu_nestedhvm(curr).stale_np2m )
>            return false;
>   
> -    if ( curr->domain->arch.hvm_domain.pi_ops.do_resume )
> -        curr->domain->arch.hvm_domain.pi_ops.do_resume(curr);
> +    if ( curr->domain->arch.hvm.pi_ops.do_resume )
> +        curr->domain->arch.hvm.pi_ops.do_resume(curr);
>   
>       if ( !cpu_has_vmx_vpid )
>           goto out;
> diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
> index cfc9544..e0500c5 100644
> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -35,7 +35,7 @@
>   #include <asm/hvm/support.h>
>   
>   #define vpic_domain(v) (container_of((v), struct domain, \
> -                        arch.hvm_domain.vpic[!vpic->is_master]))
> +                        arch.hvm.vpic[!vpic->is_master]))
>   #define __vpic_lock(v) &container_of((v), struct hvm_domain, \
>                                           vpic[!(v)->is_master])->irq_lock
>   #define vpic_lock(v)   spin_lock(__vpic_lock(v))
> @@ -112,7 +112,7 @@ static void vpic_update_int_output(struct hvm_hw_vpic *vpic)
>           if ( vpic->is_master )
>           {
>               /* Master INT line is connected in Virtual Wire Mode. */
> -            struct vcpu *v = vpic_domain(vpic)->arch.hvm_domain.i8259_target;
> +            struct vcpu *v = vpic_domain(vpic)->arch.hvm.i8259_target;
>               if ( v != NULL )
>               {
>                   TRACE_1D(TRC_HVM_EMUL_PIC_KICK, irq);
> @@ -334,7 +334,7 @@ static int vpic_intercept_pic_io(
>           return X86EMUL_OKAY;
>       }
>   
> -    vpic = &current->domain->arch.hvm_domain.vpic[port >> 7];
> +    vpic = &current->domain->arch.hvm.vpic[port >> 7];
>   
>       if ( dir == IOREQ_WRITE )
>           vpic_ioport_write(vpic, port, (uint8_t)*val);
> @@ -352,7 +352,7 @@ static int vpic_intercept_elcr_io(
>   
>       BUG_ON(bytes != 1);
>   
> -    vpic = &current->domain->arch.hvm_domain.vpic[port & 1];
> +    vpic = &current->domain->arch.hvm.vpic[port & 1];
>   
>       if ( dir == IOREQ_WRITE )
>       {
> @@ -382,7 +382,7 @@ static int vpic_save(struct domain *d, hvm_domain_context_t *h)
>       /* Save the state of both PICs */
>       for ( i = 0; i < 2 ; i++ )
>       {
> -        s = &d->arch.hvm_domain.vpic[i];
> +        s = &d->arch.hvm.vpic[i];
>           if ( hvm_save_entry(PIC, i, h, s) )
>               return 1;
>       }
> @@ -401,7 +401,7 @@ static int vpic_load(struct domain *d, hvm_domain_context_t *h)
>       /* Which PIC is this? */
>       if ( inst > 1 )
>           return -EINVAL;
> -    s = &d->arch.hvm_domain.vpic[inst];
> +    s = &d->arch.hvm.vpic[inst];
>   
>       /* Load the state */
>       if ( hvm_load_entry(PIC, h, s) != 0 )
> @@ -420,7 +420,7 @@ void vpic_reset(struct domain *d)
>           return;
>   
>       /* Master PIC. */
> -    vpic = &d->arch.hvm_domain.vpic[0];
> +    vpic = &d->arch.hvm.vpic[0];
>       memset(vpic, 0, sizeof(*vpic));
>       vpic->is_master = 1;
>       vpic->elcr      = 1 << 2;
> @@ -446,7 +446,7 @@ void vpic_init(struct domain *d)
>   
>   void vpic_irq_positive_edge(struct domain *d, int irq)
>   {
> -    struct hvm_hw_vpic *vpic = &d->arch.hvm_domain.vpic[irq >> 3];
> +    struct hvm_hw_vpic *vpic = &d->arch.hvm.vpic[irq >> 3];
>       uint8_t mask = 1 << (irq & 7);
>   
>       ASSERT(has_vpic(d));
> @@ -464,7 +464,7 @@ void vpic_irq_positive_edge(struct domain *d, int irq)
>   
>   void vpic_irq_negative_edge(struct domain *d, int irq)
>   {
> -    struct hvm_hw_vpic *vpic = &d->arch.hvm_domain.vpic[irq >> 3];
> +    struct hvm_hw_vpic *vpic = &d->arch.hvm.vpic[irq >> 3];
>       uint8_t mask = 1 << (irq & 7);
>   
>       ASSERT(has_vpic(d));
> @@ -483,7 +483,7 @@ void vpic_irq_negative_edge(struct domain *d, int irq)
>   int vpic_ack_pending_irq(struct vcpu *v)
>   {
>       int irq, vector;
> -    struct hvm_hw_vpic *vpic = &v->domain->arch.hvm_domain.vpic[0];
> +    struct hvm_hw_vpic *vpic = &v->domain->arch.hvm.vpic[0];
>   
>       ASSERT(has_vpic(v->domain));
>   
> diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
> index 6ac4c91..7b57017 100644
> --- a/xen/arch/x86/hvm/vpt.c
> +++ b/xen/arch/x86/hvm/vpt.c
> @@ -24,11 +24,11 @@
>   #include <asm/mc146818rtc.h>
>   
>   #define mode_is(d, name) \
> -    ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
> +    ((d)->arch.hvm.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
>   
>   void hvm_init_guest_time(struct domain *d)
>   {
> -    struct pl_time *pl = d->arch.hvm_domain.pl_time;
> +    struct pl_time *pl = d->arch.hvm.pl_time;
>   
>       spin_lock_init(&pl->pl_time_lock);
>       pl->stime_offset = -(u64)get_s_time();
> @@ -37,7 +37,7 @@ void hvm_init_guest_time(struct domain *d)
>   
>   uint64_t hvm_get_guest_time_fixed(const struct vcpu *v, uint64_t at_tsc)
>   {
> -    struct pl_time *pl = v->domain->arch.hvm_domain.pl_time;
> +    struct pl_time *pl = v->domain->arch.hvm.pl_time;
>       u64 now;
>   
>       /* Called from device models shared with PV guests. Be careful. */
> @@ -88,7 +88,7 @@ static int pt_irq_vector(struct periodic_time *pt, enum hvm_intsrc src)
>       gsi = hvm_isa_irq_to_gsi(isa_irq);
>   
>       if ( src == hvm_intsrc_pic )
> -        return (v->domain->arch.hvm_domain.vpic[isa_irq >> 3].irq_base
> +        return (v->domain->arch.hvm.vpic[isa_irq >> 3].irq_base
>                   + (isa_irq & 7));
>   
>       ASSERT(src == hvm_intsrc_lapic);
> @@ -121,7 +121,7 @@ static int pt_irq_masked(struct periodic_time *pt)
>   
>       case PTSRC_isa:
>       {
> -        uint8_t pic_imr = v->domain->arch.hvm_domain.vpic[pt->irq >> 3].imr;
> +        uint8_t pic_imr = v->domain->arch.hvm.vpic[pt->irq >> 3].imr;
>   
>           /* Check if the interrupt is unmasked in the PIC. */
>           if ( !(pic_imr & (1 << (pt->irq & 7))) && vlapic_accept_pic_intr(v) )
> @@ -363,7 +363,7 @@ int pt_update_irq(struct vcpu *v)
>       case PTSRC_isa:
>           hvm_isa_irq_deassert(v->domain, irq);
>           if ( platform_legacy_irq(irq) && vlapic_accept_pic_intr(v) &&
> -             v->domain->arch.hvm_domain.vpic[irq >> 3].int_output )
> +             v->domain->arch.hvm.vpic[irq >> 3].int_output )
>               hvm_isa_irq_assert(v->domain, irq, NULL);
>           else
>           {
> @@ -514,7 +514,7 @@ void create_periodic_time(
>   
>       if ( !pt->one_shot )
>       {
> -        if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
> +        if ( v->domain->arch.hvm.params[HVM_PARAM_VPT_ALIGN] )
>           {
>               pt->scheduled = align_timer(pt->scheduled, pt->period);
>           }
> @@ -605,7 +605,7 @@ void pt_adjust_global_vcpu_target(struct vcpu *v)
>       pt_adjust_vcpu(&vpit->pt0, v);
>       spin_unlock(&vpit->lock);
>   
> -    pl_time = v->domain->arch.hvm_domain.pl_time;
> +    pl_time = v->domain->arch.hvm.pl_time;
>   
>       spin_lock(&pl_time->vrtc.lock);
>       pt_adjust_vcpu(&pl_time->vrtc.pt, v);
> @@ -640,9 +640,9 @@ void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt)
>       if ( d )
>       {
>           pt_resume(&d->arch.vpit.pt0);
> -        pt_resume(&d->arch.hvm_domain.pl_time->vrtc.pt);
> +        pt_resume(&d->arch.hvm.pl_time->vrtc.pt);
>           for ( i = 0; i < HPET_TIMER_NUM; i++ )
> -            pt_resume(&d->arch.hvm_domain.pl_time->vhpet.pt[i]);
> +            pt_resume(&d->arch.hvm.pl_time->vhpet.pt[i]);
>       }
>   
>       if ( vlapic_pt )
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 6865c79..ec93ab6 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1293,7 +1293,7 @@ int init_domain_irq_mapping(struct domain *d)
>   
>       radix_tree_init(&d->arch.irq_pirq);
>       if ( is_hvm_domain(d) )
> -        radix_tree_init(&d->arch.hvm_domain.emuirq_pirq);
> +        radix_tree_init(&d->arch.hvm.emuirq_pirq);
>   
>       for ( i = 1; platform_legacy_irq(i); ++i )
>       {
> @@ -1319,7 +1319,7 @@ void cleanup_domain_irq_mapping(struct domain *d)
>   {
>       radix_tree_destroy(&d->arch.irq_pirq, NULL);
>       if ( is_hvm_domain(d) )
> -        radix_tree_destroy(&d->arch.hvm_domain.emuirq_pirq, NULL);
> +        radix_tree_destroy(&d->arch.hvm.emuirq_pirq, NULL);
>   }
>   
>   struct pirq *alloc_pirq_struct(struct domain *d)
> @@ -2490,7 +2490,7 @@ int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq)
>       /* do not store emuirq mappings for pt devices */
>       if ( emuirq != IRQ_PT )
>       {
> -        int err = radix_tree_insert(&d->arch.hvm_domain.emuirq_pirq, emuirq,
> +        int err = radix_tree_insert(&d->arch.hvm.emuirq_pirq, emuirq,
>                                       radix_tree_int_to_ptr(pirq));
>   
>           switch ( err )
> @@ -2500,7 +2500,7 @@ int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq)
>           case -EEXIST:
>               radix_tree_replace_slot(
>                   radix_tree_lookup_slot(
> -                    &d->arch.hvm_domain.emuirq_pirq, emuirq),
> +                    &d->arch.hvm.emuirq_pirq, emuirq),
>                   radix_tree_int_to_ptr(pirq));
>               break;
>           default:
> @@ -2542,7 +2542,7 @@ int unmap_domain_pirq_emuirq(struct domain *d, int pirq)
>           pirq_cleanup_check(info, d);
>       }
>       if ( emuirq != IRQ_PT )
> -        radix_tree_delete(&d->arch.hvm_domain.emuirq_pirq, emuirq);
> +        radix_tree_delete(&d->arch.hvm.emuirq_pirq, emuirq);
>   
>    done:
>       return ret;
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index 812a840..fe10e9d 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -83,7 +83,7 @@ int hap_track_dirty_vram(struct domain *d,
>   
>           paging_lock(d);
>   
> -        dirty_vram = d->arch.hvm_domain.dirty_vram;
> +        dirty_vram = d->arch.hvm.dirty_vram;
>           if ( !dirty_vram )
>           {
>               rc = -ENOMEM;
> @@ -93,7 +93,7 @@ int hap_track_dirty_vram(struct domain *d,
>                   goto out;
>               }
>   
> -            d->arch.hvm_domain.dirty_vram = dirty_vram;
> +            d->arch.hvm.dirty_vram = dirty_vram;
>           }
>   
>           if ( begin_pfn != dirty_vram->begin_pfn ||
> @@ -145,7 +145,7 @@ int hap_track_dirty_vram(struct domain *d,
>       {
>           paging_lock(d);
>   
> -        dirty_vram = d->arch.hvm_domain.dirty_vram;
> +        dirty_vram = d->arch.hvm.dirty_vram;
>           if ( dirty_vram )
>           {
>               /*
> @@ -155,7 +155,7 @@ int hap_track_dirty_vram(struct domain *d,
>               begin_pfn = dirty_vram->begin_pfn;
>               nr = dirty_vram->end_pfn - dirty_vram->begin_pfn;
>               xfree(dirty_vram);
> -            d->arch.hvm_domain.dirty_vram = NULL;
> +            d->arch.hvm.dirty_vram = NULL;
>           }
>   
>           paging_unlock(d);
> @@ -579,8 +579,7 @@ void hap_teardown(struct domain *d, bool *preempted)
>   
>       d->arch.paging.mode &= ~PG_log_dirty;
>   
> -    xfree(d->arch.hvm_domain.dirty_vram);
> -    d->arch.hvm_domain.dirty_vram = NULL;
> +    XFREE(d->arch.hvm.dirty_vram);
>   
>   out:
>       paging_unlock(d);
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index fad8a9d..fe1df83 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -150,7 +150,7 @@ static inline shr_handle_t get_next_handle(void)
>   }
>   
>   #define mem_sharing_enabled(d) \
> -    (is_hvm_domain(d) && (d)->arch.hvm_domain.mem_sharing_enabled)
> +    (is_hvm_domain(d) && (d)->arch.hvm.mem_sharing_enabled)
>   
>   static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
>   static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
> @@ -1333,7 +1333,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>   
>       /* Only HAP is supported */
>       rc = -ENODEV;
> -    if ( !hap_enabled(d) || !d->arch.hvm_domain.mem_sharing_enabled )
> +    if ( !hap_enabled(d) || !d->arch.hvm.mem_sharing_enabled )
>           goto out;
>   
>       switch ( mso.op )
> @@ -1613,7 +1613,7 @@ int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
>               if ( unlikely(need_iommu(d) && mec->u.enable) )
>                   rc = -EXDEV;
>               else
> -                d->arch.hvm_domain.mem_sharing_enabled = mec->u.enable;
> +                d->arch.hvm.mem_sharing_enabled = mec->u.enable;
>           }
>           break;
>   
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 1930a1d..afdc27d 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2881,11 +2881,11 @@ void shadow_teardown(struct domain *d, bool *preempted)
>        * calls now that we've torn down the bitmap */
>       d->arch.paging.mode &= ~PG_log_dirty;
>   
> -    if (d->arch.hvm_domain.dirty_vram) {
> -        xfree(d->arch.hvm_domain.dirty_vram->sl1ma);
> -        xfree(d->arch.hvm_domain.dirty_vram->dirty_bitmap);
> -        xfree(d->arch.hvm_domain.dirty_vram);
> -        d->arch.hvm_domain.dirty_vram = NULL;
> +    if ( d->arch.hvm.dirty_vram )
> +    {
> +        xfree(d->arch.hvm.dirty_vram->sl1ma);
> +        xfree(d->arch.hvm.dirty_vram->dirty_bitmap);
> +        XFREE(d->arch.hvm.dirty_vram);
>       }
>   
>   out:
> @@ -3261,7 +3261,7 @@ int shadow_track_dirty_vram(struct domain *d,
>       p2m_lock(p2m_get_hostp2m(d));
>       paging_lock(d);
>   
> -    dirty_vram = d->arch.hvm_domain.dirty_vram;
> +    dirty_vram = d->arch.hvm.dirty_vram;
>   
>       if ( dirty_vram && (!nr ||
>                ( begin_pfn != dirty_vram->begin_pfn
> @@ -3272,7 +3272,7 @@ int shadow_track_dirty_vram(struct domain *d,
>           xfree(dirty_vram->sl1ma);
>           xfree(dirty_vram->dirty_bitmap);
>           xfree(dirty_vram);
> -        dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
> +        dirty_vram = d->arch.hvm.dirty_vram = NULL;
>       }
>   
>       if ( !nr )
> @@ -3299,7 +3299,7 @@ int shadow_track_dirty_vram(struct domain *d,
>               goto out;
>           dirty_vram->begin_pfn = begin_pfn;
>           dirty_vram->end_pfn = end_pfn;
> -        d->arch.hvm_domain.dirty_vram = dirty_vram;
> +        d->arch.hvm.dirty_vram = dirty_vram;
>   
>           if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr)) == NULL )
>               goto out_dirty_vram;
> @@ -3418,7 +3418,7 @@ int shadow_track_dirty_vram(struct domain *d,
>       xfree(dirty_vram->sl1ma);
>   out_dirty_vram:
>       xfree(dirty_vram);
> -    dirty_vram = d->arch.hvm_domain.dirty_vram = NULL;
> +    dirty_vram = d->arch.hvm.dirty_vram = NULL;
>   
>   out:
>       paging_unlock(d);
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index 9e43533..62819eb 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -527,7 +527,7 @@ _sh_propagate(struct vcpu *v,
>       guest_l1e_t guest_entry = { guest_intpte };
>       shadow_l1e_t *sp = shadow_entry_ptr;
>       struct domain *d = v->domain;
> -    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>       gfn_t target_gfn = guest_l1e_get_gfn(guest_entry);
>       u32 pass_thru_flags;
>       u32 gflags, sflags;
> @@ -619,7 +619,7 @@ _sh_propagate(struct vcpu *v,
>           if ( !mmio_mfn &&
>                (type = hvm_get_mem_pinned_cacheattr(d, target_gfn, 0)) >= 0 )
>               sflags |= pat_type_2_pte_flags(type);
> -        else if ( d->arch.hvm_domain.is_in_uc_mode )
> +        else if ( d->arch.hvm.is_in_uc_mode )
>               sflags |= pat_type_2_pte_flags(PAT_TYPE_UNCACHABLE);
>           else
>               if ( iomem_access_permitted(d, mfn_x(target_mfn), mfn_x(target_mfn)) )
> @@ -1110,7 +1110,7 @@ static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
>       mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
>       int flags = shadow_l1e_get_flags(new_sl1e);
>       unsigned long gfn;
> -    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>   
>       if ( !dirty_vram         /* tracking disabled? */
>            || !(flags & _PAGE_RW) /* read-only mapping? */
> @@ -1141,7 +1141,7 @@ static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
>       mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
>       int flags = shadow_l1e_get_flags(old_sl1e);
>       unsigned long gfn;
> -    struct sh_dirty_vram *dirty_vram = d->arch.hvm_domain.dirty_vram;
> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>   
>       if ( !dirty_vram         /* tracking disabled? */
>            || !(flags & _PAGE_RW) /* read-only mapping? */
> diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
> index 4524823..3a3c158 100644
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -98,7 +98,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
>       {
>           /*
>            * Only makes sense for vector-based callback, else HVM-IRQ logic
> -         * calls back into itself and deadlocks on hvm_domain.irq_lock.
> +         * calls back into itself and deadlocks on hvm.irq_lock.
>            */
>           if ( !is_hvm_pv_evtchn_domain(d) )
>               return -EINVAL;
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index dd11815..ed133fc 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1870,7 +1870,7 @@ static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
>   
>       ASSERT(e <= INT_MAX);
>       for ( i = s; i <= e; i++ )
> -        __clear_bit(i, d->arch.hvm_domain.io_bitmap);
> +        __clear_bit(i, d->arch.hvm.io_bitmap);
>   
>       return 0;
>   }
> @@ -1881,7 +1881,7 @@ void __hwdom_init setup_io_bitmap(struct domain *d)
>   
>       if ( is_hvm_domain(d) )
>       {
> -        bitmap_fill(d->arch.hvm_domain.io_bitmap, 0x10000);
> +        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
>           rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
>                                       io_bitmap_cb, d);
>           BUG_ON(rc);
> @@ -1892,9 +1892,9 @@ void __hwdom_init setup_io_bitmap(struct domain *d)
>            * Access to 1 byte RTC ports also needs to be trapped in order
>            * to keep consistency with PV.
>            */
> -        __set_bit(0xcf8, d->arch.hvm_domain.io_bitmap);
> -        __set_bit(RTC_PORT(0), d->arch.hvm_domain.io_bitmap);
> -        __set_bit(RTC_PORT(1), d->arch.hvm_domain.io_bitmap);
> +        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
> +        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
> +        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
>       }
>   }
>   
> diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
> index 69e9aaf..5922fbf 100644
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1039,7 +1039,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
>   
>           if ( is_hvm_domain(d) )
>           {
> -            struct pl_time *pl = v->domain->arch.hvm_domain.pl_time;
> +            struct pl_time *pl = v->domain->arch.hvm.pl_time;
>   
>               stime += pl->stime_offset + v->arch.hvm_vcpu.stime_offset;
>               if ( stime >= 0 )
> @@ -2183,7 +2183,7 @@ void tsc_set_info(struct domain *d,
>       if ( is_hvm_domain(d) )
>       {
>           if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
> -            d->arch.hvm_domain.tsc_scaling_ratio =
> +            d->arch.hvm.tsc_scaling_ratio =
>                   hvm_get_tsc_scaling_ratio(d->arch.tsc_khz);
>   
>           hvm_set_rdtsc_exiting(d, d->arch.vtsc);
> @@ -2197,10 +2197,10 @@ void tsc_set_info(struct domain *d,
>                * call set_tsc_offset() later from hvm_vcpu_reset_state() and they
>                * will sync their TSC to BSP's sync_tsc.
>                */
> -            d->arch.hvm_domain.sync_tsc = rdtsc();
> +            d->arch.hvm.sync_tsc = rdtsc();
>               hvm_set_tsc_offset(d->vcpu[0],
>                                  d->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset,
> -                               d->arch.hvm_domain.sync_tsc);
> +                               d->arch.hvm.sync_tsc);
>           }
>       }
>   
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 144ab81..4793aac 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -48,7 +48,7 @@ static int vm_event_enable(
>       xen_event_channel_notification_t notification_fn)
>   {
>       int rc;
> -    unsigned long ring_gfn = d->arch.hvm_domain.params[param];
> +    unsigned long ring_gfn = d->arch.hvm.params[param];
>   
>       if ( !*ved )
>           *ved = xzalloc(struct vm_event_domain);
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index d1adffa..2644048 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1417,7 +1417,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>       /* Prevent device assign if mem paging or mem sharing have been
>        * enabled for this domain */
>       if ( unlikely(!need_iommu(d) &&
> -            (d->arch.hvm_domain.mem_sharing_enabled ||
> +            (d->arch.hvm.mem_sharing_enabled ||
>                vm_event_check_ring(d->vm_event_paging) ||
>                p2m_get_hostp2m(d)->global_logdirty)) )
>           return -EXDEV;
> diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
> index bcf6325..1960dae 100644
> --- a/xen/drivers/vpci/msix.c
> +++ b/xen/drivers/vpci/msix.c
> @@ -152,7 +152,7 @@ static struct vpci_msix *msix_find(const struct domain *d, unsigned long addr)
>   {
>       struct vpci_msix *msix;
>   
> -    list_for_each_entry ( msix, &d->arch.hvm_domain.msix_tables, next )
> +    list_for_each_entry ( msix, &d->arch.hvm.msix_tables, next )
>       {
>           const struct vpci_bar *bars = msix->pdev->vpci->header.bars;
>           unsigned int i;
> @@ -438,10 +438,10 @@ static int init_msix(struct pci_dev *pdev)
>       if ( rc )
>           return rc;
>   
> -    if ( list_empty(&d->arch.hvm_domain.msix_tables) )
> +    if ( list_empty(&d->arch.hvm.msix_tables) )
>           register_mmio_handler(d, &vpci_msix_table_ops);
>   
> -    list_add(&pdev->vpci->msix->next, &d->arch.hvm_domain.msix_tables);
> +    list_add(&pdev->vpci->msix->next, &d->arch.hvm.msix_tables);
>   
>       return 0;
>   }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 280c395..d682307 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -51,7 +51,7 @@ struct arch_domain
>       /* Virtual MMU */
>       struct p2m_domain p2m;
>   
> -    struct hvm_domain hvm_domain;
> +    struct hvm_domain hvm;
>   
>       struct vmmio vmmio;
>   
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index fdd6856..4722c2d 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -17,7 +17,7 @@
>   #define is_pv_32bit_vcpu(v)    (is_pv_32bit_domain((v)->domain))
>   
>   #define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
> -        (d)->arch.hvm_domain.irq->callback_via_type == HVMIRQ_callback_vector)
> +        (d)->arch.hvm.irq->callback_via_type == HVMIRQ_callback_vector)
>   #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
>   #define is_domain_direct_mapped(d) ((void)(d), 0)
>   
> @@ -306,7 +306,7 @@ struct arch_domain
>   
>       union {
>           struct pv_domain pv;
> -        struct hvm_domain hvm_domain;
> +        struct hvm_domain hvm;
>       };
>   
>       struct paging_domain paging;
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 5885950..acf8e03 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -202,7 +202,7 @@ struct hvm_domain {
>       };
>   };
>   
> -#define hap_enabled(d)  ((d)->arch.hvm_domain.hap_enabled)
> +#define hap_enabled(d)  ((d)->arch.hvm.hap_enabled)
>   
>   #endif /* __ASM_X86_HVM_DOMAIN_H__ */
>   
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 5ea507b..ac0f035 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -266,7 +266,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, u64 at_tsc);
>       (1ULL << hvm_funcs.tsc_scaling.ratio_frac_bits)
>   
>   #define hvm_tsc_scaling_ratio(d) \
> -    ((d)->arch.hvm_domain.tsc_scaling_ratio)
> +    ((d)->arch.hvm.tsc_scaling_ratio)
>   
>   u64 hvm_scale_tsc(const struct domain *d, u64 tsc);
>   u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz);
> @@ -391,10 +391,10 @@ static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
>   bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
>   
>   #define has_hvm_params(d) \
> -    ((d)->arch.hvm_domain.params != NULL)
> +    ((d)->arch.hvm.params != NULL)
>   
>   #define viridian_feature_mask(d) \
> -    (has_hvm_params(d) ? (d)->arch.hvm_domain.params[HVM_PARAM_VIRIDIAN] : 0)
> +    (has_hvm_params(d) ? (d)->arch.hvm.params[HVM_PARAM_VIRIDIAN] : 0)
>   
>   #define is_viridian_domain(d) \
>       (is_hvm_domain(d) && (viridian_feature_mask(d) & HVMPV_base_freq))
> @@ -670,9 +670,8 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
>   #define arch_vcpu_block(v) ({                                   \
>       struct vcpu *v_ = (v);                                      \
>       struct domain *d_ = v_->domain;                             \
> -    if ( is_hvm_domain(d_) &&                               \
> -         (d_->arch.hvm_domain.pi_ops.vcpu_block) )          \
> -        d_->arch.hvm_domain.pi_ops.vcpu_block(v_);          \
> +    if ( is_hvm_domain(d_) && d_->arch.hvm.pi_ops.vcpu_block )  \
> +        d_->arch.hvm.pi_ops.vcpu_block(v_);                     \
>   })
>   
>   #endif /* __ASM_X86_HVM_HVM_H__ */
> diff --git a/xen/include/asm-x86/hvm/irq.h b/xen/include/asm-x86/hvm/irq.h
> index 8a43cb9..2e6fa70 100644
> --- a/xen/include/asm-x86/hvm/irq.h
> +++ b/xen/include/asm-x86/hvm/irq.h
> @@ -97,7 +97,7 @@ struct hvm_irq {
>       (((((dev)<<2) + ((dev)>>3) + (intx)) & 31) + 16)
>   #define hvm_pci_intx_link(dev, intx) \
>       (((dev) + (intx)) & 3)
> -#define hvm_domain_irq(d) ((d)->arch.hvm_domain.irq)
> +#define hvm_domain_irq(d) ((d)->arch.hvm.irq)
>   #define hvm_irq_size(cnt) offsetof(struct hvm_irq, gsi_assert_count[cnt])
>   
>   #define hvm_isa_irq_to_gsi(isa_irq) ((isa_irq) ? : 2)
> diff --git a/xen/include/asm-x86/hvm/nestedhvm.h b/xen/include/asm-x86/hvm/nestedhvm.h
> index 3c810b7..4a041e2 100644
> --- a/xen/include/asm-x86/hvm/nestedhvm.h
> +++ b/xen/include/asm-x86/hvm/nestedhvm.h
> @@ -35,8 +35,8 @@ enum nestedhvm_vmexits {
>   /* Nested HVM on/off per domain */
>   static inline bool nestedhvm_enabled(const struct domain *d)
>   {
> -    return is_hvm_domain(d) && d->arch.hvm_domain.params &&
> -        d->arch.hvm_domain.params[HVM_PARAM_NESTEDHVM];
> +    return is_hvm_domain(d) && d->arch.hvm.params &&
> +        d->arch.hvm.params[HVM_PARAM_NESTEDHVM];
>   }
>   
>   /* Nested VCPU */
> diff --git a/xen/include/asm-x86/hvm/vioapic.h b/xen/include/asm-x86/hvm/vioapic.h
> index 138d2c0..a72cd17 100644
> --- a/xen/include/asm-x86/hvm/vioapic.h
> +++ b/xen/include/asm-x86/hvm/vioapic.h
> @@ -58,7 +58,7 @@ struct hvm_vioapic {
>   };
>   
>   #define hvm_vioapic_size(cnt) offsetof(struct hvm_vioapic, redirtbl[cnt])
> -#define domain_vioapic(d, i) ((d)->arch.hvm_domain.vioapic[i])
> +#define domain_vioapic(d, i) ((d)->arch.hvm.vioapic[i])
>   #define vioapic_domain(v) ((v)->domain)
>   
>   int vioapic_init(struct domain *d);
> diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
> index 61c26ed..99169dd 100644
> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -150,8 +150,8 @@ void pt_migrate(struct vcpu *v);
>   
>   void pt_adjust_global_vcpu_target(struct vcpu *v);
>   #define pt_global_vcpu_target(d) \
> -    (is_hvm_domain(d) && (d)->arch.hvm_domain.i8259_target ? \
> -     (d)->arch.hvm_domain.i8259_target : \
> +    (is_hvm_domain(d) && (d)->arch.hvm.i8259_target ? \
> +     (d)->arch.hvm.i8259_target : \
>        (d)->vcpu ? (d)->vcpu[0] : NULL)
>   
>   void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt);
> diff --git a/xen/include/asm-x86/irq.h b/xen/include/asm-x86/irq.h
> index d9dad39..054c3ab 100644
> --- a/xen/include/asm-x86/irq.h
> +++ b/xen/include/asm-x86/irq.h
> @@ -188,8 +188,7 @@ void cleanup_domain_irq_mapping(struct domain *);
>   #define domain_pirq_to_emuirq(d, pirq) pirq_field(d, pirq,              \
>       arch.hvm.emuirq, IRQ_UNBOUND)
>   #define domain_emuirq_to_pirq(d, emuirq) ({                             \
> -    void *__ret = radix_tree_lookup(&(d)->arch.hvm_domain.emuirq_pirq,  \
> -                                    emuirq);                            \
> +    void *__ret = radix_tree_lookup(&(d)->arch.hvm.emuirq_pirq, emuirq);\
>       __ret ? radix_tree_ptr_to_int(__ret) : IRQ_UNBOUND;                 \
>   })
>   #define IRQ_UNBOUND -1
> 

-- 
Julien Grall


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
                     ` (2 preceding siblings ...)
  2018-08-30 15:03   ` Jan Beulich
@ 2018-08-30 23:11   ` Boris Ostrovsky
  2018-08-30 23:14     ` Boris Ostrovsky
  3 siblings, 1 reply; 39+ messages in thread
From: Boris Ostrovsky @ 2018-08-30 23:11 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, George Dunlap, Jan Beulich,
	Suravee Suthikulpanit, Brian Woods, Roger Pau Monné

On 08/28/2018 01:39 PM, Andrew Cooper wrote:
> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with
> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
> refer to the correctly-named fields.  This means that the data hierarchy is no
> longer obscured from grep/cscope/tags/etc.
>
> Reformat one comment and switch one bool_t to bool while making changes.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Do we still support pre-4.6 gcc? They have a bug that doesn't allow
initializers for anonymous structs/unions. I don't know whether we have
any for vmx/svm, but I thought I'd mention this just in case.
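
For reference, the construct those older compilers trip over looks roughly
like this (a minimal sketch with made-up field names, not the actual Xen
structures):

    /* Illustrative only: simplified, hypothetical layouts. */
    struct vmx_vcpu { unsigned long vmcs_pa; };
    struct svm_vcpu { unsigned long vmcb_pa; };

    struct hvm_vcpu {
        union {                    /* anonymous union, as in the patch */
            struct vmx_vcpu vmx;
            struct svm_vcpu svm;
        };
    };

    /* A designated initializer naming a member of the anonymous union is
     * the sort of construct being referred to above. */
    static struct hvm_vcpu example = { .vmx = { .vmcs_pa = 0 } };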

Other than that, for SVM bits in patches 3,4,6 and 7

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-30 23:11   ` Boris Ostrovsky
@ 2018-08-30 23:14     ` Boris Ostrovsky
  2018-08-30 23:18       ` Andrew Cooper
  0 siblings, 1 reply; 39+ messages in thread
From: Boris Ostrovsky @ 2018-08-30 23:14 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, George Dunlap, Jan Beulich,
	Suravee Suthikulpanit, Brian Woods, Roger Pau Monné

On 08/30/2018 07:11 PM, Boris Ostrovsky wrote:
> On 08/28/2018 01:39 PM, Andrew Cooper wrote:
>> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with
>> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
>> refer to the correctly-named fields.  This means that the data hierarchy is no
>> longer obscured from grep/cscope/tags/etc.
>>
>> Reformat one comment and switch one bool_t to bool while making changes.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
> Do we still support pre-4.6 gcc? 

Or 4.4.6? I don't remember.

-boris


> They have a bug that doesn't allow
> initializers for anonymous structs/unions. I don't know whether we have
> any for vmx/svm, but I thought I'd mention this just in case.
>
> Other than that, for SVM bits in patches 3,4,6 and 7
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands
  2018-08-30 23:14     ` Boris Ostrovsky
@ 2018-08-30 23:18       ` Andrew Cooper
  0 siblings, 0 replies; 39+ messages in thread
From: Andrew Cooper @ 2018-08-30 23:18 UTC (permalink / raw)
  To: Boris Ostrovsky, Xen-devel
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, George Dunlap, Jan Beulich,
	Suravee Suthikulpanit, Brian Woods, Roger Pau Monné

On 31/08/18 00:14, Boris Ostrovsky wrote:
> On 08/30/2018 07:11 PM, Boris Ostrovsky wrote:
>> On 08/28/2018 01:39 PM, Andrew Cooper wrote:
>>> By making {vmx,svm} in hvm_vcpu into an anonymous union (consistent with
>>> domain side of things), the hvm_{vmx,svm} defines can be dropped, and all code
>>> refer to the correctly-named fields.  This means that the data hierarchy is no
>>> longer obscured from grep/cscope/tags/etc.
>>>
>>> Reformat one comment and switch one bool_t to bool while making changes.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>
>> Do we still support pre-4.6 gcc? 
> Or 4.4.6? I don't remember.

I'm aware of that bug and it is 4.6 iirc.

However, in this case there are no initialisers.  The memory starts off
as a zeroed heap page and has a plethora of init() functions called on it
to set things up.
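
In other words, the pattern is roughly as below. This is a sketch of the
idea rather than the actual vcpu allocation path: xzalloc() is a real
zeroing allocator, but the init helper named here is a made-up stand-in
for the various real initialisation routines.

    /* Zeroed allocation; no static or designated initializer involved. */
    struct vcpu *v = xzalloc(struct vcpu);

    if ( !v )
        return -ENOMEM;

    /* The fields (including the anonymous vmx/svm union) are then filled
     * in at run time.  Illustrative call, not a real Xen function name. */
    vcpu_init_illustrative(v);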

>
> -boris
>
>
>> They have a bug that doesn't allow
>> initializers for anonymous structs/unions. I don't know whether we have
>> any for vmx/svm, but I thought I'd mention this just in case.
>>
>> Other than that, for SVM bits in patches 3,4,6 and 7
>>
>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Thanks.

~Andrew


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu
  2018-08-30 15:47     ` Andrew Cooper
@ 2018-08-31  1:35       ` Tian, Kevin
  0 siblings, 0 replies; 39+ messages in thread
From: Tian, Kevin @ 2018-08-31  1:35 UTC (permalink / raw)
  To: Andrew Cooper, Jan Beulich
  Cc: Xen-devel, Wei Liu, Nakajima, Jun, Roger Pau Monne

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Thursday, August 30, 2018 11:47 PM
> 
> On 30/08/18 15:54, Jan Beulich wrote:
> >>>> On 28.08.18 at 19:39, <andrew.cooper3@citrix.com> wrote:
> >> The suffix and prefix are redundant, and the name is curiously odd.
> >> Rename it to vmx_vcpu to be consistent with all the other similar
> >> structures.
> >>
> >> No functional change.
> >>
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Wei Liu <wei.liu2@citrix.com>
> >> CC: Roger Pau Monné <roger.pau@citrix.com>
> >> CC: Jun Nakajima <jun.nakajima@intel.com>
> >> CC: Kevin Tian <kevin.tian@intel.com>
> >>
> >> Some of the local pointers are named arch_vmx.  I'm open to renaming them to
> >> just vmx (like all the other local pointers) if people are happy with the
> >> additional patch delta.
> > I'd be fine with that. With or without
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> TBH, I was hoping for a comment from Kevin on this question.
> 
> Given that the net diffstat including the pointer renames is:
> 
> andrewcoop@andrewcoop:/local/xen.git/xen$ git d HEAD^ --stat
>  xen/arch/x86/hvm/vmx/vmcs.c        | 44 ++++++++++++++++++++++----------------------
>  xen/arch/x86/hvm/vmx/vmx.c         |  4 ++--
>  xen/include/asm-x86/hvm/vcpu.h     |  2 +-
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  2 +-
>  4 files changed, 26 insertions(+), 26 deletions(-)
> 
> I've decided to go ahead and do them, to improve the eventual code
> consistency.

Yes, please go ahead. I didn't notice that open question earlier.

Thanks
Kevin
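
For reference, a sketch of the shape of the rename agreed here; the field and function names are made up, not taken from the tree:

    /* Type rename: the redundant "arch_" prefix and "_struct" suffix go away. */
    struct vmx_vcpu {               /* was: struct arch_vmx_struct */
        unsigned long launched;     /* illustrative field only */
    };

    /* Local pointers follow the type name: "arch_vmx" becomes "vmx". */
    static void mark_launched(struct vmx_vcpu *vmx)
    {
        vmx->launched = 1;
    }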

* Re: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
  2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
                     ` (4 preceding siblings ...)
  2018-08-30 16:44   ` Julien Grall
@ 2018-09-03  8:11   ` Paul Durrant
  5 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2018-09-03  8:11 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monne

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 28 August 2018 18:39
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <JBeulich@suse.com>; Wei Liu <wei.liu2@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Tim (Xen.org) <tim@xen.org>; George Dunlap
> <George.Dunlap@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>;
> Razvan Cojocaru <rcojocaru@bitdefender.com>; Tamas K Lengyel
> <tamas@tklengyel.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien.grall@arm.com>; Jun Nakajima <jun.nakajima@intel.com>;
> Kevin Tian <kevin.tian@intel.com>; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Brian Woods <brian.woods@amd.com>
> Subject: [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm
> 
> The trailing _domain suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
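
A minimal sketch of the access-path change being acked; the types and field names are simplified, not the real Xen layout:

    struct hvm_domain { unsigned long params[8]; };

    struct arch_domain {
        struct hvm_domain hvm;      /* formerly the "hvm_domain" member */
    };

    struct domain { struct arch_domain arch; };

    static unsigned long get_param(const struct domain *d, unsigned int idx)
    {
        /* Was: d->arch.hvm_domain.params[idx]; the shorter path also lets
         * several previously-wrapped lines fit within the line limit. */
        return d->arch.hvm.params[idx];
    }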

Acked-by: Paul Durrant <paul.durrant@citrix.com>

* Re: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
  2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
                     ` (3 preceding siblings ...)
  2018-08-30 14:52   ` Jan Beulich
@ 2018-09-03  8:13   ` Paul Durrant
  4 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2018-09-03  8:13 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Tamas K Lengyel, Wei Liu, Jun Nakajima,
	Razvan Cojocaru, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Julien Grall, Stefano Stabellini, Jan Beulich,
	Boris Ostrovsky, Brian Woods, Suravee Suthikulpanit,
	Roger Pau Monne

> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: 28 August 2018 18:39
> To: Xen-devel <xen-devel@lists.xen.org>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan Beulich
> <JBeulich@suse.com>; Wei Liu <wei.liu2@citrix.com>; Roger Pau Monne
> <roger.pau@citrix.com>; Tim (Xen.org) <tim@xen.org>; George Dunlap
> <George.Dunlap@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>;
> Razvan Cojocaru <rcojocaru@bitdefender.com>; Tamas K Lengyel
> <tamas@tklengyel.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien.grall@arm.com>; Jun Nakajima <jun.nakajima@intel.com>;
> Kevin Tian <kevin.tian@intel.com>; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; Suravee Suthikulpanit
> <suravee.suthikulpanit@amd.com>; Brian Woods <brian.woods@amd.com>
> Subject: [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm
> 
> The trailing _vcpu suffix is redundant, but adds to code volume.  Drop it.
> 
> Reflow lines as appropriate, and switch to using the new XFREE/etc wrappers
> where applicable.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
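
A sketch of the kind of free-and-clear wrapper the "XFREE/etc" remark refers to; the exact definition in the Xen tree may differ, and free() here stands in for Xen's xfree():

    #include <stdlib.h>

    #define XFREE(p)    \
        do {            \
            free(p);    \
            (p) = NULL; \
        } while ( 0 )

    struct example_state { int dummy; };    /* hypothetical payload */

    static void teardown(struct example_state **pstate)
    {
        XFREE(*pstate);     /* frees the allocation and clears the caller's pointer */
    }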

Acked-by: Paul Durrant <paul.durrant@citrix.com>

end of thread, other threads:[~2018-09-03  8:13 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-28 17:38 [PATCH 0/7] x86: Structure naming and consistency improvements Andrew Cooper
2018-08-28 17:39 ` [PATCH 1/7] x86/pv: Rename d->arch.pv_domain to d->arch.pv Andrew Cooper
2018-08-29  7:45   ` Wei Liu
2018-08-29 15:55   ` Jan Beulich
2018-08-28 17:39 ` [PATCH 2/7] x86/pv: Rename v->arch.pv_vcpu to v->arch.pv Andrew Cooper
2018-08-29  7:53   ` Wei Liu
2018-08-29 16:01   ` Jan Beulich
2018-08-28 17:39 ` [PATCH 3/7] xen/hvm: Rename d->arch.hvm_domain to d->arch.hvm Andrew Cooper
2018-08-28 18:56   ` Razvan Cojocaru
2018-08-29  7:56   ` Wei Liu
2018-08-30  1:34   ` Tian, Kevin
2018-08-30 14:39   ` Jan Beulich
2018-08-30 16:44   ` Julien Grall
2018-09-03  8:11   ` Paul Durrant
2018-08-28 17:39 ` [PATCH 4/7] x86/hvm: Rename v->arch.hvm_vcpu to v->arch.hvm Andrew Cooper
2018-08-28 18:59   ` Razvan Cojocaru
2018-08-29  7:57   ` Wei Liu
2018-08-30  1:34   ` Tian, Kevin
2018-08-30 14:52   ` Jan Beulich
2018-08-30 16:03     ` Andrew Cooper
2018-09-03  8:13   ` Paul Durrant
2018-08-28 17:39 ` [PATCH 5/7] x86/vtx: Rename arch_vmx_struct to vmx_vcpu Andrew Cooper
2018-08-29  8:03   ` Wei Liu
2018-08-29 11:17     ` Andrew Cooper
2018-08-29 13:16       ` Wei Liu
2018-08-30  1:36   ` Tian, Kevin
2018-08-30 14:54   ` Jan Beulich
2018-08-30 15:47     ` Andrew Cooper
2018-08-31  1:35       ` Tian, Kevin
2018-08-28 17:39 ` [PATCH 6/7] x86/svm: Rename arch_svm_struct to svm_vcpu Andrew Cooper
2018-08-30 14:54   ` Jan Beulich
2018-08-28 17:39 ` [PATCH 7/7] x86/hvm: Drop hvm_{vmx,svm} shorthands Andrew Cooper
2018-08-29  8:03   ` Wei Liu
2018-08-30  1:39   ` Tian, Kevin
2018-08-30 16:08     ` Andrew Cooper
2018-08-30 15:03   ` Jan Beulich
2018-08-30 23:11   ` Boris Ostrovsky
2018-08-30 23:14     ` Boris Ostrovsky
2018-08-30 23:18       ` Andrew Cooper
