* [PATCH 0/3] x86: remove PVHv1
@ 2017-02-24 15:13 Roger Pau Monne
From: Roger Pau Monne @ 2017-02-24 15:13 UTC
  To: xen-devel

Hello,

This patch series removes the PVHv1 code from both the hypervisor and the
tools, and also gets rid of the has_hvm_container_{domain/vcpu} macros,
since from Xen's point of view there are now only two types of guest: PV
and HVM.
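
As context, this is roughly why the macros become redundant. The sketch
below is illustrative only (based on the guest_type definitions in
xen/include/xen/sched.h, abbreviated; it is not part of the series' diff):

    /* Before this series: */
    enum guest_type { guest_type_pv, guest_type_pvh, guest_type_hvm };
    #define is_hvm_domain(d)            ((d)->guest_type == guest_type_hvm)
    #define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)

    /* With guest_type_pvh gone, only two types remain, so
     * has_hvm_container_domain(d) collapses to is_hvm_domain(d). */
    enum guest_type { guest_type_pv, guest_type_hvm };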

The last patch is a minor code movement that groups all the PVH-related
domain-build functions together.

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH 1/3] x86: remove PVHv1 code
From: Roger Pau Monne @ 2017-02-24 15:13 UTC
  To: xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Tamas K Lengyel, Wei Liu,
	Jun Nakajima, Razvan Cojocaru, George Dunlap, Andrew Cooper,
	Ian Jackson, Paul Durrant, Jan Beulich, Roger Pau Monne

This removal applies to both the hypervisor and the toolstack sides of PVHv1.

Note that on the toolstack side there's one hiccup: in xl the "pvh"
configuration option is translated to builder="hvm",
device_model_version="none".  This is done because otherwise xl would start
parsing PV-like options and filling the PV part of libxl_domain_build_info
(which in turn would pollute the HVM part, because the two are in a union).
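
To illustrate, with this change a config fragment such as (sketch, all
other guest options elided):

    pvh = 1

is handled as if it had instead specified:

    builder = "hvm"
    device_model_version = "none"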

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Tamas K Lengyel <tamas@tklengyel.com>
---
 docs/man/xl.cfg.pod.5.in          |  10 +-
 docs/misc/pvh-readme.txt          |  63 ---------
 tools/debugger/gdbsx/xg/xg_main.c |   4 +-
 tools/libxc/include/xc_dom.h      |   1 -
 tools/libxc/include/xenctrl.h     |   2 +-
 tools/libxc/xc_cpuid_x86.c        |  13 +-
 tools/libxc/xc_dom_core.c         |   9 --
 tools/libxc/xc_dom_x86.c          |  49 +++----
 tools/libxc/xc_domain.c           |   1 -
 tools/libxl/libxl_create.c        |  31 ++--
 tools/libxl/libxl_dom.c           |   1 -
 tools/libxl/libxl_internal.h      |   1 -
 tools/libxl/libxl_x86.c           |   7 +-
 tools/xl/xl_cmdimpl.c             |  10 +-
 xen/arch/x86/cpu/vpmu.c           |   3 +-
 xen/arch/x86/cpuid.c              | 211 ++++++++++++++--------------
 xen/arch/x86/domain.c             |  42 +-----
 xen/arch/x86/domain_build.c       | 287 +-------------------------------------
 xen/arch/x86/domctl.c             |   7 +-
 xen/arch/x86/hvm/hvm.c            |  81 +----------
 xen/arch/x86/hvm/hypercall.c      |   4 +-
 xen/arch/x86/hvm/io.c             |   2 -
 xen/arch/x86/hvm/ioreq.c          |   3 +-
 xen/arch/x86/hvm/irq.c            |   3 -
 xen/arch/x86/hvm/vmx/vmcs.c       |  35 +----
 xen/arch/x86/hvm/vmx/vmx.c        |  12 +-
 xen/arch/x86/mm.c                 |   2 +-
 xen/arch/x86/mm/p2m-pt.c          |   2 +-
 xen/arch/x86/mm/p2m.c             |   6 +-
 xen/arch/x86/physdev.c            |   8 --
 xen/arch/x86/setup.c              |   7 -
 xen/arch/x86/time.c               |  27 ----
 xen/common/domain.c               |   2 -
 xen/common/domctl.c               |  10 --
 xen/common/kernel.c               |   5 -
 xen/common/vm_event.c             |   8 +-
 xen/include/asm-x86/domain.h      |   1 -
 xen/include/asm-x86/hvm/hvm.h     |   3 -
 xen/include/public/domctl.h       |  12 +-
 xen/include/xen/sched.h           |   9 +-
 40 files changed, 196 insertions(+), 798 deletions(-)
 delete mode 100644 docs/misc/pvh-readme.txt

diff --git a/docs/man/xl.cfg.pod.5.in b/docs/man/xl.cfg.pod.5.in
index 505c111..da1fdd7 100644
--- a/docs/man/xl.cfg.pod.5.in
+++ b/docs/man/xl.cfg.pod.5.in
@@ -1064,6 +1064,12 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<pvh=BOOLEAN>
+
+Selects whether to run this PV guest in an HVM container. Default is 0.
+Note that this option is equivalent to setting builder="hvm" and
+device_model_version="none".
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
@@ -1108,10 +1114,6 @@ if your particular guest kernel does not require this behaviour then
 it is safe to allow this to be enabled but you may wish to disable it
 anyway.
 
-=item B<pvh=BOOLEAN>
-
-Selects whether to run this PV guest in an HVM container. Default is 0.
-
 =back
 
 =head2 Fully-virtualised (HVM) Guest Specific Options
diff --git a/docs/misc/pvh-readme.txt b/docs/misc/pvh-readme.txt
deleted file mode 100644
index c5b3de4..0000000
--- a/docs/misc/pvh-readme.txt
+++ /dev/null
@@ -1,63 +0,0 @@
-
-PVH : an x86 PV guest running in an HVM container.
-
-See: http://blog.xen.org/index.php/2012/10/23/the-paravirtualization-spectrum-part-1-the-ends-of-the-spectrum/
-
-At the moment HAP is required for PVH.
-
-At present the only PVH guest is an x86 64bit PV linux. Patches are at:
-   git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
-
-A PVH guest kernel must support following features, as defined for linux
-in arch/x86/xen/xen-head.S:
-
-   #define FEATURES_PVH "|writable_descriptor_tables" \
-                        "|auto_translated_physmap"    \
-                        "|supervisor_mode_kernel"     \
-                        "|hvm_callback_vector"
-
-In a nutshell:
-* the guest uses auto translate:
- - p2m is managed by xen
- - pagetables are owned by the guest
- - mmu_update hypercall not available
-* it uses event callback and not vlapic emulation,
-* IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
-
-For a full list of hcalls supported for PVH, see pvh_hypercall64_table
-in arch/x86/hvm/hvm.c in xen.  From the ABI prespective, it's mostly a
-PV guest with auto translate, although it does use hvm_op for setting
-callback vector, and has a special version of arch_set_guest_info for bringing
-up secondary cpus.
-
-The initial phase targets the booting of a 64bit UP/SMP linux guest in PVH
-mode. This is done by adding: pvh=1 in the config file. xl, and not xm, is
-supported. Phase I patches are broken into three parts:
-   - xen changes for booting of 64bit PVH guest
-   - tools changes for creating a PVH guest
-   - boot of 64bit dom0 in PVH mode.
-
-To boot 64bit dom0 in PVH mode, add dom0pvh to grub xen command line.
-
-Following fixme's exist in the code:
-  - arch/x86/time.c: support more tsc modes.
-
-Following remain to be done for PVH:
-   - Get rid of PVH mode, make it just HVM with some flags set
-   - implement arch_get_info_guest() for pvh.
-   - Investigate what else needs to be done for VMI support.
-   - AMD port.
-   - 32bit PVH guest support in both linux and xen. Xen changes are tagged
-     "32bitfixme".
-   - Add support for monitoring guest behavior. See hvm_memory_event* functions
-     in hvm.c
-   - vcpu hotplug support
-   - Live migration of PVH guests.
-   - Avail PVH dom0 of posted interrupts. (This will be a big win).
-
-
-Note, any emails to me must be cc'd to xen devel mailing list. OTOH, please
-cc me on PVH emails to the xen devel mailing list.
-
-Mukesh Rathor
-mukesh.rathor [at] oracle [dot] com
diff --git a/tools/debugger/gdbsx/xg/xg_main.c b/tools/debugger/gdbsx/xg/xg_main.c
index 8c8a402..7ebf914 100644
--- a/tools/debugger/gdbsx/xg/xg_main.c
+++ b/tools/debugger/gdbsx/xg/xg_main.c
@@ -79,7 +79,6 @@ int xgtrc_on = 0;
 struct xen_domctl domctl;         /* just use a global domctl */
 
 static int     _hvm_guest;        /* hvm guest? 32bit HVMs have 64bit context */
-static int     _pvh_guest;        /* PV guest in HVM container */
 static domid_t _dom_id;           /* guest domid */
 static int     _max_vcpu_id;      /* thus max_vcpu_id+1 VCPUs */
 static int     _dom0_fd;          /* fd of /dev/privcmd */
@@ -308,7 +307,6 @@ xg_attach(int domid, int guest_bitness)
 
     _max_vcpu_id = domctl.u.getdomaininfo.max_vcpu_id;
     _hvm_guest = (domctl.u.getdomaininfo.flags & XEN_DOMINF_hvm_guest);
-    _pvh_guest = (domctl.u.getdomaininfo.flags & XEN_DOMINF_pvh_guest);
     return _max_vcpu_id;
 }
 
@@ -369,7 +367,7 @@ _change_TF(vcpuid_t which_vcpu, int guest_bitness, int setit)
     int sz = sizeof(anyc);
 
     /* first try the MTF for hvm guest. otherwise do manually */
-    if (_hvm_guest || _pvh_guest) {
+    if (_hvm_guest) {
         domctl.u.debug_op.vcpu = which_vcpu;
         domctl.u.debug_op.op = setit ? XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON :
                                        XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF;
diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xc_dom.h
index 608cbc2..b416eb5 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xc_dom.h
@@ -164,7 +164,6 @@ struct xc_dom_image {
     domid_t console_domid;
     domid_t xenstore_domid;
     xen_pfn_t shared_info_mfn;
-    int pvh_enabled;
 
     xc_interface *xch;
     domid_t guest_domid;
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index a48981a..a7083f8 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -443,7 +443,7 @@ typedef struct xc_dominfo {
     uint32_t      ssidref;
     unsigned int  dying:1, crashed:1, shutdown:1,
                   paused:1, blocked:1, running:1,
-                  hvm:1, debugged:1, pvh:1, xenstore:1, hap:1;
+                  hvm:1, debugged:1, xenstore:1, hap:1;
     unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
     unsigned long nr_pages; /* current number, not maximum */
     unsigned long nr_outstanding_pages;
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index 35ecca1..1bedf05 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -167,7 +167,6 @@ struct cpuid_domain_info
     } vendor;
 
     bool hvm;
-    bool pvh;
     uint64_t xfeature_mask;
 
     uint32_t *featureset;
@@ -231,7 +230,6 @@ static int get_cpuid_domain_info(xc_interface *xch, domid_t domid,
         return -ESRCH;
 
     info->hvm = di.hvm;
-    info->pvh = di.pvh;
 
     info->featureset = calloc(host_nr_features, sizeof(*info->featureset));
     if ( !info->featureset )
@@ -682,13 +680,10 @@ static void sanitise_featureset(struct cpuid_domain_info *info)
                 clear_bit(X86_FEATURE_SYSCALL, info->featureset);
         }
 
-        if ( !info->pvh )
-        {
-            clear_bit(X86_FEATURE_PSE, info->featureset);
-            clear_bit(X86_FEATURE_PSE36, info->featureset);
-            clear_bit(X86_FEATURE_PGE, info->featureset);
-            clear_bit(X86_FEATURE_PAGE1GB, info->featureset);
-        }
+        clear_bit(X86_FEATURE_PSE, info->featureset);
+        clear_bit(X86_FEATURE_PSE36, info->featureset);
+        clear_bit(X86_FEATURE_PGE, info->featureset);
+        clear_bit(X86_FEATURE_PAGE1GB, info->featureset);
     }
 
     if ( info->xfeature_mask == 0 )
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 36cd3c8..cf40343 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -896,15 +896,6 @@ int xc_dom_parse_image(struct xc_dom_image *dom)
         goto err;
     }
 
-    if ( dom->pvh_enabled )
-    {
-        const char *pvh_features = "writable_descriptor_tables|"
-                                   "auto_translated_physmap|"
-                                   "supervisor_mode_kernel|"
-                                   "hvm_callback_vector";
-        elf_xen_parse_features(pvh_features, dom->f_requested, NULL);
-    }
-
     /* check features */
     for ( i = 0; i < XENFEAT_NR_SUBMAPS; i++ )
     {
diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 6495e7f..c176c00 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -373,7 +373,7 @@ static x86_pgentry_t get_pg_prot_x86(struct xc_dom_image *dom, int l,
     unsigned m;
 
     prot = domx86->params->lvl_prot[l];
-    if ( l > 0 || dom->pvh_enabled )
+    if ( l > 0 )
         return prot;
 
     for ( m = 0; m < domx86->n_mappings; m++ )
@@ -870,18 +870,15 @@ static int vcpu_x86_32(struct xc_dom_image *dom)
     DOMPRINTF("%s: cr3: pfn 0x%" PRIpfn " mfn 0x%" PRIpfn "",
               __FUNCTION__, dom->pgtables_seg.pfn, cr3_pfn);
 
-    if ( !dom->pvh_enabled )
-    {
-        ctxt->user_regs.ds = FLAT_KERNEL_DS_X86_32;
-        ctxt->user_regs.es = FLAT_KERNEL_DS_X86_32;
-        ctxt->user_regs.fs = FLAT_KERNEL_DS_X86_32;
-        ctxt->user_regs.gs = FLAT_KERNEL_DS_X86_32;
-        ctxt->user_regs.ss = FLAT_KERNEL_SS_X86_32;
-        ctxt->user_regs.cs = FLAT_KERNEL_CS_X86_32;
-
-        ctxt->kernel_ss = ctxt->user_regs.ss;
-        ctxt->kernel_sp = ctxt->user_regs.esp;
-    }
+    ctxt->user_regs.ds = FLAT_KERNEL_DS_X86_32;
+    ctxt->user_regs.es = FLAT_KERNEL_DS_X86_32;
+    ctxt->user_regs.fs = FLAT_KERNEL_DS_X86_32;
+    ctxt->user_regs.gs = FLAT_KERNEL_DS_X86_32;
+    ctxt->user_regs.ss = FLAT_KERNEL_SS_X86_32;
+    ctxt->user_regs.cs = FLAT_KERNEL_CS_X86_32;
+
+    ctxt->kernel_ss = ctxt->user_regs.ss;
+    ctxt->kernel_sp = ctxt->user_regs.esp;
 
     rc = xc_vcpu_setcontext(dom->xch, dom->guest_domid, 0, &any_ctx);
     if ( rc != 0 )
@@ -916,18 +913,15 @@ static int vcpu_x86_64(struct xc_dom_image *dom)
     DOMPRINTF("%s: cr3: pfn 0x%" PRIpfn " mfn 0x%" PRIpfn "",
               __FUNCTION__, dom->pgtables_seg.pfn, cr3_pfn);
 
-    if ( !dom->pvh_enabled )
-    {
-        ctxt->user_regs.ds = FLAT_KERNEL_DS_X86_64;
-        ctxt->user_regs.es = FLAT_KERNEL_DS_X86_64;
-        ctxt->user_regs.fs = FLAT_KERNEL_DS_X86_64;
-        ctxt->user_regs.gs = FLAT_KERNEL_DS_X86_64;
-        ctxt->user_regs.ss = FLAT_KERNEL_SS_X86_64;
-        ctxt->user_regs.cs = FLAT_KERNEL_CS_X86_64;
-
-        ctxt->kernel_ss = ctxt->user_regs.ss;
-        ctxt->kernel_sp = ctxt->user_regs.esp;
-    }
+    ctxt->user_regs.ds = FLAT_KERNEL_DS_X86_64;
+    ctxt->user_regs.es = FLAT_KERNEL_DS_X86_64;
+    ctxt->user_regs.fs = FLAT_KERNEL_DS_X86_64;
+    ctxt->user_regs.gs = FLAT_KERNEL_DS_X86_64;
+    ctxt->user_regs.ss = FLAT_KERNEL_SS_X86_64;
+    ctxt->user_regs.cs = FLAT_KERNEL_CS_X86_64;
+
+    ctxt->kernel_ss = ctxt->user_regs.ss;
+    ctxt->kernel_sp = ctxt->user_regs.esp;
 
     rc = xc_vcpu_setcontext(dom->xch, dom->guest_domid, 0, &any_ctx);
     if ( rc != 0 )
@@ -1106,7 +1100,7 @@ static int meminit_pv(struct xc_dom_image *dom)
     rc = x86_compat(dom->xch, dom->guest_domid, dom->guest_type);
     if ( rc )
         return rc;
-    if ( xc_dom_feature_translated(dom) && !dom->pvh_enabled )
+    if ( xc_dom_feature_translated(dom) )
     {
         dom->shadow_enabled = 1;
         rc = x86_shadow(dom->xch, dom->guest_domid);
@@ -1594,9 +1588,6 @@ static int map_grant_table_frames(struct xc_dom_image *dom)
 {
     int i, rc;
 
-    if ( dom->pvh_enabled )
-        return 0;
-
     for ( i = 0; ; i++ )
     {
         rc = xc_domain_add_to_physmap(dom->xch, dom->guest_domid,
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index d862e53..ea3f193 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -370,7 +370,6 @@ int xc_domain_getinfo(xc_interface *xch,
         info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
         info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
         info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
-        info->pvh      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_pvh_guest);
         info->xenstore = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_xs_domain);
         info->hap      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hap);
 
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index e741b9a..c23ba2f 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -38,9 +38,6 @@ int libxl__domain_create_info_setdefault(libxl__gc *gc,
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         libxl_defbool_setdefault(&c_info->hap, true);
         libxl_defbool_setdefault(&c_info->oos, true);
-    } else {
-        libxl_defbool_setdefault(&c_info->pvh, false);
-        libxl_defbool_setdefault(&c_info->hap, libxl_defbool_val(c_info->pvh));
     }
 
     libxl_defbool_setdefault(&c_info->run_hotplug_scripts, true);
@@ -475,8 +472,6 @@ int libxl__domain_build(libxl__gc *gc,
 
         break;
     case LIBXL_DOMAIN_TYPE_PV:
-        state->pvh_enabled = libxl_defbool_val(d_config->c_info.pvh);
-
         ret = libxl__build_pv(gc, domid, info, state);
         if (ret)
             goto out;
@@ -536,14 +531,6 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         flags |= XEN_DOMCTL_CDF_hvm_guest;
         flags |= libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
         flags |= libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
-    } else if (libxl_defbool_val(info->pvh)) {
-        flags |= XEN_DOMCTL_CDF_pvh_guest;
-        if (!libxl_defbool_val(info->hap)) {
-            LOGD(ERROR, *domid, "HAP must be on for PVH");
-            rc = ERROR_INVAL;
-            goto out;
-        }
-        flags |= XEN_DOMCTL_CDF_hap;
     }
 
     /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
@@ -859,6 +846,24 @@ static void initiate_domain_create(libxl__egc *egc,
         goto error_out;
     }
 
+    libxl_defbool_setdefault(&d_config->c_info.pvh, false);
+    if (libxl_defbool_val(d_config->c_info.pvh)) {
+        if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_HVM) {
+            LOGD(ERROR, domid, "Invalid domain type for PVH: %s",
+                 libxl_domain_type_to_string(d_config->c_info.type));
+            ret = ERROR_INVAL;
+            goto error_out;
+        }
+        if (d_config->b_info.device_model_version !=
+            LIBXL_DEVICE_MODEL_VERSION_NONE) {
+            LOGD(ERROR, domid, "Invalid device model version for PVH: %s",
+                 libxl_device_model_version_to_string(
+                     d_config->b_info.device_model_version));
+            ret = ERROR_INVAL;
+            goto error_out;
+        }
+    }
+
     /* If target_memkb is smaller than max_memkb, the subsequent call
      * to libxc when building HVM domain will enable PoD mode.
      */
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index d519c8d..e133962 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -690,7 +690,6 @@ int libxl__build_pv(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
-    dom->pvh_enabled = state->pvh_enabled;
     dom->container_type = XC_DOM_PV_CONTAINER;
 
     LOG(DEBUG, "pv kernel mapped %d path %s", state->pv_kernel.mapped, state->pv_kernel.path);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 5bbede5..7722665 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1129,7 +1129,6 @@ typedef struct {
     libxl__file_reference pv_kernel;
     libxl__file_reference pv_ramdisk;
     const char * pv_cmdline;
-    bool pvh_enabled;
 
     xen_vmemrange_t *vmemranges;
     uint32_t num_vmemranges;
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index 5da7504..eb6bf66 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -338,11 +338,8 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
     if (rtc_timeoffset)
         xc_domain_set_time_offset(ctx->xch, domid, rtc_timeoffset);
 
-    if (d_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM ||
-        libxl_defbool_val(d_config->c_info.pvh)) {
-
-        unsigned long shadow;
-        shadow = (d_config->b_info.shadow_memkb + 1023) / 1024;
+    if (d_config->b_info.type == LIBXL_DOMAIN_TYPE_HVM) {
+        unsigned long shadow = (d_config->b_info.shadow_memkb + 1023) / 1024;
         xc_shadow_control(ctx->xch, domid, XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION, NULL, 0, &shadow, 0, NULL);
     }
 
diff --git a/tools/xl/xl_cmdimpl.c b/tools/xl/xl_cmdimpl.c
index 0add5dc..09ccb14 100644
--- a/tools/xl/xl_cmdimpl.c
+++ b/tools/xl/xl_cmdimpl.c
@@ -1374,7 +1374,15 @@ static void parse_config_data(const char *config_source,
         !strncmp(buf, "hvm", strlen(buf)))
         c_info->type = LIBXL_DOMAIN_TYPE_HVM;
 
-    xlu_cfg_get_defbool(config, "pvh", &c_info->pvh, 0);
+    if (!xlu_cfg_get_defbool(config, "pvh", &c_info->pvh, 0)) {
+        /* NB: we need to set the type here, or else we will fall into
+         * the PV path, and the set of options will be completely wrong
+         * (even more so because the PV and HVM options are inside a union).
+         */
+        c_info->type = LIBXL_DOMAIN_TYPE_HVM;
+        b_info->device_model_version = LIBXL_DEVICE_MODEL_VERSION_NONE;
+    }
+
     xlu_cfg_get_defbool(config, "hap", &c_info->hap, 0);
 
     if (xlu_cfg_replace_string (config, "name", &c_info->name, 0)) {
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index c8615e8..d319dea 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -225,8 +225,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( !vpmu->xenpmu_data )
             return;
 
-        if ( is_pvh_vcpu(sampling) &&
-             !(vpmu_mode & XENPMU_MODE_ALL) &&
+        if ( !(vpmu_mode & XENPMU_MODE_ALL) &&
              !vpmu->arch_vpmu_ops->do_interrupt(regs) )
             return;
 
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 07d24da..80e1edd 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -576,6 +576,7 @@ static void pv_cpuid(uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *res)
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     const struct cpuid_policy *p = currd->arch.cpuid;
+    const struct cpu_user_regs *regs = guest_cpu_user_regs();
 
     if ( !is_control_domain(currd) && !is_hardware_domain(currd) )
         domain_cpuid(currd, leaf, subleaf, res);
@@ -588,130 +589,120 @@ static void pv_cpuid(uint32_t leaf, uint32_t subleaf, struct cpuid_leaf *res)
         res->c = p->basic._1c;
         res->d = p->basic._1d;
 
-        if ( !is_pvh_domain(currd) )
-        {
-            const struct cpu_user_regs *regs = guest_cpu_user_regs();
+        /*
+         * !!! OSXSAVE handling for PV guests is non-architectural !!!
+         *
+         * Architecturally, the correct code here is simply:
+         *
+         *   if ( curr->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE )
+         *       c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
+         *
+         * However because of bugs in Xen (before c/s bd19080b, Nov 2010,
+         * the XSAVE cpuid flag leaked into guests despite the feature not
+         * being available for use), buggy workarounds were introduced to
+         * Linux (c/s 947ccf9c, also Nov 2010) which relied on the fact
+         * that Xen also incorrectly leaked OSXSAVE into the guest.
+         *
+         * Furthermore, providing architectural OSXSAVE behaviour to
+         * many Linux PV guests triggered a further kernel bug when the
+         * fpu code observes that XSAVEOPT is available, assumes that
+         * xsave state had been set up for the task, and follows a wild
+         * pointer.
+         *
+         * Older Linux PVOPS kernels however do require architectural
+         * behaviour.  They observe Xen's leaked OSXSAVE and assume they
+         * can already use XSETBV, dying with a #UD because the shadowed
+         * CR4.OSXSAVE is clear.  This behaviour has been adjusted in all
+         * observed cases via stable backports of the above changeset.
+         *
+         * Therefore, the leaking of Xen's OSXSAVE setting has become a
+         * de facto part of the PV ABI and can't reasonably be corrected.
+         * It can however be restricted to only the enlightened CPUID
+         * view, as seen by the guest kernel.
+         *
+         * The following situations and logic now applies:
+         *
+         * - Hardware without CPUID faulting support and native CPUID:
+         *    There is nothing Xen can do here.  The host's XSAVE flag will
+         *    leak through and Xen's OSXSAVE choice will leak through.
+         *
+         *    In the case that the guest kernel has not set up OSXSAVE, only
+         *    SSE will be set in xcr0, and guest userspace can't do too much
+         *    damage itself.
+         *
+         * - Enlightened CPUID or CPUID faulting available:
+         *    Xen can fully control what is seen here.  Guest kernels need
+         *    to see the leaked OSXSAVE via the enlightened path, but
+         *    guest userspace and the native view are given architectural
+         *    behaviour.
+         *
+         *    Emulated vs Faulted CPUID is distinguished based on whether a
+         *    #UD or #GP is currently being serviced.
+         */
+        /* OSXSAVE clear in policy.  Fast-forward CR4 back in. */
+        if ( (curr->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE) ||
+             (regs->entry_vector == TRAP_invalid_op &&
+              guest_kernel_mode(curr, regs) &&
+              (read_cr4() & X86_CR4_OSXSAVE)) )
+            res->c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
 
+        /*
+         * At the time of writing, a PV domain is the only viable option
+         * for Dom0.  Several interactions between dom0 and Xen for real
+         * hardware setup have unfortunately been implemented based on
+         * state which incorrectly leaked into dom0.
+         *
+         * These leaks are retained for backwards compatibility, but
+         * restricted to the hardware domain's kernel only.
+         */
+        if ( is_hardware_domain(currd) && guest_kernel_mode(curr, regs) )
+        {
             /*
-             * Delete the PVH condition when HVMLite formally replaces PVH,
-             * and HVM guests no longer enter a PV codepath.
+             * MTRR used to unconditionally leak into PV guests.  They
+             * cannot use the MTRR infrastructure at all, and shouldn't
+             * be able to see the feature.
+             *
+             * Modern PVOPS Linux self-clobbers the MTRR feature, to avoid
+             * trying to use the associated MSRs.  Xenolinux-based PV dom0's
+             * however use the MTRR feature as an indication of the presence
+             * of the XENPF_{add,del,read}_memtype hypercalls.
              */
+            if ( cpu_has_mtrr )
+                res->d |= cpufeat_mask(X86_FEATURE_MTRR);
 
             /*
-             * !!! OSXSAVE handling for PV guests is non-architectural !!!
-             *
-             * Architecturally, the correct code here is simply:
-             *
-             *   if ( curr->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE )
-             *       c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
-             *
-             * However because of bugs in Xen (before c/s bd19080b, Nov 2010,
-             * the XSAVE cpuid flag leaked into guests despite the feature not
-             * being available for use), buggy workarounds where introduced to
-             * Linux (c/s 947ccf9c, also Nov 2010) which relied on the fact
-             * that Xen also incorrectly leaked OSXSAVE into the guest.
-             *
-             * Furthermore, providing architectural OSXSAVE behaviour to a
-             * many Linux PV guests triggered a further kernel bug when the
-             * fpu code observes that XSAVEOPT is available, assumes that
-             * xsave state had been set up for the task, and follows a wild
-             * pointer.
-             *
-             * Older Linux PVOPS kernels however do require architectural
-             * behaviour.  They observe Xen's leaked OSXSAVE and assume they
-             * can already use XSETBV, dying with a #UD because the shadowed
-             * CR4.OSXSAVE is clear.  This behaviour has been adjusted in all
-             * observed cases via stable backports of the above changeset.
-             *
-             * Therefore, the leaking of Xen's OSXSAVE setting has become a
-             * defacto part of the PV ABI and can't reasonably be corrected.
-             * It can however be restricted to only the enlightened CPUID
-             * view, as seen by the guest kernel.
-             *
-             * The following situations and logic now applies:
+             * MONITOR never leaked into PV guests, as PV guests cannot
+             * use the MONITOR/MWAIT instructions.  As such, they require
+             * the feature not to be present in emulated CPUID.
              *
-             * - Hardware without CPUID faulting support and native CPUID:
-             *    There is nothing Xen can do here.  The hosts XSAVE flag will
-             *    leak through and Xen's OSXSAVE choice will leak through.
+             * Modern PVOPS Linux try to be cunning and use native CPUID
+             * to see if the hardware actually supports MONITOR, and by
+             * extension, deep C states.
              *
-             *    In the case that the guest kernel has not set up OSXSAVE, only
-             *    SSE will be set in xcr0, and guest userspace can't do too much
-             *    damage itself.
+             * If the feature is seen, deep-C state information is
+             * obtained from the DSDT and handed back to Xen via the
+             * XENPF_set_processor_pminfo hypercall.
              *
-             * - Enlightened CPUID or CPUID faulting available:
-             *    Xen can fully control what is seen here.  Guest kernels need
-             *    to see the leaked OSXSAVE via the enlightened path, but
-             *    guest userspace and the native is given architectural
-             *    behaviour.
+             * This mechanism is incompatible with an HVM-based hardware
+             * domain, and also with CPUID Faulting.
              *
-             *    Emulated vs Faulted CPUID is distinguised based on whether a
-             *    #UD or #GP is currently being serviced.
+             * Luckily, Xen can be just as 'cunning', and distinguish an
+             * emulated CPUID from a faulted CPUID by whether a #UD or #GP
+             * fault is currently being serviced.  Yuck...
              */
-            /* OSXSAVE clear in policy.  Fast-forward CR4 back in. */
-            if ( (curr->arch.pv_vcpu.ctrlreg[4] & X86_CR4_OSXSAVE) ||
-                 (regs->entry_vector == TRAP_invalid_op &&
-                  guest_kernel_mode(curr, regs) &&
-                  (read_cr4() & X86_CR4_OSXSAVE)) )
-                res->c |= cpufeat_mask(X86_FEATURE_OSXSAVE);
+            if ( cpu_has_monitor && regs->entry_vector == TRAP_gp_fault )
+                res->c |= cpufeat_mask(X86_FEATURE_MONITOR);
 
             /*
-             * At the time of writing, a PV domain is the only viable option
-             * for Dom0.  Several interactions between dom0 and Xen for real
-             * hardware setup have unfortunately been implemented based on
-             * state which incorrectly leaked into dom0.
+             * While MONITOR never leaked into PV guests, EIST always used
+             * to.
              *
-             * These leaks are retained for backwards compatibility, but
-             * restricted to the hardware domains kernel only.
+             * Modern PVOPS will only parse P state information from the
+             * DSDT and return it to Xen if EIST is seen in the emulated
+             * CPUID information.
              */
-            if ( is_hardware_domain(currd) && guest_kernel_mode(curr, regs) )
-            {
-                /*
-                 * MTRR used to unconditionally leak into PV guests.  They
-                 * cannot MTRR infrastructure at all, and shouldn't be able to
-                 * see the feature.
-                 *
-                 * Modern PVOPS Linux self-clobbers the MTRR feature, to avoid
-                 * trying to use the associated MSRs.  Xenolinux-based PV dom0's
-                 * however use the MTRR feature as an indication of the presence
-                 * of the XENPF_{add,del,read}_memtype hypercalls.
-                 */
-                if ( cpu_has_mtrr )
-                    res->d |= cpufeat_mask(X86_FEATURE_MTRR);
-
-                /*
-                 * MONITOR never leaked into PV guests, as PV guests cannot
-                 * use the MONITOR/MWAIT instructions.  As such, they require
-                 * the feature to not being present in emulated CPUID.
-                 *
-                 * Modern PVOPS Linux try to be cunning and use native CPUID
-                 * to see if the hardware actually supports MONITOR, and by
-                 * extension, deep C states.
-                 *
-                 * If the feature is seen, deep-C state information is
-                 * obtained from the DSDT and handed back to Xen via the
-                 * XENPF_set_processor_pminfo hypercall.
-                 *
-                 * This mechanism is incompatible with an HVM-based hardware
-                 * domain, and also with CPUID Faulting.
-                 *
-                 * Luckily, Xen can be just as 'cunning', and distinguish an
-                 * emulated CPUID from a faulted CPUID by whether a #UD or #GP
-                 * fault is currently being serviced.  Yuck...
-                 */
-                if ( cpu_has_monitor && regs->entry_vector == TRAP_gp_fault )
-                    res->c |= cpufeat_mask(X86_FEATURE_MONITOR);
-
-                /*
-                 * While MONITOR never leaked into PV guests, EIST always used
-                 * to.
-                 *
-                 * Modern PVOPS will only parse P state information from the
-                 * DSDT and return it to Xen if EIST is seen in the emulated
-                 * CPUID information.
-                 */
-                if ( cpu_has_eist )
-                    res->c |= cpufeat_mask(X86_FEATURE_EIST);
-            }
+            if ( cpu_has_eist )
+                res->c |= cpufeat_mask(X86_FEATURE_EIST);
         }
 
         if ( vpmu_enabled(curr) &&
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7d3071e..75ded25 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -328,7 +328,7 @@ int switch_compat(struct domain *d)
 
     if ( is_hvm_domain(d) || d->tot_pages != 0 )
         return -EACCES;
-    if ( is_pv_32bit_domain(d) || is_pvh_32bit_domain(d) )
+    if ( is_pv_32bit_domain(d) )
         return 0;
 
     d->arch.has_32bit_shinfo = 1;
@@ -339,12 +339,7 @@ int switch_compat(struct domain *d)
     {
         rc = setup_compat_arg_xlat(v);
         if ( !rc )
-        {
-            if ( !is_pvh_domain(d) )
-                rc = setup_compat_l4(v);
-            else
-                rc = hvm_set_mode(v, 4);
-        }
+            rc = setup_compat_l4(v);
 
         if ( rc )
             goto undo_and_fail;
@@ -363,7 +358,7 @@ int switch_compat(struct domain *d)
     {
         free_compat_arg_xlat(v);
 
-        if ( !is_pvh_domain(d) && !pagetable_is_null(v->arch.guest_table) )
+        if ( !pagetable_is_null(v->arch.guest_table) )
             release_compat_l4(v);
     }
 
@@ -873,7 +868,7 @@ int arch_set_info_guest(
 
     /* The context is a compat-mode one if the target domain is compat-mode;
      * we expect the tools to DTRT even in compat-mode callers. */
-    compat = is_pv_32bit_domain(d) || is_pvh_32bit_domain(d);
+    compat = is_pv_32bit_domain(d);
 
 #define c(fld) (compat ? (c.cmp->fld) : (c.nat->fld))
     flags = c(flags);
@@ -925,18 +920,6 @@ int arch_set_info_guest(
              (c(ldt_ents) > 8192) )
             return -EINVAL;
     }
-    else if ( is_pvh_domain(d) )
-    {
-        if ( c(ctrlreg[0]) || c(ctrlreg[1]) || c(ctrlreg[2]) ||
-             c(ctrlreg[4]) || c(ctrlreg[5]) || c(ctrlreg[6]) ||
-             c(ctrlreg[7]) ||  c(ldt_base) || c(ldt_ents) ||
-             c(user_regs.cs) || c(user_regs.ss) || c(user_regs.es) ||
-             c(user_regs.ds) || c(user_regs.fs) || c(user_regs.gs) ||
-             c(kernel_ss) || c(kernel_sp) || c(gdt_ents) ||
-             (!compat && (c.nat->gs_base_kernel ||
-              c.nat->fs_base || c.nat->gs_base_user)) )
-            return -EINVAL;
-    }
 
     v->fpu_initialised = !!(flags & VGCF_I387_VALID);
 
@@ -992,21 +975,7 @@ int arch_set_info_guest(
             v->arch.debugreg[i] = c(debugreg[i]);
 
         hvm_set_info_guest(v);
-
-        if ( is_hvm_domain(d) || v->is_initialised )
-            goto out;
-
-        /* NB: No need to use PV cr3 un-pickling macros */
-        cr3_gfn = c(ctrlreg[3]) >> PAGE_SHIFT;
-        cr3_page = get_page_from_gfn(d, cr3_gfn, NULL, P2M_ALLOC);
-
-        v->arch.cr3 = page_to_maddr(cr3_page);
-        v->arch.hvm_vcpu.guest_cr[3] = c(ctrlreg[3]);
-        v->arch.guest_table = pagetable_from_page(cr3_page);
-
-        ASSERT(paging_mode_enabled(d));
-
-        goto pvh_skip_pv_stuff;
+        goto out;
     }
 
     init_int80_direct_trap(v);
@@ -1259,7 +1228,6 @@ int arch_set_info_guest(
 
     clear_bit(_VPF_in_reset, &v->pause_flags);
 
- pvh_skip_pv_stuff:
     if ( v->vcpu_id == 0 )
         update_domain_wallclock_time(d);
 
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index 7ef7297..af17d20 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -471,141 +471,6 @@ static void __init process_dom0_ioports_disable(struct domain *dom0)
     }
 }
 
-static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
-                                       unsigned long mfn, unsigned long nr_mfns)
-{
-    unsigned long i;
-    p2m_access_t a;
-    mfn_t omfn;
-    p2m_type_t t;
-    int rc;
-
-    for ( i = 0; i < nr_mfns; i++ )
-    {
-        if ( !iomem_access_permitted(d, mfn + i, mfn + i) )
-        {
-            omfn = get_gfn_query_unlocked(d, gfn + i, &t);
-            guest_physmap_remove_page(d, _gfn(gfn + i), omfn, PAGE_ORDER_4K);
-            continue;
-        }
-
-        if ( rangeset_contains_singleton(mmio_ro_ranges, mfn + i) )
-            a = p2m_access_r;
-        else
-            a = p2m_access_rw;
-
-        if ( (rc = set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i),
-                                      PAGE_ORDER_4K, a)) )
-            panic("pvh_add_mem_mapping: gfn:%lx mfn:%lx i:%ld rc:%d\n",
-                  gfn, mfn, i, rc);
-        if ( !(i & 0xfffff) )
-                process_pending_softirqs();
-    }
-}
-
-/*
- * Set the 1:1 map for all non-RAM regions for dom 0. Thus, dom0 will have
- * the entire io region mapped in the EPT/NPT.
- *
- * pvh fixme: The following doesn't map MMIO ranges when they sit above the
- *            highest E820 covered address.
- */
-static __init void pvh_map_all_iomem(struct domain *d, unsigned long nr_pages)
-{
-    unsigned long start_pfn, end_pfn, end = 0, start = 0;
-    const struct e820entry *entry;
-    unsigned long nump, nmap, navail, mfn, nr_holes = 0;
-    unsigned int i;
-    struct page_info *page;
-    int rc;
-
-    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
-    {
-        end = entry->addr + entry->size;
-
-        if ( entry->type == E820_RAM || entry->type == E820_UNUSABLE ||
-             i == e820.nr_map - 1 )
-        {
-            start_pfn = PFN_DOWN(start);
-
-            /* Unused RAM areas are marked UNUSABLE, so skip them too */
-            if ( entry->type == E820_RAM || entry->type == E820_UNUSABLE )
-                end_pfn = PFN_UP(entry->addr);
-            else
-                end_pfn = PFN_UP(end);
-
-            if ( start_pfn < end_pfn )
-            {
-                nump = end_pfn - start_pfn;
-                /* Add pages to the mapping */
-                pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
-                if ( start_pfn < nr_pages )
-                    nr_holes += (end_pfn < nr_pages) ?
-                                    nump : (nr_pages - start_pfn);
-            }
-            start = end;
-        }
-    }
-
-    /*
-     * Some BIOSes may not report io space above ram that is less than 4GB. So
-     * we map any non-ram upto 4GB.
-     */
-    if ( end < GB(4) )
-    {
-        start_pfn = PFN_UP(end);
-        end_pfn = (GB(4)) >> PAGE_SHIFT;
-        nump = end_pfn - start_pfn;
-        pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
-    }
-
-    /*
-     * Add the memory removed by the holes at the end of the
-     * memory map.
-     */
-    page = page_list_first(&d->page_list);
-    for ( i = 0, entry = e820.map; i < e820.nr_map && nr_holes > 0;
-          i++, entry++ )
-    {
-        if ( entry->type != E820_RAM )
-            continue;
-
-        end_pfn = PFN_UP(entry->addr + entry->size);
-        if ( end_pfn <= nr_pages )
-            continue;
-
-        navail = end_pfn - nr_pages;
-        nmap = min(navail, nr_holes);
-        nr_holes -= nmap;
-        start_pfn = max_t(unsigned long, nr_pages, PFN_DOWN(entry->addr));
-        /*
-         * Populate this memory region using the pages
-         * previously removed by the MMIO holes.
-         */
-        do
-        {
-            mfn = page_to_mfn(page);
-            if ( get_gpfn_from_mfn(mfn) != INVALID_M2P_ENTRY )
-                continue;
-
-            rc = guest_physmap_add_page(d, _gfn(start_pfn), _mfn(mfn), 0);
-            if ( rc != 0 )
-                panic("Unable to add gpfn %#lx mfn %#lx to Dom0 physmap: %d",
-                      start_pfn, mfn, rc);
-            start_pfn++;
-            nmap--;
-            if ( !(nmap & 0xfffff) )
-                process_pending_softirqs();
-        } while ( ((page = page_list_next(page, &d->page_list)) != NULL)
-                  && nmap );
-        ASSERT(nmap == 0);
-        if ( page == NULL )
-            break;
-    }
-
-    ASSERT(nr_holes == 0);
-}
-
 static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
 {
     struct e820entry *entry, *entry_guest;
@@ -676,12 +541,6 @@ static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
 static __init void dom0_update_physmap(struct domain *d, unsigned long pfn,
                                    unsigned long mfn, unsigned long vphysmap_s)
 {
-    if ( is_pvh_domain(d) )
-    {
-        int rc = guest_physmap_add_page(d, _gfn(pfn), _mfn(mfn), 0);
-        BUG_ON(rc);
-        return;
-    }
     if ( !is_pv_32bit_domain(d) )
         ((unsigned long *)vphysmap_s)[pfn] = mfn;
     else
@@ -690,78 +549,6 @@ static __init void dom0_update_physmap(struct domain *d, unsigned long pfn,
     set_gpfn_from_mfn(mfn, pfn);
 }
 
-/* Replace mfns with pfns in dom0 page tables */
-static __init void pvh_fixup_page_tables_for_hap(struct vcpu *v,
-                                                 unsigned long v_start,
-                                                 unsigned long v_end)
-{
-    int i, j, k;
-    l4_pgentry_t *pl4e, *l4start;
-    l3_pgentry_t *pl3e;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
-    unsigned long cr3_pfn;
-
-    ASSERT(paging_mode_enabled(v->domain));
-
-    l4start = map_domain_page(_mfn(pagetable_get_pfn(v->arch.guest_table)));
-
-    /* Clear entries prior to guest L4 start */
-    pl4e = l4start + l4_table_offset(v_start);
-    memset(l4start, 0, (unsigned long)pl4e - (unsigned long)l4start);
-
-    for ( ; pl4e <= l4start + l4_table_offset(v_end - 1); pl4e++ )
-    {
-        pl3e = map_l3t_from_l4e(*pl4e);
-        for ( i = 0; i < PAGE_SIZE / sizeof(*pl3e); i++, pl3e++ )
-        {
-            if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
-                continue;
-
-            pl2e = map_l2t_from_l3e(*pl3e);
-            for ( j = 0; j < PAGE_SIZE / sizeof(*pl2e); j++, pl2e++ )
-            {
-                if ( !(l2e_get_flags(*pl2e)  & _PAGE_PRESENT) )
-                    continue;
-
-                pl1e = map_l1t_from_l2e(*pl2e);
-                for ( k = 0; k < PAGE_SIZE / sizeof(*pl1e); k++, pl1e++ )
-                {
-                    if ( !(l1e_get_flags(*pl1e) & _PAGE_PRESENT) )
-                        continue;
-
-                    *pl1e = l1e_from_pfn(get_gpfn_from_mfn(l1e_get_pfn(*pl1e)),
-                                         l1e_get_flags(*pl1e));
-                }
-                unmap_domain_page(pl1e);
-                *pl2e = l2e_from_pfn(get_gpfn_from_mfn(l2e_get_pfn(*pl2e)),
-                                     l2e_get_flags(*pl2e));
-            }
-            unmap_domain_page(pl2e);
-            *pl3e = l3e_from_pfn(get_gpfn_from_mfn(l3e_get_pfn(*pl3e)),
-                                 l3e_get_flags(*pl3e));
-        }
-        unmap_domain_page(pl3e);
-        *pl4e = l4e_from_pfn(get_gpfn_from_mfn(l4e_get_pfn(*pl4e)),
-                             l4e_get_flags(*pl4e));
-    }
-
-    /* Clear entries post guest L4. */
-    if ( (unsigned long)pl4e & (PAGE_SIZE - 1) )
-        memset(pl4e, 0, PAGE_SIZE - ((unsigned long)pl4e & (PAGE_SIZE - 1)));
-
-    unmap_domain_page(l4start);
-
-    cr3_pfn = get_gpfn_from_mfn(paddr_to_pfn(v->arch.cr3));
-    v->arch.hvm_vcpu.guest_cr[3] = pfn_to_paddr(cr3_pfn);
-
-    /*
-     * Finally, we update the paging modes (hap_update_paging_modes). This will
-     * create monitor_table for us, update v->arch.cr3, and update vmcs.cr3.
-     */
-    paging_update_paging_modes(v);
-}
-
 static __init void mark_pv_pt_pages_rdonly(struct domain *d,
                                            l4_pgentry_t *l4start,
                                            unsigned long vpt_start,
@@ -1053,8 +840,6 @@ static int __init construct_dom0_pv(
     l3_pgentry_t *l3tab = NULL, *l3start = NULL;
     l2_pgentry_t *l2tab = NULL, *l2start = NULL;
     l1_pgentry_t *l1tab = NULL, *l1start = NULL;
-    paddr_t shared_info_paddr = 0;
-    u32 save_pvh_pg_mode = 0;
 
     /*
      * This fully describes the memory layout of the initial domain. All 
@@ -1135,13 +920,6 @@ static int __init construct_dom0_pv(
             rc = -EINVAL;
             goto out;
         }
-        if ( is_pvh_domain(d) &&
-             !test_bit(XENFEAT_hvm_callback_vector, parms.f_supported) )
-        {
-            printk("Kernel does not support PVH mode\n");
-            rc = -EINVAL;
-            goto out;
-        }
     }
 
     if ( compat32 )
@@ -1207,12 +985,6 @@ static int __init construct_dom0_pv(
                         sizeof(struct start_info) +
                         sizeof(struct dom0_vga_console_info));
 
-    if ( is_pvh_domain(d) )
-    {
-        shared_info_paddr = round_pgup(vstartinfo_end) - v_start;
-        vstartinfo_end   += PAGE_SIZE;
-    }
-
     vpt_start        = round_pgup(vstartinfo_end);
     for ( nr_pt_pages = 2; ; nr_pt_pages++ )
     {
@@ -1458,11 +1230,6 @@ static int __init construct_dom0_pv(
         setup_dom0_vcpu(d, i, cpu);
     }
 
-    /*
-     * pvh: we temporarily disable d->arch.paging.mode so that we can build cr3
-     * needed to run on dom0's page tables.
-     */
-    save_pvh_pg_mode = d->arch.paging.mode;
     d->arch.paging.mode = 0;
 
     /* Set up CR3 value for write_ptbase */
@@ -1532,25 +1299,6 @@ static int __init construct_dom0_pv(
                          nr_pages);
     }
 
-    /*
-     * We enable paging mode again so guest_physmap_add_page and
-     * paging_set_allocation will do the right thing for us.
-     */
-    d->arch.paging.mode = save_pvh_pg_mode;
-
-    if ( is_pvh_domain(d) )
-    {
-        bool preempted;
-
-        do {
-            preempted = false;
-            paging_set_allocation(d, dom0_paging_pages(d, nr_pages),
-                                  &preempted);
-            process_pending_softirqs();
-        } while ( preempted );
-    }
-
-
     /* Write the phys->machine and machine->phys table entries. */
     for ( pfn = 0; pfn < count; pfn++ )
     {
@@ -1628,15 +1376,6 @@ static int __init construct_dom0_pv(
         si->console.dom0.info_size = sizeof(struct dom0_vga_console_info);
     }
 
-    /*
-     * PVH: We need to update si->shared_info while we are on dom0 page tables,
-     * but need to defer the p2m update until after we have fixed up the
-     * page tables for PVH so that the m2p for the si pte entry returns
-     * correct pfn.
-     */
-    if ( is_pvh_domain(d) )
-        si->shared_info = shared_info_paddr;
-
     if ( is_pv_32bit_domain(d) )
         xlat_start_info(si, XLAT_start_info_console_dom0);
 
@@ -1670,16 +1409,8 @@ static int __init construct_dom0_pv(
     regs->_eflags = X86_EFLAGS_IF;
 
 #ifdef CONFIG_SHADOW_PAGING
-    if ( opt_dom0_shadow )
-    {
-        if ( is_pvh_domain(d) )
-        {
-            printk("Unsupported option dom0_shadow for PVH\n");
-            return -EINVAL;
-        }
-        if ( paging_enable(d, PG_SH_enable) == 0 ) 
-            paging_update_paging_modes(v);
-    }
+    if ( opt_dom0_shadow && paging_enable(d, PG_SH_enable) == 0 )
+        paging_update_paging_modes(v);
 #endif
 
     /*
@@ -1696,20 +1427,6 @@ static int __init construct_dom0_pv(
         printk(" Xen warning: dom0 kernel broken ELF: %s\n",
                elf_check_broken(&elf));
 
-    if ( is_pvh_domain(d) )
-    {
-        /* finally, fixup the page table, replacing mfns with pfns */
-        pvh_fixup_page_tables_for_hap(v, v_start, v_end);
-
-        /* the pt has correct pfn for si, now update the mfn in the p2m */
-        mfn = virt_to_mfn(d->shared_info);
-        pfn = shared_info_paddr >> PAGE_SHIFT;
-        dom0_update_physmap(d, pfn, mfn, 0);
-
-        pvh_map_all_iomem(d, nr_pages);
-        pvh_setup_e820(d, nr_pages);
-    }
-
     if ( d->domain_id == hardware_domid )
         iommu_hwdom_init(d);
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 364283e..15785e2 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -618,9 +618,8 @@ long arch_do_domctl(
         break;
 
     case XEN_DOMCTL_get_address_size:
-        domctl->u.address_size.size =
-            (is_pv_32bit_domain(d) || is_pvh_32bit_domain(d)) ?
-            32 : BITS_PER_LONG;
+        domctl->u.address_size.size = is_pv_32bit_domain(d) ? 32 :
+                                                              BITS_PER_LONG;
         copyback = 1;
         break;
 
@@ -1491,7 +1490,7 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
 {
     unsigned int i;
     const struct domain *d = v->domain;
-    bool_t compat = is_pv_32bit_domain(d) || is_pvh_32bit_domain(d);
+    bool_t compat = is_pv_32bit_domain(d);
 #define c(fld) (!compat ? (c.nat->fld) : (c.cmp->fld))
 
     if ( !is_pv_domain(d) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5372a9a..017bab3 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -180,9 +180,6 @@ static int __init hvm_enable(void)
         printk("\n");
     }
 
-    if ( !fns->pvh_supported )
-        printk(XENLOG_INFO "HVM: PVH mode not supported on this platform\n");
-
     if ( !opt_altp2m_enabled )
         hvm_funcs.altp2m_supported = 0;
 
@@ -431,10 +428,6 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
 
 void hvm_migrate_timers(struct vcpu *v)
 {
-    /* PVH doesn't use rtc and emulated timers, it uses pvclock mechanism. */
-    if ( is_pvh_vcpu(v) )
-        return;
-
     rtc_migrate_timers(v);
     pt_migrate(v);
 }
@@ -594,19 +587,6 @@ static int hvm_print_line(
     return X86EMUL_OKAY;
 }
 
-static int handle_pvh_io(
-    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
-{
-    struct domain *currd = current->domain;
-
-    if ( dir == IOREQ_WRITE )
-        guest_io_write(port, bytes, *val, currd);
-    else
-        *val = guest_io_read(port, bytes, currd);
-
-    return X86EMUL_OKAY;
-}
-
 int hvm_domain_initialise(struct domain *d)
 {
     int rc;
@@ -618,22 +598,6 @@ int hvm_domain_initialise(struct domain *d)
         return -EINVAL;
     }
 
-    if ( is_pvh_domain(d) )
-    {
-        if ( !hvm_funcs.pvh_supported )
-        {
-            printk(XENLOG_G_WARNING "Attempt to create a PVH guest "
-                   "on a system without necessary hardware support\n");
-            return -EINVAL;
-        }
-        if ( !hap_enabled(d) )
-        {
-            printk(XENLOG_G_INFO "PVH guest must have HAP on\n");
-            return -EINVAL;
-        }
-
-    }
-
     spin_lock_init(&d->arch.hvm_domain.irq_lock);
     spin_lock_init(&d->arch.hvm_domain.uc_lock);
     spin_lock_init(&d->arch.hvm_domain.write_map.lock);
@@ -675,12 +639,6 @@ int hvm_domain_initialise(struct domain *d)
 
     hvm_ioreq_init(d);
 
-    if ( is_pvh_domain(d) )
-    {
-        register_portio_handler(d, 0, 0x10003, handle_pvh_io);
-        return 0;
-    }
-
     hvm_init_guest_time(d);
 
     d->arch.hvm_domain.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;
@@ -723,9 +681,6 @@ int hvm_domain_initialise(struct domain *d)
 
 void hvm_domain_relinquish_resources(struct domain *d)
 {
-    if ( is_pvh_domain(d) )
-        return;
-
     if ( hvm_funcs.nhvm_domain_relinquish_resources )
         hvm_funcs.nhvm_domain_relinquish_resources(d);
 
@@ -754,9 +709,6 @@ void hvm_domain_destroy(struct domain *d)
 
     hvm_destroy_cacheattr_region_list(d);
 
-    if ( is_pvh_domain(d) )
-        return;
-
     hvm_funcs.domain_destroy(d);
     rtc_deinit(d);
     stdvga_deinit(d);
@@ -1525,13 +1477,6 @@ int hvm_vcpu_initialise(struct vcpu *v)
 
     v->arch.hvm_vcpu.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
 
-    if ( is_pvh_domain(d) )
-    {
-        /* This is for hvm_long_mode_enabled(v). */
-        v->arch.hvm_vcpu.guest_efer = EFER_LMA | EFER_LME;
-        return 0;
-    }
-
     rc = setup_compat_arg_xlat(v); /* teardown: free_compat_arg_xlat() */
     if ( rc != 0 )
         goto fail4;
@@ -1869,9 +1814,6 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
             __put_gfn(hostp2m, gfn);
 
         rc = 0;
-        if ( unlikely(is_pvh_domain(currd)) )
-            goto out;
-
         if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
         rc = 1;
@@ -2213,15 +2155,6 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
          (value & (X86_CR0_PE | X86_CR0_PG)) == X86_CR0_PG )
         goto gpf;
 
-    /* A pvh is not expected to change to real mode. */
-    if ( is_pvh_domain(d) &&
-         (value & (X86_CR0_PE | X86_CR0_PG)) != (X86_CR0_PG | X86_CR0_PE) )
-    {
-        printk(XENLOG_G_WARNING
-               "PVH attempting to turn off PE/PG. CR0:%lx\n", value);
-        goto gpf;
-    }
-
     if ( may_defer && unlikely(v->domain->arch.monitor.write_ctrlreg_enabled &
                                monitor_ctrlreg_bitmask(VM_EVENT_X86_CR0)) )
     {
@@ -2386,11 +2319,6 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
                         "EFER.LMA is set");
             goto gpf;
         }
-        if ( is_pvh_vcpu(v) )
-        {
-            HVM_DBG_LOG(DBG_LEVEL_1, "32-bit PVH guest cleared CR4.PAE");
-            goto gpf;
-        }
     }
 
     old_cr = v->arch.hvm_vcpu.guest_cr[4];
@@ -3544,8 +3472,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     case MSR_IA32_APICBASE:
-        if ( unlikely(is_pvh_vcpu(v)) ||
-             !vlapic_msr_set(vcpu_vlapic(v), msr_content) )
+        if ( !vlapic_msr_set(vcpu_vlapic(v), msr_content) )
             goto gp_fault;
         break;
 
@@ -4068,8 +3995,7 @@ static int hvmop_set_param(
         return -ESRCH;
 
     rc = -EINVAL;
-    if ( !has_hvm_container_domain(d) ||
-         (is_pvh_domain(d) && (a.index != HVM_PARAM_CALLBACK_IRQ)) )
+    if ( !has_hvm_container_domain(d) )
         goto out;
 
     rc = hvm_allow_set_param(d, &a);
@@ -4324,8 +4250,7 @@ static int hvmop_get_param(
         return -ESRCH;
 
     rc = -EINVAL;
-    if ( !has_hvm_container_domain(d) ||
-         (is_pvh_domain(d) && (a.index != HVM_PARAM_CALLBACK_IRQ)) )
+    if ( !has_hvm_container_domain(d) )
         goto out;
 
     rc = hvm_allow_get_param(d, &a);
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 6499caa..8cc7cc6 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -78,7 +78,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     switch ( cmd )
     {
     default:
-        if ( !is_pvh_vcpu(curr) || !is_hardware_domain(curr->domain) )
+        if ( !is_hardware_domain(curr->domain) )
             return -ENOSYS;
         /* fall through */
     case PHYSDEVOP_map_pirq:
@@ -86,7 +86,7 @@ static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
-        if ( !has_pirq(curr->domain) && !is_pvh_vcpu(curr) )
+        if ( !has_pirq(curr->domain) )
             return -ENOSYS;
         break;
     }
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 205fb68..5016300 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -84,8 +84,6 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate)
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     int rc;
 
-    ASSERT(!is_pvh_vcpu(curr));
-
     hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs());
 
     rc = hvm_emulate_one(&ctxt);
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index ebb3eca..ad2edad 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1387,8 +1387,7 @@ void hvm_ioreq_init(struct domain *d)
     spin_lock_init(&d->arch.hvm_domain.ioreq_server.lock);
     INIT_LIST_HEAD(&d->arch.hvm_domain.ioreq_server.list);
 
-    if ( !is_pvh_domain(d) )
-        register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
 /*
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index ff7d288..760544b 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -423,9 +423,6 @@ struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
          && vcpu_info(v, evtchn_upcall_pending) )
         return hvm_intack_vector(plat->irq.callback_via.vector);
 
-    if ( is_pvh_vcpu(v) )
-        return hvm_intack_none;
-
     if ( vlapic_accept_pic_intr(v) && plat->vpic[0].int_output )
         return hvm_intack_pic(0);
 
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index db8c4f4..9f5cc35 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1068,20 +1068,6 @@ static int construct_vmcs(struct vcpu *v)
                   vmx_pin_based_exec_control & ~PIN_BASED_POSTED_INTERRUPT);
     }
 
-    if ( is_pvh_domain(d) )
-    {
-        /* Unrestricted guest (real mode for EPT) */
-        v->arch.hvm_vmx.secondary_exec_control &=
-            ~SECONDARY_EXEC_UNRESTRICTED_GUEST;
-
-        /* Start in 64-bit mode. PVH 32bitfixme. */
-        vmentry_ctl |= VM_ENTRY_IA32E_MODE;       /* GUEST_EFER.LME/LMA ignored */
-
-        ASSERT(v->arch.hvm_vmx.exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
-        ASSERT(v->arch.hvm_vmx.exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP);
-        ASSERT(!(v->arch.hvm_vmx.exec_control & CPU_BASED_RDTSC_EXITING));
-    }
-
     vmx_update_cpu_exec_control(v);
 
     __vmwrite(VM_EXIT_CONTROLS, vmexit_ctl);
@@ -1217,11 +1203,7 @@ static int construct_vmcs(struct vcpu *v)
     __vmwrite(GUEST_DS_AR_BYTES, 0xc093);
     __vmwrite(GUEST_FS_AR_BYTES, 0xc093);
     __vmwrite(GUEST_GS_AR_BYTES, 0xc093);
-    if ( is_pvh_domain(d) )
-        /* CS.L == 1, exec, read/write, accessed. */
-        __vmwrite(GUEST_CS_AR_BYTES, 0xa09b);
-    else
-        __vmwrite(GUEST_CS_AR_BYTES, 0xc09b); /* exec/read, accessed */
+    __vmwrite(GUEST_CS_AR_BYTES, 0xc09b); /* exec/read, accessed */
 
     /* Guest IDT. */
     __vmwrite(GUEST_IDTR_BASE, 0);
@@ -1251,23 +1233,10 @@ static int construct_vmcs(struct vcpu *v)
               | (1U << TRAP_no_device);
     vmx_update_exception_bitmap(v);
 
-    /*
-     * In HVM domains, this happens on the realmode->paging
-     * transition.  Since PVH never goes through this transition, we
-     * need to do it at start-of-day.
-     */
-    if ( is_pvh_domain(d) )
-        vmx_update_debug_state(v);
-
     v->arch.hvm_vcpu.guest_cr[0] = X86_CR0_PE | X86_CR0_ET;
-
-    /* PVH domains always start in paging mode */
-    if ( is_pvh_domain(d) )
-        v->arch.hvm_vcpu.guest_cr[0] |= X86_CR0_PG;
-
     hvm_update_guest_cr(v, 0);
 
-    v->arch.hvm_vcpu.guest_cr[4] = is_pvh_domain(d) ? X86_CR4_PAE : 0;
+    v->arch.hvm_vcpu.guest_cr[4] = 0;
     hvm_update_guest_cr(v, 4);
 
     if ( cpu_has_vmx_tpr_shadow )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b7f25a8..87a5b82 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2063,9 +2063,6 @@ static int vmx_set_mode(struct vcpu *v, int mode)
 {
     unsigned long attr;
 
-    if ( !is_pvh_vcpu(v) )
-        return 0;
-
     ASSERT((mode == 4) || (mode == 8));
 
     attr = (mode == 4) ? 0xc09b : 0xa09b;
@@ -2298,12 +2295,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.sync_pir_to_irr = NULL;
     }
 
-    if ( cpu_has_vmx_ept
-         && cpu_has_vmx_pat
-         && cpu_has_vmx_msr_bitmap
-         && cpu_has_vmx_secondary_exec_control )
-        vmx_function_table.pvh_supported = 1;
-
     if ( cpu_has_vmx_tsc_scaling )
         vmx_function_table.tsc_scaling.ratio_frac_bits = 48;
 
@@ -3734,8 +3725,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         if ( exit_qualification & 0x10 )
         {
             /* INS, OUTS */
-            if ( unlikely(is_pvh_vcpu(v)) /* PVH fixme */ ||
-                 !hvm_emulate_one_insn(x86_insn_is_portio) )
+            if ( !hvm_emulate_one_insn(x86_insn_is_portio) )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
         }
         else
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1661e66..12dabcf 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3041,7 +3041,7 @@ static struct domain *get_pg_owner(domid_t domid)
         goto out;
     }
 
-    if ( !is_pvh_domain(curr) && unlikely(paging_mode_translate(curr)) )
+    if ( unlikely(paging_mode_translate(curr)) )
     {
         MEM_LOG("Cannot mix foreign mappings with translated domains");
         goto out;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index bbfa54e..07e2ccd 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -532,7 +532,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 
     if ( unlikely(p2m_is_foreign(p2mt)) )
     {
-        /* pvh fixme: foreign types are only supported on ept at present */
+        /* hvm fixme: foreign types are only supported on ept at present */
         gdprintk(XENLOG_WARNING, "Unimplemented foreign p2m type.\n");
         return -EINVAL;
     }
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 2eee9cd..a5651a3 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -589,7 +589,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 }
 
 /*
- * pvh fixme: when adding support for pvh non-hardware domains, this path must
+ * hvm fixme: when adding support for pvh non-hardware domains, this path must
  * cleanup any foreign p2m types (release refcnts on them).
  */
 void p2m_teardown(struct p2m_domain *p2m)
@@ -2411,10 +2411,10 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
     struct domain *fdom;
 
     ASSERT(tdom);
-    if ( foreigndom == DOMID_SELF || !is_pvh_domain(tdom) )
+    if ( foreigndom == DOMID_SELF )
         return -EINVAL;
     /*
-     * pvh fixme: until support is added to p2m teardown code to cleanup any
+     * hvm fixme: until support is added to p2m teardown code to cleanup any
      * foreign entries, limit this to hardware domain only.
      */
     if ( !is_hardware_domain(tdom) )
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index fc45bfb..81cd6c9 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -517,10 +517,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct vcpu *curr = current;
         struct physdev_set_iopl set_iopl;
 
-        ret = -ENOSYS;
-        if ( is_pvh_vcpu(curr) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&set_iopl, arg, 1) != 0 )
             break;
@@ -536,10 +532,6 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         struct vcpu *curr = current;
         struct physdev_set_iobitmap set_iobitmap;
 
-        ret = -ENOSYS;
-        if ( is_pvh_vcpu(curr) )
-            break;
-
         ret = -EFAULT;
         if ( copy_from_guest(&set_iobitmap, arg, 1) != 0 )
             break;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 66b7aba..fa601da 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -62,10 +62,6 @@ integer_param("maxcpus", max_cpus);
 
 unsigned long __read_mostly cr4_pv32_mask;
 
-/* Boot dom0 in pvh mode */
-static bool_t __initdata opt_dom0pvh;
-boolean_param("dom0pvh", opt_dom0pvh);
-
 /* **** Linux config option: propagated to domain0. */
 /* "acpi=off":    Sisables both ACPI table parsing and interpreter. */
 /* "acpi=force":  Override the disable blacklist.                   */
@@ -1545,9 +1541,6 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     init_guest_cpuid();
 
-    if ( opt_dom0pvh )
-        domcr_flags |= DOMCRF_pvh | DOMCRF_hap;
-
     if ( dom0_pvh )
     {
         domcr_flags |= DOMCRF_hvm |
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 3ad2ab0..b739dc8 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -2013,33 +2013,6 @@ void tsc_set_info(struct domain *d,
         d->arch.vtsc = 0;
         return;
     }
-    if ( is_pvh_domain(d) )
-    {
-        /*
-         * PVH fixme: support more tsc modes.
-         *
-         * NB: The reason this is disabled here appears to be with
-         * additional support required to do the PV RDTSC emulation.
-         * Since we're no longer taking the PV emulation path for
-         * anything, we may be able to remove this restriction.
-         *
-         * pvhfixme: Experiments show that "default" works for PVH,
-         * but "always_emulate" does not for some reason.  Figure out
-         * why.
-         */
-        switch ( tsc_mode )
-        {
-        case TSC_MODE_NEVER_EMULATE:
-            break;
-        default:
-            printk(XENLOG_WARNING
-                   "PVH currently does not support tsc emulation. Setting timer_mode = never_emulate\n");
-            /* FALLTHRU */
-        case TSC_MODE_DEFAULT:
-            tsc_mode = TSC_MODE_NEVER_EMULATE;
-            break;
-        }
-    }
 
     switch ( d->arch.tsc_mode = tsc_mode )
     {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 4492c9c..b22aacc 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -304,8 +304,6 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
 
     if ( domcr_flags & DOMCRF_hvm )
         d->guest_type = guest_type_hvm;
-    else if ( domcr_flags & DOMCRF_pvh )
-        d->guest_type = guest_type_pvh;
     else
         d->guest_type = guest_type_pv;
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 93e3029..951a5dc 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -194,9 +194,6 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     case guest_type_hvm:
         info->flags |= XEN_DOMINF_hvm_guest;
         break;
-    case guest_type_pvh:
-        info->flags |= XEN_DOMINF_pvh_guest;
-        break;
     default:
         break;
     }
@@ -501,7 +498,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         ret = -EINVAL;
         if ( (op->u.createdomain.flags &
              ~(XEN_DOMCTL_CDF_hvm_guest
-               | XEN_DOMCTL_CDF_pvh_guest
                | XEN_DOMCTL_CDF_hap
                | XEN_DOMCTL_CDF_s3_integrity
                | XEN_DOMCTL_CDF_oos_off
@@ -532,15 +528,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             rover = dom;
         }
 
-        if ( (op->u.createdomain.flags & XEN_DOMCTL_CDF_hvm_guest)
-             && (op->u.createdomain.flags & XEN_DOMCTL_CDF_pvh_guest) )
-            return -EINVAL;
-
         domcr_flags = 0;
         if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_hvm_guest )
             domcr_flags |= DOMCRF_hvm;
-        if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_pvh_guest )
-            domcr_flags |= DOMCRF_pvh;
         if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_hap )
             domcr_flags |= DOMCRF_hap;
         if ( op->u.createdomain.flags & XEN_DOMCTL_CDF_s3_integrity )
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 4b87c60..a4ae612 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -324,11 +324,6 @@ DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
                              (1U << XENFEAT_highmem_assist) |
                              (1U << XENFEAT_gnttab_map_avail_bits);
                 break;
-            case guest_type_pvh:
-                fi.submap |= (1U << XENFEAT_hvm_safe_pvclock) |
-                             (1U << XENFEAT_supervisor_mode_kernel) |
-                             (1U << XENFEAT_hvm_callback_vector);
-                break;
             case guest_type_hvm:
                 fi.submap |= (1U << XENFEAT_hvm_safe_pvclock) |
                              (1U << XENFEAT_hvm_callback_vector) |
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 45046d1..0fe9a53 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -606,8 +606,8 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
             struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
             rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+            /* hvm fixme: p2m_is_foreign types need addressing */
+            if ( is_hvm_domain(hardware_domain) )
                 break;
 
             rc = -ENODEV;
@@ -707,8 +707,8 @@ int vm_event_domctl(struct domain *d, xen_domctl_vm_event_op_t *vec,
         {
         case XEN_VM_EVENT_ENABLE:
             rc = -EOPNOTSUPP;
-            /* pvh fixme: p2m_is_foreign types need addressing */
-            if ( is_pvh_vcpu(current) || is_pvh_domain(hardware_domain) )
+            /* hvm fixme: p2m_is_foreign types need addressing */
+            if ( is_hvm_domain(hardware_domain) )
                 break;
 
             rc = -ENODEV;
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index d8749df..488398a 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -15,7 +15,6 @@
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 #define is_pv_32bit_domain(d)  ((d)->arch.is_32bit_pv)
 #define is_pv_32bit_vcpu(v)    (is_pv_32bit_domain((v)->domain))
-#define is_pvh_32bit_domain(d) (is_pvh_domain(d) && has_32bit_shinfo(d))
 
 #define is_hvm_pv_evtchn_domain(d) (has_hvm_container_domain(d) && \
         d->arch.hvm_domain.irq.callback_via_type == HVMIRQ_callback_vector)
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 87b203a..8c8c633 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -91,9 +91,6 @@ struct hvm_function_table {
     /* Support Hardware-Assisted Paging? */
     bool_t hap_supported;
 
-    /* Necessary hardware support for PVH mode? */
-    bool_t pvh_supported;
-
     /* Necessary hardware support for alternate p2m's? */
     bool altp2m_supported;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 85cbb7c..65b7475 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -60,11 +60,8 @@ struct xen_domctl_createdomain {
  /* Disable out-of-sync shadow page tables? */
 #define _XEN_DOMCTL_CDF_oos_off       3
 #define XEN_DOMCTL_CDF_oos_off        (1U<<_XEN_DOMCTL_CDF_oos_off)
- /* Is this a PVH guest (as opposed to an HVM or PV guest)? */
-#define _XEN_DOMCTL_CDF_pvh_guest     4
-#define XEN_DOMCTL_CDF_pvh_guest      (1U<<_XEN_DOMCTL_CDF_pvh_guest)
  /* Is this a xenstore domain? */
-#define _XEN_DOMCTL_CDF_xs_domain     5
+#define _XEN_DOMCTL_CDF_xs_domain     4
 #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
     uint32_t flags;
     struct xen_arch_domainconfig config;
@@ -97,14 +94,11 @@ struct xen_domctl_getdomaininfo {
  /* Being debugged.  */
 #define _XEN_DOMINF_debugged  6
 #define XEN_DOMINF_debugged   (1U<<_XEN_DOMINF_debugged)
-/* domain is PVH */
-#define _XEN_DOMINF_pvh_guest 7
-#define XEN_DOMINF_pvh_guest  (1U<<_XEN_DOMINF_pvh_guest)
 /* domain is a xenstore domain */
-#define _XEN_DOMINF_xs_domain 8
+#define _XEN_DOMINF_xs_domain 7
 #define XEN_DOMINF_xs_domain  (1U<<_XEN_DOMINF_xs_domain)
 /* domain has hardware assisted paging */
-#define _XEN_DOMINF_hap       9
+#define _XEN_DOMINF_hap       8
 #define XEN_DOMINF_hap        (1U<<_XEN_DOMINF_hap)
  /* XEN_DOMINF_shutdown guest-supplied code.  */
 #define XEN_DOMINF_shutdownmask 255
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 0929c0b..cc11999 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -312,7 +312,7 @@ struct evtchn_port_ops;
  * will be false, but has_hvm_container_* checks will be true.
  */
 enum guest_type {
-    guest_type_pv, guest_type_pvh, guest_type_hvm
+    guest_type_pv, guest_type_hvm
 };
 
 struct domain
@@ -555,11 +555,8 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
  /* DOMCRF_oos_off: dont use out-of-sync optimization for shadow page tables */
 #define _DOMCRF_oos_off         4
 #define DOMCRF_oos_off          (1U<<_DOMCRF_oos_off)
- /* DOMCRF_pvh: Create PV domain in HVM container. */
-#define _DOMCRF_pvh             5
-#define DOMCRF_pvh              (1U<<_DOMCRF_pvh)
  /* DOMCRF_xs_domain: xenstore domain */
-#define _DOMCRF_xs_domain       6
+#define _DOMCRF_xs_domain       5
 #define DOMCRF_xs_domain        (1U<<_DOMCRF_xs_domain)
 
 /*
@@ -875,8 +872,6 @@ void watchdog_domain_destroy(struct domain *d);
 
 #define is_pv_domain(d) ((d)->guest_type == guest_type_pv)
 #define is_pv_vcpu(v)   (is_pv_domain((v)->domain))
-#define is_pvh_domain(d) ((d)->guest_type == guest_type_pvh)
-#define is_pvh_vcpu(v)   (is_pvh_domain((v)->domain))
 #define is_hvm_domain(d) ((d)->guest_type == guest_type_hvm)
 #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
 #define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)
-- 
2.10.1 (Apple Git-78)


* [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu}
  2017-02-24 15:13 [PATCH 0/3] x86: remove PVHv1 Roger Pau Monne
  2017-02-24 15:13 ` [PATCH 1/3] x86: remove PVHv1 code Roger Pau Monne
@ 2017-02-24 15:13 ` Roger Pau Monne
  2017-02-24 15:35   ` Andrew Cooper
                     ` (2 more replies)
  2017-02-24 15:13 ` [PATCH 3/3] x86/PVHv2: move pvh_setup_e820 together with the other pvh functions Roger Pau Monne
  2 siblings, 3 replies; 11+ messages in thread
From: Roger Pau Monne @ 2017-02-24 15:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Jan Beulich, Jun Nakajima,
	Andrew Cooper, Christoph Egger, Tim Deegan, George Dunlap,
	Suravee Suthikulpanit, Boris Ostrovsky, Roger Pau Monne

These macros are now redundant: with PVHv1 removed, a PVHv2 guest is an HVM
domain from Xen's point of view, so has_hvm_container_* is always equivalent
to is_hvm_*.
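
To illustrate why the macros carry no information any more, here is a
minimal, standalone sketch; the toy struct below is hypothetical, only the
predicate definitions mirror xen/include/xen/sched.h:

    #include <assert.h>

    enum guest_type { guest_type_pv, guest_type_hvm };
    struct domain { enum guest_type guest_type; }; /* toy model */

    #define is_pv_domain(d)             ((d)->guest_type == guest_type_pv)
    #define is_hvm_domain(d)            ((d)->guest_type == guest_type_hvm)
    /* The old definition: "anything that is not PV". */
    #define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)

    int main(void)
    {
        struct domain d = { guest_type_hvm };

        /* With guest_type_pvh gone, the two checks always agree. */
        assert(has_hvm_container_domain(&d) == is_hvm_domain(&d));
        d.guest_type = guest_type_pv;
        assert(has_hvm_container_domain(&d) == is_hvm_domain(&d));
        return 0;
    }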

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Christoph Egger <chegger@amazon.de>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/cpu/mcheck/vmce.c      |  6 +++---
 xen/arch/x86/cpu/vpmu.c             |  4 ++--
 xen/arch/x86/cpu/vpmu_amd.c         | 12 ++++++------
 xen/arch/x86/cpu/vpmu_intel.c       | 31 +++++++++++++++----------------
 xen/arch/x86/cpuid.c                |  2 +-
 xen/arch/x86/debug.c                |  2 +-
 xen/arch/x86/domain.c               | 28 ++++++++++++++--------------
 xen/arch/x86/domain_build.c         |  5 ++---
 xen/arch/x86/domctl.c               |  2 +-
 xen/arch/x86/hvm/dm.c               |  2 +-
 xen/arch/x86/hvm/hvm.c              |  6 +++---
 xen/arch/x86/hvm/irq.c              |  2 +-
 xen/arch/x86/hvm/mtrr.c             |  2 +-
 xen/arch/x86/hvm/vmsi.c             |  3 +--
 xen/arch/x86/hvm/vmx/vmcs.c         |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c          |  4 ++--
 xen/arch/x86/mm.c                   |  4 ++--
 xen/arch/x86/mm/paging.c            |  2 +-
 xen/arch/x86/mm/shadow/common.c     |  9 ++++-----
 xen/arch/x86/setup.c                |  2 +-
 xen/arch/x86/time.c                 | 11 +++++------
 xen/arch/x86/traps.c                |  4 ++--
 xen/arch/x86/x86_64/traps.c         |  4 ++--
 xen/drivers/passthrough/x86/iommu.c |  2 +-
 xen/include/asm-x86/domain.h        |  2 +-
 xen/include/asm-x86/event.h         |  2 +-
 xen/include/asm-x86/guest_access.h  | 12 ++++++------
 xen/include/asm-x86/hvm/hvm.h       |  2 +-
 xen/include/xen/sched.h             |  2 --
 xen/include/xen/tmem_xen.h          |  5 ++---
 30 files changed, 85 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index 8b727b4..6fb7833 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -82,7 +82,7 @@ int vmce_restore_vcpu(struct vcpu *v, const struct hvm_vmce_vcpu *ctxt)
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
                 " %#" PRIx64 " for %pv (supported: %#Lx)\n",
-                has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
+                is_hvm_vcpu(v) ? "HVM" : "PV", ctxt->caps,
                 v, guest_mcg_cap & ~MCG_CAP_COUNT);
         return -EPERM;
     }
@@ -364,7 +364,7 @@ int inject_vmce(struct domain *d, int vcpu)
         if ( !v->is_initialised )
             continue;
 
-        if ( (has_hvm_container_domain(d) ||
+        if ( (is_hvm_domain(d) ||
               guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
              !test_and_set_bool(v->mce_pending) )
         {
@@ -444,7 +444,7 @@ int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn)
     if ( !mfn_valid(mfn) )
         return -EINVAL;
 
-    if ( !has_hvm_container_domain(d) || !paging_mode_hap(d) )
+    if ( !is_hvm_domain(d) || !paging_mode_hap(d) )
         return -EOPNOTSUPP;
 
     rc = -1;
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index d319dea..5b1e0ec 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -237,7 +237,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
         vpmu->arch_vpmu_ops->arch_vpmu_save(sampling, 1);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        if ( has_hvm_container_vcpu(sampled) )
+        if ( is_hvm_vcpu(sampled) )
             *flags = 0;
         else
             *flags = PMU_SAMPLE_PV;
@@ -288,7 +288,7 @@ void vpmu_do_interrupt(struct cpu_user_regs *regs)
             r->sp = cur_regs->rsp;
             r->flags = cur_regs->rflags;
 
-            if ( !has_hvm_container_vcpu(sampled) )
+            if ( !is_hvm_vcpu(sampled) )
             {
                 r->ss = cur_regs->ss;
                 r->cs = cur_regs->cs;
diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index e0acbf4..b3c3697 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -305,8 +305,8 @@ static int amd_vpmu_save(struct vcpu *v,  bool_t to_guest)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         has_hvm_container_vcpu(v) && is_msr_bitmap_on(vpmu) )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
+         is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
     if ( to_guest )
@@ -367,7 +367,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
         return -EINVAL;
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( has_hvm_container_vcpu(v) && (type == MSR_TYPE_CTRL) &&
+    if ( is_hvm_vcpu(v) && (type == MSR_TYPE_CTRL) &&
          !is_guest_mode(msr_content) )
     {
         set_guest_mode(msr_content);
@@ -381,7 +381,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
             return 0;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( has_hvm_container_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
              amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -390,7 +390,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( has_hvm_container_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
              amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownership(PMU_OWNER_HVM);
     }
@@ -433,7 +433,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( has_hvm_container_vcpu(v) && is_msr_bitmap_on(vpmu) )
+    if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index 0ce68f1..4567cd3 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -306,7 +306,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 
-    if ( !has_hvm_container_vcpu(v) )
+    if ( !is_hvm_vcpu(v) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
@@ -314,7 +314,7 @@ static int core2_vpmu_save(struct vcpu *v, bool_t to_guest)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !has_hvm_container_vcpu(v) )
+    if ( !is_hvm_vcpu(v) )
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     if ( !vpmu_are_all_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
@@ -323,8 +323,8 @@ static int core2_vpmu_save(struct vcpu *v, bool_t to_guest)
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         has_hvm_container_vcpu(v) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
+         cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     if ( to_guest )
@@ -362,7 +362,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vcpu_vpmu(v), VPMU_CPU_HAS_DS) )
         wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
 
-    if ( !has_hvm_container_vcpu(v) )
+    if ( !is_hvm_vcpu(v) )
     {
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
         core2_vpmu_cxt->global_ovf_ctrl = 0;
@@ -413,7 +413,7 @@ static int core2_vpmu_verify(struct vcpu *v)
     }
 
     if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) &&
-         !(has_hvm_container_vcpu(v)
+         !(is_hvm_vcpu(v)
            ? is_canonical_address(core2_vpmu_cxt->ds_area)
            : __addr_ok(core2_vpmu_cxt->ds_area)) )
         return -EINVAL;
@@ -474,7 +474,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
-    if ( has_hvm_container_vcpu(v) )
+    if ( is_hvm_vcpu(v) )
     {
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
         if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
@@ -539,7 +539,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( has_hvm_container_vcpu(current) &&
+        if ( is_hvm_vcpu(current) &&
              cpu_has_vmx_msr_bitmap )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
@@ -612,9 +612,8 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
             return -EINVAL;
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
         {
-            if ( !(has_hvm_container_vcpu(v)
-                   ? is_canonical_address(msr_content)
-                   : __addr_ok(msr_content)) )
+            if ( !(is_hvm_vcpu(v) ? is_canonical_address(msr_content)
+                                  : __addr_ok(msr_content)) )
             {
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
@@ -635,7 +634,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
         if ( msr_content & fixed_ctrl_mask )
             return -EINVAL;
 
-        if ( has_hvm_container_vcpu(v) )
+        if ( is_hvm_vcpu(v) )
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                                &core2_vpmu_cxt->global_ctrl);
         else
@@ -704,7 +703,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
             if ( blocked )
                 return -EINVAL;
 
-            if ( has_hvm_container_vcpu(v) )
+            if ( is_hvm_vcpu(v) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                                    &core2_vpmu_cxt->global_ctrl);
             else
@@ -723,7 +722,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
         wrmsrl(msr, msr_content);
     else
     {
-        if ( has_hvm_container_vcpu(v) )
+        if ( is_hvm_vcpu(v) )
             vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
         else
             wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -757,7 +756,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( has_hvm_container_vcpu(v) )
+            if ( is_hvm_vcpu(v) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
@@ -858,7 +857,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
     vpmu->context = NULL;
     xfree(vpmu->priv_context);
     vpmu->priv_context = NULL;
-    if ( has_hvm_container_vcpu(v) && cpu_has_vmx_msr_bitmap )
+    if ( is_hvm_vcpu(v) && cpu_has_vmx_msr_bitmap )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownership(PMU_OWNER_HVM);
     vpmu_clear(vpmu);
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 80e1edd..79979bc 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -959,7 +959,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         break;
 
     case 0x80000001:
-        if ( has_hvm_container_domain(d) )
+        if ( is_hvm_domain(d) )
         {
             /* Fast-forward MSR_APIC_BASE.EN. */
             if ( vlapic_hw_disabled(vcpu_vlapic(v)) )
diff --git a/xen/arch/x86/debug.c b/xen/arch/x86/debug.c
index 499574e..2070077 100644
--- a/xen/arch/x86/debug.c
+++ b/xen/arch/x86/debug.c
@@ -168,7 +168,7 @@ unsigned int dbg_rw_guest_mem(struct domain *dp, void * __user gaddr,
 
         pagecnt = min_t(long, PAGE_SIZE - (addr & ~PAGE_MASK), len);
 
-        mfn = (has_hvm_container_domain(dp)
+        mfn = (is_hvm_domain(dp)
                ? dbg_hvm_va2mfn(addr, dp, toaddr, &gfn)
                : dbg_pv_va2mfn(addr, dp, pgd3));
         if ( mfn_eq(mfn, INVALID_MFN) )
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 75ded25..9a56408 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -187,7 +187,7 @@ void dump_pageframe_info(struct domain *d)
         spin_unlock(&d->page_alloc_lock);
     }
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
         p2m_pod_dump_data(d);
 
     spin_lock(&d->page_alloc_lock);
@@ -390,7 +390,7 @@ int vcpu_initialise(struct vcpu *v)
 
     spin_lock_init(&v->arch.vpmu.vpmu_lock);
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         rc = hvm_vcpu_initialise(v);
         goto done;
@@ -461,7 +461,7 @@ void vcpu_destroy(struct vcpu *v)
 
     vcpu_destroy_fpu(v);
 
-    if ( has_hvm_container_vcpu(v) )
+    if ( is_hvm_vcpu(v) )
         hvm_vcpu_destroy(v);
     else
         xfree(v->arch.pv_vcpu.trap_ctxt);
@@ -548,7 +548,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
         d->arch.emulation_flags = emflags;
     }
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         d->arch.hvm_domain.hap_enabled =
             hvm_funcs.hap_supported && (domcr_flags & DOMCRF_hap);
@@ -622,7 +622,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
     if ( (rc = psr_domain_init(d)) != 0 )
         goto fail;
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         if ( (rc = hvm_domain_initialise(d)) != 0 )
             goto fail;
@@ -681,7 +681,7 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags,
 
 void arch_domain_destroy(struct domain *d)
 {
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
         hvm_domain_destroy(d);
 
     xfree(d->arch.e820);
@@ -733,8 +733,8 @@ int arch_domain_soft_reset(struct domain *d)
     p2m_type_t p2mt;
     unsigned int i;
 
-    /* Soft reset is supported for HVM/PVH domains only. */
-    if ( !has_hvm_container_domain(d) )
+    /* Soft reset is supported for HVM domains only. */
+    if ( !is_hvm_domain(d) )
         return -EINVAL;
 
     hvm_domain_soft_reset(d);
@@ -924,7 +924,7 @@ int arch_set_info_guest(
     v->fpu_initialised = !!(flags & VGCF_I387_VALID);
 
     v->arch.flags &= ~TF_kernel_mode;
-    if ( (flags & VGCF_in_kernel) || has_hvm_container_domain(d)/*???*/ )
+    if ( (flags & VGCF_in_kernel) || is_hvm_domain(d)/*???*/ )
         v->arch.flags |= TF_kernel_mode;
 
     v->arch.vgc_flags = flags;
@@ -969,7 +969,7 @@ int arch_set_info_guest(
         }
     }
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         for ( i = 0; i < ARRAY_SIZE(v->arch.debugreg); ++i )
             v->arch.debugreg[i] = c(debugreg[i]);
@@ -1993,7 +1993,7 @@ static void __context_switch(void)
             if ( xcr0 != get_xcr0() && !set_xcr0(xcr0) )
                 BUG();
 
-            if ( cpu_has_xsaves && has_hvm_container_vcpu(n) )
+            if ( cpu_has_xsaves && is_hvm_vcpu(n) )
                 set_msr_xss(n->arch.hvm_vcpu.msr_xss);
         }
         vcpu_restore_fpu_eager(n);
@@ -2083,7 +2083,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
         if ( is_pv_domain(nextd) &&
              (is_idle_domain(prevd) ||
-              has_hvm_container_domain(prevd) ||
+              is_hvm_domain(prevd) ||
               is_pv_32bit_domain(prevd) != is_pv_32bit_domain(nextd)) )
         {
             uint64_t efer = read_efer();
@@ -2385,7 +2385,7 @@ int domain_relinquish_resources(struct domain *d)
 
     pit_deinit(d);
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
         hvm_domain_relinquish_resources(d);
 
     return 0;
@@ -2428,7 +2428,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
     if ( already_pending )
         return;
 
-    if ( has_hvm_container_vcpu(v) )
+    if ( is_hvm_vcpu(v) )
         hvm_assert_evtchn_irq(v);
     else
         vcpu_kick(v);
diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index af17d20..b7d920a 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -360,9 +360,8 @@ static unsigned long __init compute_dom0_nr_pages(
             avail -= max_pdx >> s;
     }
 
-    need_paging = has_hvm_container_domain(d)
-                  ? !iommu_hap_pt_share || !paging_mode_hap(d)
-                  : opt_dom0_shadow;
+    need_paging = is_hvm_domain(d) ? !iommu_hap_pt_share || !paging_mode_hap(d)
+                                   : opt_dom0_shadow;
     for ( ; ; need_paging = 0 )
     {
         nr_pages = dom0_nrpages;
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 15785e2..671b942 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1522,7 +1522,7 @@ void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
     for ( i = 0; i < ARRAY_SIZE(v->arch.debugreg); ++i )
         c(debugreg[i] = v->arch.debugreg[i]);
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         struct segment_register sreg;
 
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 2122c45..333c884 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -283,7 +283,7 @@ static int dm_op(domid_t domid,
     if ( rc )
         return rc;
 
-    if ( !has_hvm_container_domain(d) )
+    if ( !is_hvm_domain(d) )
         goto out;
 
     rc = xsm_dm_op(XSM_DM_PRIV, d);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 017bab3..e42bf5b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3054,7 +3054,7 @@ static enum hvm_copy_result __hvm_copy(
     char *p;
     int count, todo = size;
 
-    ASSERT(has_hvm_container_vcpu(v));
+    ASSERT(is_hvm_vcpu(v));
 
     /*
      * XXX Disable for 4.1.0: PV-on-HVM drivers will do grant-table ops
@@ -3995,7 +3995,7 @@ static int hvmop_set_param(
         return -ESRCH;
 
     rc = -EINVAL;
-    if ( !has_hvm_container_domain(d) )
+    if ( !is_hvm_domain(d) )
         goto out;
 
     rc = hvm_allow_set_param(d, &a);
@@ -4250,7 +4250,7 @@ static int hvmop_get_param(
         return -ESRCH;
 
     rc = -EINVAL;
-    if ( !has_hvm_container_domain(d) )
+    if ( !is_hvm_domain(d) )
         goto out;
 
     rc = hvm_allow_get_param(d, &a);
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 760544b..a774ed7 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -480,7 +480,7 @@ int hvm_local_events_need_delivery(struct vcpu *v)
 
 void arch_evtchn_inject(struct vcpu *v)
 {
-    if ( has_hvm_container_vcpu(v) )
+    if ( is_hvm_vcpu(v) )
         hvm_assert_evtchn_irq(v);
 }
 
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index c5c27cb..b721c63 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -540,7 +540,7 @@ int hvm_get_mem_pinned_cacheattr(struct domain *d, gfn_t gfn,
     uint64_t mask = ~(uint64_t)0 << order;
     int rc = -ENXIO;
 
-    ASSERT(has_hvm_container_domain(d));
+    ASSERT(is_hvm_domain(d));
 
     rcu_read_lock(&pinned_cacheattr_rcu_lock);
     list_for_each_entry_rcu ( range,
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 25f5756..a36692c 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -560,8 +560,7 @@ void msixtbl_init(struct domain *d)
 {
     struct hvm_io_handler *handler;
 
-    if ( !has_hvm_container_domain(d) || !has_vlapic(d) ||
-         msixtbl_initialised(d) )
+    if ( !is_hvm_domain(d) || !has_vlapic(d) || msixtbl_initialised(d) )
         return;
 
     INIT_LIST_HEAD(&d->arch.hvm_domain.msixtbl_list);
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 9f5cc35..fb4401c 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -768,7 +768,7 @@ void vmx_vmcs_exit(struct vcpu *v)
     {
         /* Don't confuse vmx_do_resume (for @v or @current!) */
         vmx_clear_vmcs(v);
-        if ( has_hvm_container_vcpu(current) )
+        if ( is_hvm_vcpu(current) )
             vmx_load_vmcs(current);
 
         spin_unlock(&v->arch.hvm_vmx.vmcs_lock);
@@ -1899,7 +1899,7 @@ static void vmcs_dump(unsigned char ch)
 
     for_each_domain ( d )
     {
-        if ( !has_hvm_container_domain(d) )
+        if ( !is_hvm_domain(d) )
             continue;
         printk("\n>>> Domain %d <<<\n", d->domain_id);
         for_each_vcpu ( d, v )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 87a5b82..7b9ae7e 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -240,7 +240,7 @@ static void vmx_pi_do_resume(struct vcpu *v)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_assign(struct domain *d)
 {
-    if ( !iommu_intpost || !has_hvm_container_domain(d) )
+    if ( !iommu_intpost || !is_hvm_domain(d) )
         return;
 
     ASSERT(!d->arch.hvm_domain.pi_ops.vcpu_block);
@@ -254,7 +254,7 @@ void vmx_pi_hooks_assign(struct domain *d)
 /* This function is called when pcidevs_lock is held */
 void vmx_pi_hooks_deassign(struct domain *d)
 {
-    if ( !iommu_intpost || !has_hvm_container_domain(d) )
+    if ( !iommu_intpost || !is_hvm_domain(d) )
         return;
 
     ASSERT(d->arch.hvm_domain.pi_ops.vcpu_block);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 12dabcf..92d0f7f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -438,7 +438,7 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
         return p2m_get_hostp2m(d)->max_mapped_pfn;
     /* NB. PV guests specify nr_pfns rather than max_pfn so we adjust here. */
     return (arch_get_max_pfn(d) ?: 1) - 1;
@@ -3184,7 +3184,7 @@ long do_mmuext_op(
             break;
         }
 
-        if ( has_hvm_container_domain(d) )
+        if ( is_hvm_domain(d) )
         {
             switch ( op.cmd )
             {
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 97e2780..9dab520 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -420,7 +420,7 @@ static int paging_log_dirty_op(struct domain *d,
          * Mark dirty all currently write-mapped pages on e.g. the
          * final iteration of a save operation.
          */
-        if ( has_hvm_container_domain(d) &&
+        if ( is_hvm_domain(d) &&
              (sc->mode & XEN_DOMCTL_SHADOW_LOGDIRTY_FINAL) )
             hvm_mapped_guest_frames_mark_dirty(d);
 
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 560a7fd..85098b0 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -314,7 +314,7 @@ const struct x86_emulate_ops *shadow_init_emulation(
     struct vcpu *v = current;
     unsigned long addr;
 
-    ASSERT(has_hvm_container_vcpu(v));
+    ASSERT(is_hvm_vcpu(v));
 
     memset(sh_ctxt, 0, sizeof(*sh_ctxt));
 
@@ -358,7 +358,7 @@ void shadow_continue_emulation(struct sh_emulate_ctxt *sh_ctxt,
     struct vcpu *v = current;
     unsigned long addr, diff;
 
-    ASSERT(has_hvm_container_vcpu(v));
+    ASSERT(is_hvm_vcpu(v));
 
     /*
      * We don't refetch the segment bases, because we don't emulate
@@ -1695,9 +1695,8 @@ void *sh_emulate_map_dest(struct vcpu *v, unsigned long vaddr,
 
 #ifndef NDEBUG
     /* We don't emulate user-mode writes to page tables. */
-    if ( has_hvm_container_domain(d)
-         ? hvm_get_cpl(v) == 3
-         : !guest_kernel_mode(v, guest_cpu_user_regs()) )
+    if ( is_hvm_domain(d) ? hvm_get_cpl(v) == 3
+                          : !guest_kernel_mode(v, guest_cpu_user_regs()) )
     {
         gdprintk(XENLOG_DEBUG, "User-mode write to pagetable reached "
                  "emulate_map_dest(). This should never happen!\n");
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index fa601da..1b5c662 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1723,7 +1723,7 @@ void __hwdom_init setup_io_bitmap(struct domain *d)
 {
     int rc;
 
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         bitmap_fill(d->arch.hvm_domain.io_bitmap, 0x10000);
         rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index b739dc8..6d544af 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -940,7 +940,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
     }
     else
     {
-        if ( has_hvm_container_domain(d) && hvm_tsc_scaling_supported )
+        if ( is_hvm_domain(d) && hvm_tsc_scaling_supported )
         {
             tsc_stamp            = hvm_scale_tsc(d, t->stamp.local_tsc);
             _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
@@ -1950,7 +1950,7 @@ void tsc_get_info(struct domain *d, uint32_t *tsc_mode,
                   uint64_t *elapsed_nsec, uint32_t *gtsc_khz,
                   uint32_t *incarnation)
 {
-    bool_t enable_tsc_scaling = has_hvm_container_domain(d) &&
+    bool_t enable_tsc_scaling = is_hvm_domain(d) &&
                                 hvm_tsc_scaling_supported && !d->arch.vtsc;
 
     *incarnation = d->arch.incarnation;
@@ -2030,7 +2030,7 @@ void tsc_set_info(struct domain *d,
          *  PV: guest has not migrated yet (and thus arch.tsc_khz == cpu_khz)
          */
         if ( tsc_mode == TSC_MODE_DEFAULT && host_tsc_is_safe() &&
-             (has_hvm_container_domain(d) ?
+             (is_hvm_domain(d) ?
               (d->arch.tsc_khz == cpu_khz ||
                hvm_get_tsc_scaling_ratio(d->arch.tsc_khz)) :
               incarnation == 0) )
@@ -2045,8 +2045,7 @@ void tsc_set_info(struct domain *d,
     case TSC_MODE_PVRDTSCP:
         d->arch.vtsc = !boot_cpu_has(X86_FEATURE_RDTSCP) ||
                        !host_tsc_is_safe();
-        enable_tsc_scaling = has_hvm_container_domain(d) &&
-                             !d->arch.vtsc &&
+        enable_tsc_scaling = is_hvm_domain(d) && !d->arch.vtsc &&
                              hvm_get_tsc_scaling_ratio(gtsc_khz ?: cpu_khz);
         d->arch.tsc_khz = (enable_tsc_scaling && gtsc_khz) ? gtsc_khz : cpu_khz;
         set_time_scale(&d->arch.vtsc_to_ns, d->arch.tsc_khz * 1000 );
@@ -2063,7 +2062,7 @@ void tsc_set_info(struct domain *d,
         break;
     }
     d->arch.incarnation = incarnation + 1;
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
     {
         if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
             d->arch.hvm_domain.tsc_scaling_ratio =
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 75c89eb..c1ca945 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -799,7 +799,7 @@ void do_trap(struct cpu_user_regs *regs)
     }
 
     if ( ((trapnr == TRAP_copro_error) || (trapnr == TRAP_simd_error)) &&
-         system_state >= SYS_STATE_active && has_hvm_container_vcpu(curr) &&
+         system_state >= SYS_STATE_active && is_hvm_vcpu(curr) &&
          curr->arch.hvm_vcpu.fpu_exception_callback )
     {
         curr->arch.hvm_vcpu.fpu_exception_callback(
@@ -976,7 +976,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         break;
 
     case 4: /* HVM hypervisor leaf. */
-        if ( !has_hvm_container_domain(d) || subleaf != 0 )
+        if ( !is_hvm_domain(d) || subleaf != 0 )
             break;
 
         if ( cpu_has_vmx_apic_reg_virt )
diff --git a/xen/arch/x86/x86_64/traps.c b/xen/arch/x86/x86_64/traps.c
index b66c24b..ad4d6c1 100644
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -88,7 +88,7 @@ void show_registers(const struct cpu_user_regs *regs)
     enum context context;
     struct vcpu *v = system_state >= SYS_STATE_smp_boot ? current : NULL;
 
-    if ( guest_mode(regs) && has_hvm_container_vcpu(v) )
+    if ( guest_mode(regs) && is_hvm_vcpu(v) )
     {
         struct segment_register sreg;
         context = CTXT_hvm_guest;
@@ -623,7 +623,7 @@ static void hypercall_page_initialise_ring3_kernel(void *hypercall_page)
 void hypercall_page_initialise(struct domain *d, void *hypercall_page)
 {
     memset(hypercall_page, 0xCC, PAGE_SIZE);
-    if ( has_hvm_container_domain(d) )
+    if ( is_hvm_domain(d) )
         hvm_hypercall_page_initialise(d, hypercall_page);
     else if ( !is_pv_32bit_domain(d) )
         hypercall_page_initialise_ring3_kernel(hypercall_page);
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 69cd6c5..750c663 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -55,7 +55,7 @@ int arch_iommu_populate_page_table(struct domain *d)
 
     while ( !rc && (page = page_list_remove_head(&d->page_list)) )
     {
-        if ( has_hvm_container_domain(d) ||
+        if ( is_hvm_domain(d) ||
             (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
         {
             unsigned long mfn = page_to_mfn(page);
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 488398a..8542d7e 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -16,7 +16,7 @@
 #define is_pv_32bit_domain(d)  ((d)->arch.is_32bit_pv)
 #define is_pv_32bit_vcpu(v)    (is_pv_32bit_domain((v)->domain))
 
-#define is_hvm_pv_evtchn_domain(d) (has_hvm_container_domain(d) && \
+#define is_hvm_pv_evtchn_domain(d) (is_hvm_domain(d) && \
         d->arch.hvm_domain.irq.callback_via_type == HVMIRQ_callback_vector)
 #define is_hvm_pv_evtchn_vcpu(v) (is_hvm_pv_evtchn_domain(v->domain))
 #define is_domain_direct_mapped(d) ((void)(d), 0)
diff --git a/xen/include/asm-x86/event.h b/xen/include/asm-x86/event.h
index d589d6f..a91599d 100644
--- a/xen/include/asm-x86/event.h
+++ b/xen/include/asm-x86/event.h
@@ -26,7 +26,7 @@ static inline int local_events_need_delivery(void)
 
     ASSERT(!is_idle_vcpu(v));
 
-    return (has_hvm_container_vcpu(v) ? hvm_local_events_need_delivery(v) :
+    return (is_hvm_vcpu(v) ? hvm_local_events_need_delivery(v) :
             (vcpu_info(v, evtchn_upcall_pending) &&
              !vcpu_info(v, evtchn_upcall_mask)));
 }
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 88edb3f..ca700c9 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -14,27 +14,27 @@
 
 /* Raw access functions: no type checking. */
 #define raw_copy_to_guest(dst, src, len)        \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
      copy_to_user((dst), (src), (len)))
 #define raw_copy_from_guest(dst, src, len)      \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
      copy_from_user((dst), (src), (len)))
 #define raw_clear_guest(dst,  len)              \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      clear_user_hvm((dst), (len)) :             \
      clear_user((dst), (len)))
 #define __raw_copy_to_guest(dst, src, len)      \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      copy_to_user_hvm((dst), (src), (len)) :    \
      __copy_to_user((dst), (src), (len)))
 #define __raw_copy_from_guest(dst, src, len)    \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      copy_from_user_hvm((dst), (src), (len)) :  \
      __copy_from_user((dst), (src), (len)))
 #define __raw_clear_guest(dst,  len)            \
-    (has_hvm_container_vcpu(current) ?                     \
+    (is_hvm_vcpu(current) ?                     \
      clear_user_hvm((dst), (len)) :             \
      clear_user((dst), (len)))
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 8c8c633..478a419 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -621,7 +621,7 @@ unsigned long hvm_cr4_guest_valid_bits(const struct vcpu *v, bool restore);
 #define arch_vcpu_block(v) ({                                   \
     struct vcpu *v_ = (v);                                      \
     struct domain *d_ = v_->domain;                             \
-    if ( has_hvm_container_domain(d_) &&                        \
+    if ( is_hvm_domain(d_) &&                               \
          (d_->arch.hvm_domain.pi_ops.vcpu_block) )          \
         d_->arch.hvm_domain.pi_ops.vcpu_block(v_);          \
 })
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cc11999..832352a 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -874,8 +874,6 @@ void watchdog_domain_destroy(struct domain *d);
 #define is_pv_vcpu(v)   (is_pv_domain((v)->domain))
 #define is_hvm_domain(d) ((d)->guest_type == guest_type_hvm)
 #define is_hvm_vcpu(v)   (is_hvm_domain(v->domain))
-#define has_hvm_container_domain(d) ((d)->guest_type != guest_type_pv)
-#define has_hvm_container_vcpu(v)   (has_hvm_container_domain((v)->domain))
 #define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
                            cpumask_weight((v)->cpu_hard_affinity) == 1)
 #ifdef CONFIG_HAS_PASSTHROUGH
diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
index a6cab00..13cf7bc 100644
--- a/xen/include/xen/tmem_xen.h
+++ b/xen/include/xen/tmem_xen.h
@@ -185,9 +185,8 @@ typedef XEN_GUEST_HANDLE_PARAM(char) tmem_cli_va_param_t;
 static inline int tmem_get_tmemop_from_client(tmem_op_t *op, tmem_cli_op_t uops)
 {
 #ifdef CONFIG_COMPAT
-    if ( has_hvm_container_vcpu(current) ?
-         hvm_guest_x86_mode(current) != 8 :
-         is_pv_32bit_vcpu(current) )
+    if ( is_hvm_vcpu(current) ? hvm_guest_x86_mode(current) != 8
+                              : is_pv_32bit_vcpu(current) )
     {
         int rc;
         enum XLAT_tmem_op_u u;
-- 
2.10.1 (Apple Git-78)


* [PATCH 3/3] x86/PVHv2: move pvh_setup_e820 together with the other pvh functions
  2017-02-24 15:13 [PATCH 0/3] x86: remove PVHv1 Roger Pau Monne
  2017-02-24 15:13 ` [PATCH 1/3] x86: remove PVHv1 code Roger Pau Monne
  2017-02-24 15:13 ` [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu} Roger Pau Monne
@ 2017-02-24 15:13 ` Roger Pau Monne
  2017-02-24 15:29   ` Jan Beulich
  2 siblings, 1 reply; 11+ messages in thread
From: Roger Pau Monne @ 2017-02-24 15:13 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Jan Beulich, Roger Pau Monne

This function is only used by the PVHv2 domain builder, so move it next to
the other PVH domain-build functions.

Just code motion, no functional change.
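
For context on the function being moved: pvh_setup_e820() trims every host
RAM region to whole pages and truncates the map once Dom0's page count is
reached.  A standalone sketch of the per-entry alignment, assuming 4KiB
pages (PAGE_ORDER_4K is 0, so PAGE_SIZE << PAGE_ORDER_4K is simply
PAGE_SIZE); the sample e820 entry is hypothetical:

    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)

    int main(void)
    {
        /* Hypothetical host e820 RAM entry; addr/size not page-aligned. */
        uint64_t addr = 0x9fc00, size = 0x60400;

        /* Round the start up and the end down, as pvh_setup_e820() does. */
        uint64_t start = (addr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
        uint64_t end   = (addr + size) & ~(PAGE_SIZE - 1);

        /* Prints start=0xa0000 end=0x100000 pages=96. */
        printf("start=%#" PRIx64 " end=%#" PRIx64 " pages=%" PRIu64 "\n",
               start, end, (end - start) >> PAGE_SHIFT);
        return 0;
    }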

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/domain_build.c | 134 ++++++++++++++++++++++----------------------
 1 file changed, 67 insertions(+), 67 deletions(-)

diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
index b7d920a..aa40c79 100644
--- a/xen/arch/x86/domain_build.c
+++ b/xen/arch/x86/domain_build.c
@@ -470,73 +470,6 @@ static void __init process_dom0_ioports_disable(struct domain *dom0)
     }
 }
 
-static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
-{
-    struct e820entry *entry, *entry_guest;
-    unsigned int i;
-    unsigned long pages, cur_pages = 0;
-    uint64_t start, end;
-
-    /*
-     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
-     */
-    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
-    if ( !d->arch.e820 )
-        panic("Unable to allocate memory for Dom0 e820 map");
-    entry_guest = d->arch.e820;
-
-    /* Clamp e820 memory map to match the memory assigned to Dom0 */
-    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
-    {
-        if ( entry->type != E820_RAM )
-        {
-            *entry_guest = *entry;
-            goto next;
-        }
-
-        if ( nr_pages == cur_pages )
-        {
-            /*
-             * We already have all the assigned memory,
-             * skip this entry
-             */
-            continue;
-        }
-
-        /*
-         * Make sure the start and length are aligned to PAGE_SIZE, because
-         * that's the minimum granularity of the 2nd stage translation. Since
-         * the p2m code uses PAGE_ORDER_4K internally, also use it here in
-         * order to prevent this code from getting out of sync.
-         */
-        start = ROUNDUP(entry->addr, PAGE_SIZE << PAGE_ORDER_4K);
-        end = (entry->addr + entry->size) &
-              ~((PAGE_SIZE << PAGE_ORDER_4K) - 1);
-        if ( start >= end )
-            continue;
-
-        entry_guest->type = E820_RAM;
-        entry_guest->addr = start;
-        entry_guest->size = end - start;
-        pages = PFN_DOWN(entry_guest->size);
-        if ( (cur_pages + pages) > nr_pages )
-        {
-            /* Truncate region */
-            entry_guest->size = (nr_pages - cur_pages) << PAGE_SHIFT;
-            cur_pages = nr_pages;
-        }
-        else
-        {
-            cur_pages += pages;
-        }
- next:
-        d->arch.nr_e820++;
-        entry_guest++;
-    }
-    ASSERT(cur_pages == nr_pages);
-    ASSERT(d->arch.nr_e820 <= e820.nr_map);
-}
-
 static __init void dom0_update_physmap(struct domain *d, unsigned long pfn,
                                    unsigned long mfn, unsigned long vphysmap_s)
 {
@@ -1685,6 +1618,73 @@ static void __init pvh_steal_low_ram(struct domain *d, unsigned long start,
     }
 }
 
+static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
+{
+    struct e820entry *entry, *entry_guest;
+    unsigned int i;
+    unsigned long pages, cur_pages = 0;
+    uint64_t start, end;
+
+    /*
+     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
+     */
+    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
+    if ( !d->arch.e820 )
+        panic("Unable to allocate memory for Dom0 e820 map");
+    entry_guest = d->arch.e820;
+
+    /* Clamp e820 memory map to match the memory assigned to Dom0 */
+    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
+    {
+        if ( entry->type != E820_RAM )
+        {
+            *entry_guest = *entry;
+            goto next;
+        }
+
+        if ( nr_pages == cur_pages )
+        {
+            /*
+             * We already have all the assigned memory,
+             * skip this entry
+             */
+            continue;
+        }
+
+        /*
+         * Make sure the start and length are aligned to PAGE_SIZE, because
+         * that's the minimum granularity of the 2nd stage translation. Since
+         * the p2m code uses PAGE_ORDER_4K internally, also use it here in
+         * order to prevent this code from getting out of sync.
+         */
+        start = ROUNDUP(entry->addr, PAGE_SIZE << PAGE_ORDER_4K);
+        end = (entry->addr + entry->size) &
+              ~((PAGE_SIZE << PAGE_ORDER_4K) - 1);
+        if ( start >= end )
+            continue;
+
+        entry_guest->type = E820_RAM;
+        entry_guest->addr = start;
+        entry_guest->size = end - start;
+        pages = PFN_DOWN(entry_guest->size);
+        if ( (cur_pages + pages) > nr_pages )
+        {
+            /* Truncate region */
+            entry_guest->size = (nr_pages - cur_pages) << PAGE_SHIFT;
+            cur_pages = nr_pages;
+        }
+        else
+        {
+            cur_pages += pages;
+        }
+ next:
+        d->arch.nr_e820++;
+        entry_guest++;
+    }
+    ASSERT(cur_pages == nr_pages);
+    ASSERT(d->arch.nr_e820 <= e820.nr_map);
+}
+
 static int __init pvh_setup_p2m(struct domain *d)
 {
     struct vcpu *v = d->vcpu[0];
-- 
2.10.1 (Apple Git-78)



* Re: [PATCH 3/3] x86/PVHv2: move pvh_setup_e820 together with the other pvh functions
  2017-02-24 15:13 ` [PATCH 3/3] x86/PVHv2: move pvh_setup_e820 together with the other pvh functions Roger Pau Monne
@ 2017-02-24 15:29   ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2017-02-24 15:29 UTC (permalink / raw)
  To: Roger Pau Monne; +Cc: Andrew Cooper, xen-devel

>>> On 24.02.17 at 16:13, <roger.pau@citrix.com> wrote:
> This function is only used by the PVHv2 domain build code, so move it together
> with the other PVH domain build functions.
> 
> Just code motion, no functional change.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



* Re: [PATCH 1/3] x86: remove PVHv1 code
  2017-02-24 15:13 ` [PATCH 1/3] x86: remove PVHv1 code Roger Pau Monne
@ 2017-02-24 15:32   ` Andrew Cooper
  2017-02-24 15:45     ` Jan Beulich
  2017-02-24 15:36   ` Andrew Cooper
  1 sibling, 1 reply; 11+ messages in thread
From: Andrew Cooper @ 2017-02-24 15:32 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Tamas K Lengyel, Wei Liu,
	Jan Beulich, Razvan Cojocaru, George Dunlap, Ian Jackson,
	Paul Durrant, Jun Nakajima

On 24/02/17 15:13, Roger Pau Monne wrote:
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 85cbb7c..65b7475 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -60,11 +60,8 @@ struct xen_domctl_createdomain {
>   /* Disable out-of-sync shadow page tables? */
>  #define _XEN_DOMCTL_CDF_oos_off       3
>  #define XEN_DOMCTL_CDF_oos_off        (1U<<_XEN_DOMCTL_CDF_oos_off)
> - /* Is this a PVH guest (as opposed to an HVM or PV guest)? */
> -#define _XEN_DOMCTL_CDF_pvh_guest     4
> -#define XEN_DOMCTL_CDF_pvh_guest      (1U<<_XEN_DOMCTL_CDF_pvh_guest)
>   /* Is this a xenstore domain? */
> -#define _XEN_DOMCTL_CDF_xs_domain     5
> +#define _XEN_DOMCTL_CDF_xs_domain     4
>  #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
>      uint32_t flags;
>      struct xen_arch_domainconfig config;
> @@ -97,14 +94,11 @@ struct xen_domctl_getdomaininfo {
>   /* Being debugged.  */
>  #define _XEN_DOMINF_debugged  6
>  #define XEN_DOMINF_debugged   (1U<<_XEN_DOMINF_debugged)
> -/* domain is PVH */
> -#define _XEN_DOMINF_pvh_guest 7
> -#define XEN_DOMINF_pvh_guest  (1U<<_XEN_DOMINF_pvh_guest)
>  /* domain is a xenstore domain */
> -#define _XEN_DOMINF_xs_domain 8
> +#define _XEN_DOMINF_xs_domain 7
>  #define XEN_DOMINF_xs_domain  (1U<<_XEN_DOMINF_xs_domain)
>  /* domain has hardware assisted paging */
> -#define _XEN_DOMINF_hap       9
> +#define _XEN_DOMINF_hap       8
>  #define XEN_DOMINF_hap        (1U<<_XEN_DOMINF_hap)
>   /* XEN_DOMINF_shutdown guest-supplied code.  */
>  #define XEN_DOMINF_shutdownmask 255
>

It would probably be better to leave holes in the bitfield space here,
given that it is in the public interface.
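
Something like this, i.e. keep the retired bit number reserved rather than
renumbering the later flags (sketch only):

     /* Bit 4 (_XEN_DOMCTL_CDF_pvh_guest) retired; deliberately left unused. */
     /* Is this a xenstore domain? */
    #define _XEN_DOMCTL_CDF_xs_domain     5
    #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)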

~Andrew


* Re: [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu}
  2017-02-24 15:13 ` [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu} Roger Pau Monne
@ 2017-02-24 15:35   ` Andrew Cooper
  2017-02-24 15:44   ` Tim Deegan
  2017-02-27  1:43   ` Tian, Kevin
  2 siblings, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2017-02-24 15:35 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Suravee Suthikulpanit,
	George Dunlap, Christoph Egger, Tim Deegan, Jan Beulich,
	Jun Nakajima, Boris Ostrovsky

On 24/02/17 15:13, Roger Pau Monne wrote:
> It is now useless since PVHv1 is removed and PVHv2 is an HVM domain from Xen's
> point of view.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 1/3] x86: remove PVHv1 code
  2017-02-24 15:13 ` [PATCH 1/3] x86: remove PVHv1 code Roger Pau Monne
  2017-02-24 15:32   ` Andrew Cooper
@ 2017-02-24 15:36   ` Andrew Cooper
  1 sibling, 0 replies; 11+ messages in thread
From: Andrew Cooper @ 2017-02-24 15:36 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Elena Ufimtseva, Kevin Tian, Tamas K Lengyel, Wei Liu,
	Jan Beulich, Razvan Cojocaru, George Dunlap, Ian Jackson,
	Paul Durrant, Jun Nakajima

On 24/02/17 15:13, Roger Pau Monne wrote:
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 66b7aba..fa601da 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -62,10 +62,6 @@ integer_param("maxcpus", max_cpus);
>  
>  unsigned long __read_mostly cr4_pv32_mask;
>  
> -/* Boot dom0 in pvh mode */
> -static bool_t __initdata opt_dom0pvh;
> -boolean_param("dom0pvh", opt_dom0pvh);
> -

Please edit docs/misc/xen-command-line.markdown as well.
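
Roughly the following removal, assuming the entry's current wording (quoted
here from memory, so the exact text in the tree may differ):

    --- a/docs/misc/xen-command-line.markdown
    +++ b/docs/misc/xen-command-line.markdown
    -### dom0pvh
    -> `= <boolean>`
    -
    -Flag that makes a dom0 boot in PVH mode.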

~Andrew


* Re: [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu}
  2017-02-24 15:13 ` [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu} Roger Pau Monne
  2017-02-24 15:35   ` Andrew Cooper
@ 2017-02-24 15:44   ` Tim Deegan
  2017-02-27  1:43   ` Tian, Kevin
  2 siblings, 0 replies; 11+ messages in thread
From: Tim Deegan @ 2017-02-24 15:44 UTC (permalink / raw)
  To: Roger Pau Monne
  Cc: Elena Ufimtseva, Kevin Tian, Jan Beulich, George Dunlap,
	Andrew Cooper, Christoph Egger, Jun Nakajima, xen-devel,
	Boris Ostrovsky, Suravee Suthikulpanit

At 15:13 +0000 on 24 Feb (1487949198), Roger Pau Monne wrote:
> It is now useless since PVHv1 is removed and PVHv2 is an HVM domain from Xen's
> point of view.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>


* Re: [PATCH 1/3] x86: remove PVHv1 code
  2017-02-24 15:32   ` Andrew Cooper
@ 2017-02-24 15:45     ` Jan Beulich
  0 siblings, 0 replies; 11+ messages in thread
From: Jan Beulich @ 2017-02-24 15:45 UTC (permalink / raw)
  To: Andrew Cooper, Roger Pau Monne
  Cc: Elena Ufimtseva, Kevin Tian, Tamas K Lengyel, Wei Liu,
	Razvan Cojocaru, George Dunlap, Ian Jackson, Paul Durrant,
	Jun Nakajima, xen-devel

>>> On 24.02.17 at 16:32, <andrew.cooper3@citrix.com> wrote:
> On 24/02/17 15:13, Roger Pau Monne wrote:
>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>> index 85cbb7c..65b7475 100644
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -60,11 +60,8 @@ struct xen_domctl_createdomain {
>>   /* Disable out-of-sync shadow page tables? */
>>  #define _XEN_DOMCTL_CDF_oos_off       3
>>  #define XEN_DOMCTL_CDF_oos_off        (1U<<_XEN_DOMCTL_CDF_oos_off)
>> - /* Is this a PVH guest (as opposed to an HVM or PV guest)? */
>> -#define _XEN_DOMCTL_CDF_pvh_guest     4
>> -#define XEN_DOMCTL_CDF_pvh_guest      (1U<<_XEN_DOMCTL_CDF_pvh_guest)
>>   /* Is this a xenstore domain? */
>> -#define _XEN_DOMCTL_CDF_xs_domain     5
>> +#define _XEN_DOMCTL_CDF_xs_domain     4
>>  #define XEN_DOMCTL_CDF_xs_domain      (1U<<_XEN_DOMCTL_CDF_xs_domain)
>>      uint32_t flags;
>>      struct xen_arch_domainconfig config;
>> @@ -97,14 +94,11 @@ struct xen_domctl_getdomaininfo {
>>   /* Being debugged.  */
>>  #define _XEN_DOMINF_debugged  6
>>  #define XEN_DOMINF_debugged   (1U<<_XEN_DOMINF_debugged)
>> -/* domain is PVH */
>> -#define _XEN_DOMINF_pvh_guest 7
>> -#define XEN_DOMINF_pvh_guest  (1U<<_XEN_DOMINF_pvh_guest)
>>  /* domain is a xenstore domain */
>> -#define _XEN_DOMINF_xs_domain 8
>> +#define _XEN_DOMINF_xs_domain 7
>>  #define XEN_DOMINF_xs_domain  (1U<<_XEN_DOMINF_xs_domain)
>>  /* domain has hardware assisted paging */
>> -#define _XEN_DOMINF_hap       9
>> +#define _XEN_DOMINF_hap       8
>>  #define XEN_DOMINF_hap        (1U<<_XEN_DOMINF_hap)
>>   /* XEN_DOMINF_shutdown guest-supplied code.  */
>>  #define XEN_DOMINF_shutdownmask 255
>>
> 
> It would probably be better to leave holes in the bitfield space here,
> given that it is in the public interface.

Or else the domctl interface version would need to be bumped.
Or perhaps it needs to be bumped in any case with such a removal.
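
Such a bump would be a one-liner along these lines (the version numbers here
are illustrative, not the current ones):

    --- a/xen/include/public/domctl.h
    +++ b/xen/include/public/domctl.h
    -#define XEN_DOMCTL_INTERFACE_VERSION 0x0000000c
    +#define XEN_DOMCTL_INTERFACE_VERSION 0x0000000d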

Jan



* Re: [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu}
  2017-02-24 15:13 ` [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu} Roger Pau Monne
  2017-02-24 15:35   ` Andrew Cooper
  2017-02-24 15:44   ` Tim Deegan
@ 2017-02-27  1:43   ` Tian, Kevin
  2 siblings, 0 replies; 11+ messages in thread
From: Tian, Kevin @ 2017-02-27  1:43 UTC (permalink / raw)
  To: Roger Pau Monne, xen-devel
  Cc: Elena Ufimtseva, Suravee Suthikulpanit, George Dunlap,
	Andrew Cooper, Christoph Egger, Tim Deegan, Jan Beulich,
	Nakajima, Jun, Boris Ostrovsky

> From: Roger Pau Monne [mailto:roger.pau@citrix.com]
> Sent: Friday, February 24, 2017 11:13 PM
> 
> It is now useless since PVHv1 is removed and PVHv2 is an HVM domain from Xen's
> point of view.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

end of thread

Thread overview: 11+ messages
2017-02-24 15:13 [PATCH 0/3] x86: remove PVHv1 Roger Pau Monne
2017-02-24 15:13 ` [PATCH 1/3] x86: remove PVHv1 code Roger Pau Monne
2017-02-24 15:32   ` Andrew Cooper
2017-02-24 15:45     ` Jan Beulich
2017-02-24 15:36   ` Andrew Cooper
2017-02-24 15:13 ` [PATCH 2/3] x86: remove has_hvm_container_{domain/vcpu} Roger Pau Monne
2017-02-24 15:35   ` Andrew Cooper
2017-02-24 15:44   ` Tim Deegan
2017-02-27  1:43   ` Tian, Kevin
2017-02-24 15:13 ` [PATCH 3/3] x86/PVHv2: move pvh_setup_e820 together with the other pvh functions Roger Pau Monne
2017-02-24 15:29   ` Jan Beulich
