xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
@ 2019-06-20  0:30 Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions Christopher Clark
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Ankur Arora, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Christopher Clark, Roger Pau Monné

This RFC patch series adds a new hypervisor interface to support running
a set of PV front-end device drivers within the dom0 of a guest Xen
running on Xen.

A practical deployment scenario is a system running PV guest VMs that use
unmodified Xen PV device drivers, on a guest Xen hypervisor with a dom0
that itself uses PV drivers, all within an HVM guest of a hosting Xen
hypervisor (e.g. from a cloud provider). Multiple PV guest VMs can reside
within a single cloud instance; guests can be live-migrated between
cloud instances that run nested Xen, and virtual machine introspection
of guests can be performed without requiring cloud provider support.
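
For reference, a rough sketch of the layering this enables (illustrative
only, not part of the patches):

    hosting Xen (e.g. a cloud provider's hypervisor)
      -> HVM guest: guest Xen plus its dom0
           dom0 runs PV front-end drivers that reach the hosting Xen's
           back-ends via the new nested_* hypercalls proxied by the guest
           Xen, alongside PV back-ends serving the nested guest VMs
             -> nested PV guest VMs: unmodified Xen PV drivers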

The name "The Xen Blanket" was given by researchers from IBM and Cornell
when the original work was published at the ACM EuroSys 2012 conference:
    http://www1.unine.ch/eurosys2012/program/conference.html
    https://dl.acm.org/citation.cfm?doid=2168836.2168849
This patch series is a reimplementation of that architecture on modern Xen,
by Star Lab.

A patch to the Linux kernel adding device drivers that use this blanket
interface is at:
    https://github.com/starlab-io/xenblanket-linux
(This is an example that enables operation and testing of a Xen Blanket nested
system; further work would be necessary before Linux upstreaming.)
Other relevant, current Linux work is under way here:
    https://lkml.org/lkml/2019/4/8/67
    https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg00743.html

thanks,

Christopher

Christopher Clark (9):
  x86/guest: code movement to separate Xen detection from guest
    functions
  x86: Introduce Xen detection as separate logic from Xen Guest support.
  x86/nested: add nested_xen_version hypercall
  XSM: Add hook for nested xen version op; revises non-nested version op
  x86/nested, xsm: add nested_memory_op hypercall
  x86/nested, xsm: add nested_hvm_op hypercall
  x86/nested, xsm: add nested_grant_table_op hypercall
  x86/nested, xsm: add nested_event_channel_op hypercall
  x86/nested, xsm: add nested_schedop_shutdown hypercall

 tools/flask/policy/modules/dom0.te           |  14 +-
 tools/flask/policy/modules/guest_features.te |   5 +-
 tools/flask/policy/modules/xen.te            |   3 +
 tools/flask/policy/policy/initial_sids       |   3 +
 xen/arch/x86/Kconfig                         |  33 +-
 xen/arch/x86/Makefile                        |   2 +-
 xen/arch/x86/apic.c                          |   4 +-
 xen/arch/x86/guest/Makefile                  |   4 +
 xen/arch/x86/guest/hypercall_page.S          |   6 +
 xen/arch/x86/guest/xen-guest.c               | 311 ++++++++++++++++
 xen/arch/x86/guest/xen-nested.c              | 350 +++++++++++++++++++
 xen/arch/x86/guest/xen.c                     | 264 +-------------
 xen/arch/x86/hypercall.c                     |   8 +
 xen/arch/x86/pv/hypercall.c                  |   8 +
 xen/arch/x86/setup.c                         |   3 +
 xen/include/asm-x86/guest/hypercall.h        |   7 +-
 xen/include/asm-x86/guest/xen.h              |  36 +-
 xen/include/public/xen.h                     |   6 +
 xen/include/xen/hypercall.h                  |  33 ++
 xen/include/xsm/dummy.h                      |  48 ++-
 xen/include/xsm/xsm.h                        |  49 +++
 xen/xsm/dummy.c                              |   8 +
 xen/xsm/flask/hooks.c                        | 133 ++++++-
 xen/xsm/flask/policy/access_vectors          |  26 ++
 xen/xsm/flask/policy/initial_sids            |   1 +
 xen/xsm/flask/policy/security_classes        |   1 +
 26 files changed, 1086 insertions(+), 280 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-guest.c
 create mode 100644 xen/arch/x86/guest/xen-nested.c

-- 
2.17.1


* [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 2/9] x86: Introduce Xen detection as separate logic from Xen Guest support Christopher Clark
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Wei Liu, Andrew Cooper, Rich Persaud, Jan Beulich,
	Roger Pau Monné

Move some logic from: xen/arch/x86/guest/xen.c
into a new file: xen/arch/x86/guest/xen-guest.c

xen.c then contains the functions for basic Xen detection, and
xen-guest.c implements the behaviour changes intended when Xen is running
as a guest.

Since CONFIG_XEN_GUEST must currently be defined for any of this code to
be included, making xen-guest.o conditional upon it here works correctly
and avoids further changes to it in later patches of the series.

No functional change intended.

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 xen/arch/x86/guest/Makefile    |   1 +
 xen/arch/x86/guest/xen-guest.c | 301 +++++++++++++++++++++++++++++++++
 xen/arch/x86/guest/xen.c       | 254 ----------------------------
 3 files changed, 302 insertions(+), 254 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-guest.c

diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index 26fb4b1007..6ddaa3748f 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,4 +1,5 @@
 obj-y += hypercall_page.o
 obj-y += xen.o
+obj-$(CONFIG_XEN_GUEST) += xen-guest.o
 
 obj-bin-$(CONFIG_PVH_GUEST) += pvh-boot.init.o
diff --git a/xen/arch/x86/guest/xen-guest.c b/xen/arch/x86/guest/xen-guest.c
new file mode 100644
index 0000000000..65596ab1b1
--- /dev/null
+++ b/xen/arch/x86/guest/xen-guest.c
@@ -0,0 +1,301 @@
+/******************************************************************************
+ * arch/x86/guest/xen-guest.c
+ *
+ * Support for running a single VM with Xen as a guest.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2017 Citrix Systems Ltd.
+ */
+#include <xen/event.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <xen/pfn.h>
+#include <xen/rangeset.h>
+#include <xen/types.h>
+#include <xen/pv_console.h>
+
+#include <asm/apic.h>
+#include <asm/e820.h>
+#include <asm/guest.h>
+#include <asm/msr.h>
+#include <asm/processor.h>
+
+#include <public/arch-x86/cpuid.h>
+#include <public/hvm/params.h>
+
+bool __read_mostly xen_guest;
+
+static struct rangeset *mem;
+
+DEFINE_PER_CPU(unsigned int, vcpu_id);
+
+static struct vcpu_info *vcpu_info;
+static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
+DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
+
+static void map_shared_info(void)
+{
+    mfn_t mfn;
+    struct xen_add_to_physmap xatp = {
+        .domid = DOMID_SELF,
+        .space = XENMAPSPACE_shared_info,
+    };
+    unsigned int i;
+    unsigned long rc;
+
+    if ( hypervisor_alloc_unused_page(&mfn) )
+        panic("unable to reserve shared info memory page\n");
+
+    xatp.gpfn = mfn_x(mfn);
+    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
+    if ( rc )
+        panic("failed to map shared_info page: %ld\n", rc);
+
+    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
+
+    /* Mask all upcalls */
+    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
+        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
+}
+
+static int map_vcpuinfo(void)
+{
+    unsigned int vcpu = this_cpu(vcpu_id);
+    struct vcpu_register_vcpu_info info;
+    int rc;
+
+    if ( !vcpu_info )
+    {
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+        return 0;
+    }
+
+    if ( test_bit(vcpu, vcpu_info_mapped) )
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        return 0;
+    }
+
+    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
+    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
+    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
+    if ( rc )
+    {
+        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
+        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
+    }
+    else
+    {
+        this_cpu(vcpu_info) = &vcpu_info[vcpu];
+        set_bit(vcpu, vcpu_info_mapped);
+    }
+
+    return rc;
+}
+
+static void set_vcpu_id(void)
+{
+    uint32_t cpuid_base, eax, ebx, ecx, edx;
+
+    cpuid_base = hypervisor_cpuid_base();
+
+    ASSERT(cpuid_base);
+
+    /* Fetch vcpu id from cpuid. */
+    cpuid(cpuid_base + 4, &eax, &ebx, &ecx, &edx);
+    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
+        this_cpu(vcpu_id) = ebx;
+    else
+        this_cpu(vcpu_id) = smp_processor_id();
+}
+
+static void __init init_memmap(void)
+{
+    unsigned int i;
+
+    mem = rangeset_new(NULL, "host memory map", 0);
+    if ( !mem )
+        panic("failed to allocate PFN usage rangeset\n");
+
+    /*
+     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
+     * Xen doesn't know the position of possible MMIO holes, so at least try to
+     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
+     * discussion and improvements.
+     */
+    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
+                                          PFN_DOWN(GB(4) - 1))) )
+        panic("unable to add RAM to in-use PFN rangeset\n");
+
+    for ( i = 0; i < e820.nr_map; i++ )
+    {
+        struct e820entry *e = &e820.map[i];
+
+        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
+                                PFN_UP(e->addr + e->size - 1)) )
+            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
+                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
+    }
+}
+
+static void xen_evtchn_upcall(struct cpu_user_regs *regs)
+{
+    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
+    unsigned long pending;
+
+    vcpu_info->evtchn_upcall_pending = 0;
+    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
+
+    while ( pending )
+    {
+        unsigned int l1 = find_first_set_bit(pending);
+        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
+
+        __clear_bit(l1, &pending);
+        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
+        while ( evtchn )
+        {
+            unsigned int port = find_first_set_bit(evtchn);
+
+            __clear_bit(port, &evtchn);
+            port += l1 * BITS_PER_LONG;
+
+            if ( pv_console && port == pv_console_evtchn() )
+                pv_console_rx(regs);
+            else if ( pv_shim )
+                pv_shim_inject_evtchn(port);
+        }
+    }
+
+    ack_APIC_irq();
+}
+
+static void init_evtchn(void)
+{
+    static uint8_t evtchn_upcall_vector;
+    int rc;
+
+    if ( !evtchn_upcall_vector )
+        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
+
+    ASSERT(evtchn_upcall_vector);
+
+    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
+                                                evtchn_upcall_vector);
+    if ( rc )
+        panic("Unable to set evtchn upcall vector: %d\n", rc);
+
+    /* Trick toolstack to think we are enlightened */
+    {
+        struct xen_hvm_param a = {
+            .domid = DOMID_SELF,
+            .index = HVM_PARAM_CALLBACK_IRQ,
+            .value = 1,
+        };
+
+        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
+    }
+}
+
+void __init hypervisor_setup(void)
+{
+    init_memmap();
+
+    map_shared_info();
+
+    set_vcpu_id();
+    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
+    if ( map_vcpuinfo() )
+    {
+        xfree(vcpu_info);
+        vcpu_info = NULL;
+    }
+    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+    {
+        unsigned int i;
+
+        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
+            __cpumask_clear_cpu(i, &cpu_present_map);
+        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
+        printk(XENLOG_WARNING
+               "unable to map vCPU info, limiting vCPUs to: %u\n",
+               XEN_LEGACY_MAX_VCPUS);
+    }
+
+    init_evtchn();
+}
+
+void hypervisor_ap_setup(void)
+{
+    set_vcpu_id();
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+int hypervisor_alloc_unused_page(mfn_t *mfn)
+{
+    unsigned long m;
+    int rc;
+
+    rc = rangeset_claim_range(mem, 1, &m);
+    if ( !rc )
+        *mfn = _mfn(m);
+
+    return rc;
+}
+
+int hypervisor_free_unused_page(mfn_t mfn)
+{
+    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
+}
+
+static void ap_resume(void *unused)
+{
+    map_vcpuinfo();
+    init_evtchn();
+}
+
+void hypervisor_resume(void)
+{
+    /* Reset shared info page. */
+    map_shared_info();
+
+    /*
+     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
+     * area again. On failure to map (when it was previously mapped) panic
+     * since it's impossible to safely shut down running guest vCPUs in order
+     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
+     */
+    bitmap_zero(vcpu_info_mapped, NR_CPUS);
+    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
+        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
+
+    /* Setup event channel upcall vector. */
+    init_evtchn();
+    smp_call_function(ap_resume, NULL, 1);
+
+    if ( pv_console )
+        pv_console_init();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index 7b7a5badab..90d464bdbd 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -22,9 +22,7 @@
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
-#include <xen/rangeset.h>
 #include <xen/types.h>
-#include <xen/pv_console.h>
 
 #include <asm/apic.h>
 #include <asm/e820.h>
@@ -35,17 +33,8 @@
 #include <public/arch-x86/cpuid.h>
 #include <public/hvm/params.h>
 
-bool __read_mostly xen_guest;
-
 static __read_mostly uint32_t xen_cpuid_base;
 extern char hypercall_page[];
-static struct rangeset *mem;
-
-DEFINE_PER_CPU(unsigned int, vcpu_id);
-
-static struct vcpu_info *vcpu_info;
-static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
-DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
 
 static void __init find_xen_leaves(void)
 {
@@ -87,254 +76,11 @@ void __init probe_hypervisor(void)
     xen_guest = true;
 }
 
-static void map_shared_info(void)
-{
-    mfn_t mfn;
-    struct xen_add_to_physmap xatp = {
-        .domid = DOMID_SELF,
-        .space = XENMAPSPACE_shared_info,
-    };
-    unsigned int i;
-    unsigned long rc;
-
-    if ( hypervisor_alloc_unused_page(&mfn) )
-        panic("unable to reserve shared info memory page\n");
-
-    xatp.gpfn = mfn_x(mfn);
-    rc = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
-    if ( rc )
-        panic("failed to map shared_info page: %ld\n", rc);
-
-    set_fixmap(FIX_XEN_SHARED_INFO, mfn_x(mfn) << PAGE_SHIFT);
-
-    /* Mask all upcalls */
-    for ( i = 0; i < ARRAY_SIZE(XEN_shared_info->evtchn_mask); i++ )
-        write_atomic(&XEN_shared_info->evtchn_mask[i], ~0ul);
-}
-
-static int map_vcpuinfo(void)
-{
-    unsigned int vcpu = this_cpu(vcpu_id);
-    struct vcpu_register_vcpu_info info;
-    int rc;
-
-    if ( !vcpu_info )
-    {
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-        return 0;
-    }
-
-    if ( test_bit(vcpu, vcpu_info_mapped) )
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        return 0;
-    }
-
-    info.mfn = virt_to_mfn(&vcpu_info[vcpu]);
-    info.offset = (unsigned long)&vcpu_info[vcpu] & ~PAGE_MASK;
-    rc = xen_hypercall_vcpu_op(VCPUOP_register_vcpu_info, vcpu, &info);
-    if ( rc )
-    {
-        BUG_ON(vcpu >= XEN_LEGACY_MAX_VCPUS);
-        this_cpu(vcpu_info) = &XEN_shared_info->vcpu_info[vcpu];
-    }
-    else
-    {
-        this_cpu(vcpu_info) = &vcpu_info[vcpu];
-        set_bit(vcpu, vcpu_info_mapped);
-    }
-
-    return rc;
-}
-
-static void set_vcpu_id(void)
-{
-    uint32_t eax, ebx, ecx, edx;
-
-    ASSERT(xen_cpuid_base);
-
-    /* Fetch vcpu id from cpuid. */
-    cpuid(xen_cpuid_base + 4, &eax, &ebx, &ecx, &edx);
-    if ( eax & XEN_HVM_CPUID_VCPU_ID_PRESENT )
-        this_cpu(vcpu_id) = ebx;
-    else
-        this_cpu(vcpu_id) = smp_processor_id();
-}
-
-static void __init init_memmap(void)
-{
-    unsigned int i;
-
-    mem = rangeset_new(NULL, "host memory map", 0);
-    if ( !mem )
-        panic("failed to allocate PFN usage rangeset\n");
-
-    /*
-     * Mark up to the last memory page (or 4GiB) as RAM. This is done because
-     * Xen doesn't know the position of possible MMIO holes, so at least try to
-     * avoid the know MMIO hole below 4GiB. Note that this is subject to future
-     * discussion and improvements.
-     */
-    if ( rangeset_add_range(mem, 0, max_t(unsigned long, max_page - 1,
-                                          PFN_DOWN(GB(4) - 1))) )
-        panic("unable to add RAM to in-use PFN rangeset\n");
-
-    for ( i = 0; i < e820.nr_map; i++ )
-    {
-        struct e820entry *e = &e820.map[i];
-
-        if ( rangeset_add_range(mem, PFN_DOWN(e->addr),
-                                PFN_UP(e->addr + e->size - 1)) )
-            panic("unable to add range [%#lx, %#lx] to in-use PFN rangeset\n",
-                  PFN_DOWN(e->addr), PFN_UP(e->addr + e->size - 1));
-    }
-}
-
-static void xen_evtchn_upcall(struct cpu_user_regs *regs)
-{
-    struct vcpu_info *vcpu_info = this_cpu(vcpu_info);
-    unsigned long pending;
-
-    vcpu_info->evtchn_upcall_pending = 0;
-    pending = xchg(&vcpu_info->evtchn_pending_sel, 0);
-
-    while ( pending )
-    {
-        unsigned int l1 = find_first_set_bit(pending);
-        unsigned long evtchn = xchg(&XEN_shared_info->evtchn_pending[l1], 0);
-
-        __clear_bit(l1, &pending);
-        evtchn &= ~XEN_shared_info->evtchn_mask[l1];
-        while ( evtchn )
-        {
-            unsigned int port = find_first_set_bit(evtchn);
-
-            __clear_bit(port, &evtchn);
-            port += l1 * BITS_PER_LONG;
-
-            if ( pv_console && port == pv_console_evtchn() )
-                pv_console_rx(regs);
-            else if ( pv_shim )
-                pv_shim_inject_evtchn(port);
-        }
-    }
-
-    ack_APIC_irq();
-}
-
-static void init_evtchn(void)
-{
-    static uint8_t evtchn_upcall_vector;
-    int rc;
-
-    if ( !evtchn_upcall_vector )
-        alloc_direct_apic_vector(&evtchn_upcall_vector, xen_evtchn_upcall);
-
-    ASSERT(evtchn_upcall_vector);
-
-    rc = xen_hypercall_set_evtchn_upcall_vector(this_cpu(vcpu_id),
-                                                evtchn_upcall_vector);
-    if ( rc )
-        panic("Unable to set evtchn upcall vector: %d\n", rc);
-
-    /* Trick toolstack to think we are enlightened */
-    {
-        struct xen_hvm_param a = {
-            .domid = DOMID_SELF,
-            .index = HVM_PARAM_CALLBACK_IRQ,
-            .value = 1,
-        };
-
-        BUG_ON(xen_hypercall_hvm_op(HVMOP_set_param, &a));
-    }
-}
-
-void __init hypervisor_setup(void)
-{
-    init_memmap();
-
-    map_shared_info();
-
-    set_vcpu_id();
-    vcpu_info = xzalloc_array(struct vcpu_info, nr_cpu_ids);
-    if ( map_vcpuinfo() )
-    {
-        xfree(vcpu_info);
-        vcpu_info = NULL;
-    }
-    if ( !vcpu_info && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-    {
-        unsigned int i;
-
-        for ( i = XEN_LEGACY_MAX_VCPUS; i < nr_cpu_ids; i++ )
-            __cpumask_clear_cpu(i, &cpu_present_map);
-        nr_cpu_ids = XEN_LEGACY_MAX_VCPUS;
-        printk(XENLOG_WARNING
-               "unable to map vCPU info, limiting vCPUs to: %u\n",
-               XEN_LEGACY_MAX_VCPUS);
-    }
-
-    init_evtchn();
-}
-
-void hypervisor_ap_setup(void)
-{
-    set_vcpu_id();
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-int hypervisor_alloc_unused_page(mfn_t *mfn)
-{
-    unsigned long m;
-    int rc;
-
-    rc = rangeset_claim_range(mem, 1, &m);
-    if ( !rc )
-        *mfn = _mfn(m);
-
-    return rc;
-}
-
-int hypervisor_free_unused_page(mfn_t mfn)
-{
-    return rangeset_remove_range(mem, mfn_x(mfn), mfn_x(mfn));
-}
-
 uint32_t hypervisor_cpuid_base(void)
 {
     return xen_cpuid_base;
 }
 
-static void ap_resume(void *unused)
-{
-    map_vcpuinfo();
-    init_evtchn();
-}
-
-void hypervisor_resume(void)
-{
-    /* Reset shared info page. */
-    map_shared_info();
-
-    /*
-     * Reset vcpu_info. Just clean the mapped bitmap and try to map the vcpu
-     * area again. On failure to map (when it was previously mapped) panic
-     * since it's impossible to safely shut down running guest vCPUs in order
-     * to meet the new XEN_LEGACY_MAX_VCPUS requirement.
-     */
-    bitmap_zero(vcpu_info_mapped, NR_CPUS);
-    if ( map_vcpuinfo() && nr_cpu_ids > XEN_LEGACY_MAX_VCPUS )
-        panic("unable to remap vCPU info and vCPUs > legacy limit\n");
-
-    /* Setup event channel upcall vector. */
-    init_evtchn();
-    smp_call_function(ap_resume, NULL, 1);
-
-    if ( pv_console )
-        pv_console_init();
-}
-
 /*
  * Local variables:
  * mode: C
-- 
2.17.1


* [Xen-devel] [RFC 2/9] x86: Introduce Xen detection as separate logic from Xen Guest support.
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 3/9] x86/nested: add nested_xen_version hypercall Christopher Clark
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Wei Liu, Andrew Cooper, Rich Persaud, Jan Beulich,
	Roger Pau Monné

Add Kconfig option XEN_DETECT for:
  "Support for Xen detecting when it is running under Xen".
If running under Xen is detected, a boot message will report the
hypervisor version obtained via CPUID.

Update the XEN_GUEST Kconfig option text to reflect its current
purpose:
  "Common PVH_GUEST and PV_SHIM logic for Xen as a Xen-aware guest".

Update calibrate_APIC_clock to use the Xen-specific tick wait if nested
Xen is detected, even if not operating as the PV shim or booted as PVH.

This work is a precursor to adding the interface for support of
PV drivers on nested Xen.
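
For reference, both the detection and the boot message come from the Xen
CPUID leaves; a simplified sketch of the mechanism (error handling
omitted, and not the literal code in this patch) is:

    uint32_t base, eax, ebx, ecx, edx;
    char sig[13];

    /* Scan the hypervisor leaf range for the "XenVMMXenVMM" signature. */
    for ( base = 0x40000000; base < 0x40010000; base += 0x100 )
    {
        cpuid(base, &eax, &ebx, &ecx, &edx);
        memcpy(sig, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        sig[12] = '\0';
        if ( !strcmp(sig, "XenVMMXenVMM") && (eax - base) >= 2 )
            break;              /* Xen detected; base is its first leaf */
    }

    /* Leaf base+1 reports the version: major in bits 31-16, minor in 15-0. */
    cpuid(base + 1, &eax, &ebx, &ecx, &edx);
    printk("Nested Xen version %u.%u\n", eax >> 16, eax & 0xffff);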

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 xen/arch/x86/Kconfig            | 11 ++++++++++-
 xen/arch/x86/Makefile           |  2 +-
 xen/arch/x86/apic.c             |  4 ++--
 xen/arch/x86/guest/Makefile     |  2 +-
 xen/arch/x86/guest/xen-guest.c  | 10 ++++++++++
 xen/arch/x86/guest/xen.c        | 23 ++++++++++++++++++-----
 xen/arch/x86/setup.c            |  3 +++
 xen/include/asm-x86/guest/xen.h | 26 +++++++++++++++++++++-----
 8 files changed, 66 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index f502d765ba..31e5ffd2f2 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -161,11 +161,20 @@ config XEN_ALIGN_2M
 
 endchoice
 
+config XEN_DETECT
+	def_bool y
+	prompt "Xen Detection"
+	---help---
+	  Support for Xen detecting when it is running under Xen.
+
+	  If unsure, say Y.
+
 config XEN_GUEST
 	def_bool n
 	prompt "Xen Guest"
+	depends on XEN_DETECT
 	---help---
-	  Support for Xen detecting when it is running under Xen.
+	  Common PVH_GUEST and PV_SHIM logic for Xen as a Xen-aware guest.
 
 	  If unsure, say N.
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 8a8d8f060f..763077b0a3 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -1,7 +1,7 @@
 subdir-y += acpi
 subdir-y += cpu
 subdir-y += genapic
-subdir-$(CONFIG_XEN_GUEST) += guest
+subdir-$(CONFIG_XEN_DETECT) += guest
 subdir-$(CONFIG_HVM) += hvm
 subdir-y += mm
 subdir-$(CONFIG_XENOPROF) += oprofile
diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 9c3c998d34..5949a95d58 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -1247,7 +1247,7 @@ static int __init calibrate_APIC_clock(void)
      */
     __setup_APIC_LVTT(1000000000);
 
-    if ( !xen_guest )
+    if ( !xen_detected )
         /*
          * The timer chip counts down to zero. Let's wait
          * for a wraparound to start exact measurement:
@@ -1267,7 +1267,7 @@ static int __init calibrate_APIC_clock(void)
      * Let's wait LOOPS ticks:
      */
     for (i = 0; i < LOOPS; i++)
-        if ( !xen_guest )
+        if ( !xen_detected )
             wait_8254_wraparound();
         else
             wait_tick_pvh();
diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index 6ddaa3748f..d3a7844e61 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,4 +1,4 @@
-obj-y += hypercall_page.o
+obj-$(CONFIG_XEN_GUEST) += hypercall_page.o
 obj-y += xen.o
 obj-$(CONFIG_XEN_GUEST) += xen-guest.o
 
diff --git a/xen/arch/x86/guest/xen-guest.c b/xen/arch/x86/guest/xen-guest.c
index 65596ab1b1..b6d89e02a3 100644
--- a/xen/arch/x86/guest/xen-guest.c
+++ b/xen/arch/x86/guest/xen-guest.c
@@ -35,6 +35,8 @@
 #include <public/arch-x86/cpuid.h>
 #include <public/hvm/params.h>
 
+extern char hypercall_page[];
+
 bool __read_mostly xen_guest;
 
 static struct rangeset *mem;
@@ -45,6 +47,14 @@ static struct vcpu_info *vcpu_info;
 static unsigned long vcpu_info_mapped[BITS_TO_LONGS(NR_CPUS)];
 DEFINE_PER_CPU(struct vcpu_info *, vcpu_info);
 
+void xen_guest_enable(void)
+{
+    /* Fill the hypercall page. */
+    wrmsrl(cpuid_ebx(hypervisor_cpuid_base() + 2), __pa(hypercall_page));
+
+    xen_guest = true;
+}
+
 static void map_shared_info(void)
 {
     mfn_t mfn;
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index 90d464bdbd..b0b603a11a 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -33,8 +33,10 @@
 #include <public/arch-x86/cpuid.h>
 #include <public/hvm/params.h>
 
+/* xen_detected: Xen running on Xen detected */
+bool __read_mostly xen_detected;
+
 static __read_mostly uint32_t xen_cpuid_base;
-extern char hypercall_page[];
 
 static void __init find_xen_leaves(void)
 {
@@ -58,7 +60,7 @@ static void __init find_xen_leaves(void)
 
 void __init probe_hypervisor(void)
 {
-    if ( xen_guest )
+    if ( xen_detected )
         return;
 
     /* Too early to use cpu_has_hypervisor */
@@ -70,10 +72,21 @@ void __init probe_hypervisor(void)
     if ( !xen_cpuid_base )
         return;
 
-    /* Fill the hypercall page. */
-    wrmsrl(cpuid_ebx(xen_cpuid_base + 2), __pa(hypercall_page));
+    xen_detected = true;
+
+    xen_guest_enable();
+}
+
+void __init hypervisor_print_info(void)
+{
+    uint32_t eax, ebx, ecx, edx;
+    unsigned int major, minor;
+
+    cpuid(xen_cpuid_base + 1, &eax, &ebx, &ecx, &edx);
 
-    xen_guest = true;
+    major = eax >> 16;
+    minor = eax & 0xffff;
+    printk("Nested Xen version %u.%u.\n", major, minor);
 }
 
 uint32_t hypervisor_cpuid_base(void)
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index d2011910fa..58f499edaf 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -774,6 +774,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     ehci_dbgp_init();
     console_init_preirq();
 
+    if ( xen_detected )
+        hypervisor_print_info();
+
     if ( pvh_boot )
         pvh_print_info();
 
diff --git a/xen/include/asm-x86/guest/xen.h b/xen/include/asm-x86/guest/xen.h
index 7e04e4a7ab..27c854ab8a 100644
--- a/xen/include/asm-x86/guest/xen.h
+++ b/xen/include/asm-x86/guest/xen.h
@@ -24,20 +24,37 @@
 #include <asm/e820.h>
 #include <asm/fixmap.h>
 
-#define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))
+#ifdef CONFIG_XEN_DETECT
+
+extern bool xen_detected;
+
+void probe_hypervisor(void);
+void hypervisor_print_info(void);
+uint32_t hypervisor_cpuid_base(void);
+
+#else
+
+#define xen_detected 0
+
+static inline void probe_hypervisor(void) {}
+static inline void hypervisor_print_info(void) {
+    ASSERT_UNREACHABLE();
+}
+
+#endif /* CONFIG_XEN_DETECT */
 
 #ifdef CONFIG_XEN_GUEST
+#define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))
 
 extern bool xen_guest;
 extern bool pv_console;
 
-void probe_hypervisor(void);
 void hypervisor_setup(void);
 void hypervisor_ap_setup(void);
 int hypervisor_alloc_unused_page(mfn_t *mfn);
 int hypervisor_free_unused_page(mfn_t mfn);
-uint32_t hypervisor_cpuid_base(void);
 void hypervisor_resume(void);
+void xen_guest_enable(void);
 
 DECLARE_PER_CPU(unsigned int, vcpu_id);
 DECLARE_PER_CPU(struct vcpu_info *, vcpu_info);
@@ -47,8 +64,6 @@ DECLARE_PER_CPU(struct vcpu_info *, vcpu_info);
 #define xen_guest 0
 #define pv_console 0
 
-static inline void probe_hypervisor(void) {}
-
 static inline void hypervisor_setup(void)
 {
     ASSERT_UNREACHABLE();
@@ -57,6 +72,7 @@ static inline void hypervisor_ap_setup(void)
 {
     ASSERT_UNREACHABLE();
 }
+static inline void xen_guest_enable(void) {}
 
 #endif /* CONFIG_XEN_GUEST */
 #endif /* __X86_GUEST_XEN_H__ */
-- 
2.17.1


* [Xen-devel] [RFC 3/9] x86/nested: add nested_xen_version hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 2/9] x86: Introduce Xen detection as separate logic from Xen Guest support Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 4/9] XSM: Add hook for nested xen version op; revises non-nested version op Christopher Clark
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Roger Pau Monné

Provides proxying to the host hypervisor for XENVER_version and
XENVER_get_features ops.

The nested PV interface is only enabled when Xen is running neither as
the PV shim nor booted as PVH, since the initialization performed within
the hypervisor in those cases - i.e. as a Xen guest - claims resources
that are normally managed by the control domain.

This nested hypercall permits access only from the control domain.
The XSM policy hook implementation is deferred to a subsequent commit.
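
For illustration only, a dom0 kernel of the guest Xen might use this as
follows; the HYPERVISOR_nested_xen_version() wrapper name is hypothetical
(the linked xenblanket-linux branch provides its own plumbing):

    struct xen_feature_info fi = { .submap_idx = 0 };
    long ver;

    /* Version of the *host* hypervisor, proxied by the guest Xen. */
    ver = HYPERVISOR_nested_xen_version(XENVER_version, NULL);
    if ( ver > 0 )
        pr_info("host Xen version %lu.%lu\n", ver >> 16, ver & 0xffff);

    /* Feature submap 0 of the host hypervisor. */
    if ( HYPERVISOR_nested_xen_version(XENVER_get_features, &fi) == 0 )
        pr_info("host Xen features[0]: %#x\n", fi.submap);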

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 xen/arch/x86/Kconfig                  | 22 +++++++
 xen/arch/x86/guest/Makefile           |  5 +-
 xen/arch/x86/guest/hypercall_page.S   |  1 +
 xen/arch/x86/guest/xen-nested.c       | 82 +++++++++++++++++++++++++++
 xen/arch/x86/guest/xen.c              |  5 +-
 xen/arch/x86/hypercall.c              |  3 +
 xen/arch/x86/pv/hypercall.c           |  3 +
 xen/include/asm-x86/guest/hypercall.h |  7 ++-
 xen/include/asm-x86/guest/xen.h       | 10 ++++
 xen/include/public/xen.h              |  1 +
 xen/include/xen/hypercall.h           |  6 ++
 11 files changed, 142 insertions(+), 3 deletions(-)
 create mode 100644 xen/arch/x86/guest/xen-nested.c

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 31e5ffd2f2..e31e8d3434 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -207,6 +207,28 @@ config PV_SHIM_EXCLUSIVE
 	  option is only intended for use when building a dedicated PV Shim
 	  firmware, and will not function correctly in other scenarios.
 
+	  If unsure, say N.
+
+config XEN_NESTED
+	bool "Xen PV driver interface for nested Xen" if EXPERT = "y"
+	depends on XEN_DETECT
+	---help---
+	  Enables a second PV driver interface in the hypervisor to support running
+	  two sets of PV drivers within a single privileged guest (eg. guest dom0)
+	  of a system running Xen under Xen:
+
+	  1) host set: frontends to access devices provided by lower hypervisor
+	  2) guest set: backends to support existing PV drivers in nested guest VMs
+
+	  This interface supports the host set of drivers and performs proxying of a
+	  limited set of hypercall operations from the guest to the host hypervisor.
+
+	  This feature is for the guest hypervisor and is transparent to the
+	  host hypervisor. Guest VMs of the guest hypervisor use the standard
+	  PV driver interfaces and unmodified drivers.
+
+	  Feature is also known as "The Xen-Blanket", presented at Eurosys 2012.
+
 	  If unsure, say N.
 endmenu
 
diff --git a/xen/arch/x86/guest/Makefile b/xen/arch/x86/guest/Makefile
index d3a7844e61..6d8b0186d4 100644
--- a/xen/arch/x86/guest/Makefile
+++ b/xen/arch/x86/guest/Makefile
@@ -1,5 +1,8 @@
-obj-$(CONFIG_XEN_GUEST) += hypercall_page.o
+ifneq ($(filter y,$(CONFIG_XEN_GUEST) $(CONFIG_XEN_NESTED) $(CONFIG_PVH_GUEST)),)
+obj-y += hypercall_page.o
+endif
 obj-y += xen.o
 obj-$(CONFIG_XEN_GUEST) += xen-guest.o
+obj-$(CONFIG_XEN_NESTED) += xen-nested.o
 
 obj-bin-$(CONFIG_PVH_GUEST) += pvh-boot.init.o
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 6485e9150e..2b1e35803a 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -60,6 +60,7 @@ DECLARE_HYPERCALL(domctl)
 DECLARE_HYPERCALL(kexec_op)
 DECLARE_HYPERCALL(argo_op)
 DECLARE_HYPERCALL(xenpmu_op)
+DECLARE_HYPERCALL(nested_xen_version)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
new file mode 100644
index 0000000000..744592aa0c
--- /dev/null
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -0,0 +1,82 @@
+/*
+ * arch/x86/guest/xen-nested.c
+ *
+ * Hypercall implementations for nested PV drivers interface.
+ *
+ * Copyright (c) 2019 Star Lab Corp
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#include <xen/config.h>
+#include <xen/errno.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/lib.h>
+#include <xen/sched.h>
+
+#include <public/version.h>
+
+#include <asm/guest/hypercall.h>
+#include <asm/guest/xen.h>
+
+extern char hypercall_page[];
+
+/* xen_nested: support for nested PV interface enabled */
+static bool __read_mostly xen_nested;
+
+void xen_nested_enable(void)
+{
+    /* Fill the hypercall page. */
+    wrmsrl(cpuid_ebx(hypervisor_cpuid_base() + 2), __pa(hypercall_page));
+
+    xen_nested = true;
+}
+
+long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    /* FIXME: apply XSM check here */
+    if ( !is_control_domain(current->domain) )
+        return -EPERM;
+
+    gprintk(XENLOG_DEBUG, "Nested xen_version: %d.\n", cmd);
+
+    switch ( cmd )
+    {
+    case XENVER_version:
+        return xen_hypercall_xen_version(XENVER_version, 0);
+
+    case XENVER_get_features:
+    {
+        xen_feature_info_t fi;
+
+        if ( copy_from_guest(&fi, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_xen_version(XENVER_get_features, &fi);
+        if ( ret )
+            return ret;
+
+        if ( __copy_to_guest(arg, &fi, 1) )
+            return -EFAULT;
+
+        return 0;
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested xen_version op %d not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+}
diff --git a/xen/arch/x86/guest/xen.c b/xen/arch/x86/guest/xen.c
index b0b603a11a..78a5f40b22 100644
--- a/xen/arch/x86/guest/xen.c
+++ b/xen/arch/x86/guest/xen.c
@@ -74,7 +74,10 @@ void __init probe_hypervisor(void)
 
     xen_detected = true;
 
-    xen_guest_enable();
+    if ( pv_shim || pvh_boot )
+        xen_guest_enable();
+    else
+        xen_nested_enable();
 }
 
 void __init hypervisor_print_info(void)
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index d483dbaa6b..b22f0ca65a 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -72,6 +72,9 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_HVM
     ARGS(hvm_op, 2),
     ARGS(dm_op, 3),
+#endif
+#ifdef CONFIG_XEN_NESTED
+    ARGS(nested_xen_version, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 0c84c0b3a0..1e00d07273 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -83,6 +83,9 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_HVM
     HYPERCALL(hvm_op),
     COMPAT_CALL(dm_op),
+#endif
+#ifdef CONFIG_XEN_NESTED
+    HYPERCALL(nested_xen_version),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/asm-x86/guest/hypercall.h b/xen/include/asm-x86/guest/hypercall.h
index d548816b30..86e11dd1d1 100644
--- a/xen/include/asm-x86/guest/hypercall.h
+++ b/xen/include/asm-x86/guest/hypercall.h
@@ -19,7 +19,7 @@
 #ifndef __X86_XEN_HYPERCALL_H__
 #define __X86_XEN_HYPERCALL_H__
 
-#ifdef CONFIG_XEN_GUEST
+#if defined(CONFIG_XEN_GUEST) || defined (CONFIG_XEN_NESTED)
 
 #include <xen/types.h>
 
@@ -123,6 +123,11 @@ static inline long xen_hypercall_hvm_op(unsigned int op, void *arg)
     return _hypercall64_2(long, __HYPERVISOR_hvm_op, op, arg);
 }
 
+static inline long xen_hypercall_xen_version(unsigned int op, void *arg)
+{
+    return _hypercall64_2(long, __HYPERVISOR_xen_version, op, arg);
+}
+
 /*
  * Higher level hypercall helpers
  */
diff --git a/xen/include/asm-x86/guest/xen.h b/xen/include/asm-x86/guest/xen.h
index 27c854ab8a..802aee5edb 100644
--- a/xen/include/asm-x86/guest/xen.h
+++ b/xen/include/asm-x86/guest/xen.h
@@ -43,6 +43,16 @@ static inline void hypervisor_print_info(void) {
 
 #endif /* CONFIG_XEN_DETECT */
 
+#ifdef CONFIG_XEN_NESTED
+
+void xen_nested_enable(void);
+
+#else
+
+static inline void xen_nested_enable(void) {}
+
+#endif /* CONFIG_XEN_NESTED */
+
 #ifdef CONFIG_XEN_GUEST
 #define XEN_shared_info ((struct shared_info *)fix_to_virt(FIX_XEN_SHARED_INFO))
 
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index cb2917e74b..2f5ac5eedc 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -121,6 +121,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_argo_op              39
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_nested_xen_version   42
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index fc00a67448..15194002d6 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -150,6 +150,12 @@ do_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
 
+#ifdef CONFIG_XEN_NESTED
+extern long do_nested_xen_version(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
+#endif
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
2.17.1


* [Xen-devel] [RFC 4/9] XSM: Add hook for nested xen version op; revises non-nested version op
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (2 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 3/9] x86/nested: add nested_xen_version hypercall Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 5/9] x86/nested, xsm: add nested_memory_op hypercall Christopher Clark
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Wei Liu, Andrew Cooper, Ian Jackson, Rich Persaud,
	Jan Beulich, Daniel De Graaf, Roger Pau Monné

Expand XSM control to cover the full set of Xen version ops, allowing
granular control over which ops a domain is permitted to issue in the
nested case.

Also apply const to the domain arguments of xsm_default_action.
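
For illustration (not part of this patch), the granular checks let a
policy grant an unprivileged domain type only the basic version query
against the nested host hypervisor, e.g.:

    allow domU_t nestedxen_t:version { xen_version };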

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te           |  7 ++-
 tools/flask/policy/modules/guest_features.te |  5 +-
 tools/flask/policy/modules/xen.te            |  3 ++
 tools/flask/policy/policy/initial_sids       |  3 ++
 xen/arch/x86/guest/xen-nested.c              |  6 +--
 xen/include/xsm/dummy.h                      | 12 ++++-
 xen/include/xsm/xsm.h                        | 13 ++++++
 xen/xsm/dummy.c                              |  3 ++
 xen/xsm/flask/hooks.c                        | 49 ++++++++++++++------
 xen/xsm/flask/policy/access_vectors          |  6 +++
 xen/xsm/flask/policy/initial_sids            |  1 +
 11 files changed, 86 insertions(+), 22 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 9970f9dc08..9ed7ccb57b 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -22,9 +22,9 @@ allow dom0_t xen_t:xen2 {
 # Allow dom0 to use all XENVER_ subops that have checks.
 # Note that dom0 is part of domain_type so this has duplicates.
 allow dom0_t xen_t:version {
-	xen_extraversion xen_compile_info xen_capabilities
+	xen_version xen_extraversion xen_compile_info xen_capabilities
 	xen_changeset xen_pagesize xen_guest_handle xen_commandline
-	xen_build_id
+	xen_build_id xen_get_features xen_platform_parameters
 };
 
 allow dom0_t xen_t:mmu memorymap;
@@ -43,6 +43,9 @@ allow dom0_t dom0_t:domain2 {
 };
 allow dom0_t dom0_t:resource { add remove };
 
+# Allow dom0 to communicate with a nested Xen hypervisor
+allow dom0_t nestedxen_t:version { xen_version xen_get_features };
+
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
 # that does not have its own security server to make access decisions based on
diff --git a/tools/flask/policy/modules/guest_features.te b/tools/flask/policy/modules/guest_features.te
index 2797a22761..baade15f2e 100644
--- a/tools/flask/policy/modules/guest_features.te
+++ b/tools/flask/policy/modules/guest_features.te
@@ -21,8 +21,9 @@ if (guest_writeconsole) {
 
 # For normal guests, allow all queries except XENVER_commandline.
 allow domain_type xen_t:version {
-    xen_extraversion xen_compile_info xen_capabilities
-    xen_changeset xen_pagesize xen_guest_handle
+    xen_version xen_extraversion xen_compile_info xen_capabilities
+    xen_changeset xen_pagesize xen_guest_handle xen_get_features
+    xen_platform_parameters
 };
 
 # Version queries don't need auditing when denied.  They can be
diff --git a/tools/flask/policy/modules/xen.te b/tools/flask/policy/modules/xen.te
index 3dbf93d2b8..fbd82334fd 100644
--- a/tools/flask/policy/modules/xen.te
+++ b/tools/flask/policy/modules/xen.te
@@ -26,6 +26,9 @@ attribute mls_priv;
 # The hypervisor itself
 type xen_t, xen_type, mls_priv;
 
+# A nested Xen hypervisor, if any
+type nestedxen_t, xen_type;
+
 # Domain 0
 declare_singleton_domain(dom0_t, mls_priv);
 
diff --git a/tools/flask/policy/policy/initial_sids b/tools/flask/policy/policy/initial_sids
index 6b7b7eff21..50b648df3b 100644
--- a/tools/flask/policy/policy/initial_sids
+++ b/tools/flask/policy/policy/initial_sids
@@ -16,3 +16,6 @@ sid device gen_context(system_u:object_r:device_t,s0)
 # Initial SIDs used by the toolstack for domains without defined labels
 sid domU gen_context(system_u:system_r:domU_t,s0)
 sid domDM gen_context(system_u:system_r:dm_dom_t,s0)
+
+# Initial SID for nested Xen on Xen
+sid nestedxen gen_context(system_u:system_r:nestedxen_t,s0)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index 744592aa0c..fcfa5e1087 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -47,9 +47,9 @@ long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     if ( !xen_nested )
         return -ENOSYS;
 
-    /* FIXME: apply XSM check here */
-    if ( !is_control_domain(current->domain) )
-        return -EPERM;
+    ret = xsm_nested_xen_version(XSM_PRIV, current->domain, cmd);
+    if ( ret )
+        return ret;
 
     gprintk(XENLOG_DEBUG, "Nested xen_version: %d.\n", cmd);
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 01d2814fed..8011bf2cb4 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -69,7 +69,7 @@ void __xsm_action_mismatch_detected(void);
 #endif /* CONFIG_XSM */
 
 static always_inline int xsm_default_action(
-    xsm_default_t action, struct domain *src, struct domain *target)
+    xsm_default_t action, const struct domain *src, const struct domain *target)
 {
     switch ( action ) {
     case XSM_HOOK:
@@ -739,6 +739,16 @@ static XSM_INLINE int xsm_argo_send(const struct domain *d,
 
 #endif /* CONFIG_ARGO */
 
+#ifdef CONFIG_XEN_NESTED
+static XSM_INLINE int xsm_nested_xen_version(XSM_DEFAULT_ARG
+                                             const struct domain *d,
+                                             unsigned int cmd)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
+#endif
+
 #include <public/version.h>
 static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b6141f6ab1..96044cb55a 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -187,6 +187,9 @@ struct xsm_operations {
     int (*argo_register_any_source) (const struct domain *d);
     int (*argo_send) (const struct domain *d, const struct domain *t);
 #endif
+#ifdef CONFIG_XEN_NESTED
+    int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
+#endif
 };
 
 #ifdef CONFIG_XSM
@@ -723,6 +726,16 @@ static inline int xsm_argo_send(const struct domain *d, const struct domain *t)
 
 #endif /* CONFIG_ARGO */
 
+#ifdef CONFIG_XEN_NESTED
+static inline int xsm_nested_xen_version(xsm_default_t def,
+                                         const struct domain *d,
+                                         unsigned int cmd)
+{
+    return xsm_ops->nested_xen_version(d, cmd);
+}
+
+#endif /* CONFIG_XEN_NESTED */
+
 #endif /* XSM_NO_WRAPPERS */
 
 #ifdef CONFIG_MULTIBOOT
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index c9a566f2b5..ed0a4b0691 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -157,4 +157,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, argo_register_any_source);
     set_to_dummy_if_null(ops, argo_send);
 #endif
+#ifdef CONFIG_XEN_NESTED
+    set_to_dummy_if_null(ops, nested_xen_version);
+#endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a7d690ac3c..2835279fe7 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1666,46 +1666,56 @@ static int flask_dm_op(struct domain *d)
 
 #endif /* CONFIG_X86 */
 
-static int flask_xen_version (uint32_t op)
+static int domain_has_xen_version (const struct domain *d, u32 tsid,
+                                   uint32_t op)
 {
-    u32 dsid = domain_sid(current->domain);
+    u32 dsid = domain_sid(d);
 
     switch ( op )
     {
     case XENVER_version:
-    case XENVER_platform_parameters:
-    case XENVER_get_features:
-        /* These sub-ops ignore the permission checks and return data. */
-        return 0;
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
+                            VERSION__XEN_VERSION, NULL);
     case XENVER_extraversion:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_EXTRAVERSION, NULL);
     case XENVER_compile_info:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_COMPILE_INFO, NULL);
     case XENVER_capabilities:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_CAPABILITIES, NULL);
     case XENVER_changeset:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_CHANGESET, NULL);
+    case XENVER_platform_parameters:
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
+                            VERSION__XEN_PLATFORM_PARAMETERS, NULL);
+    case XENVER_get_features:
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
+                            VERSION__XEN_GET_FEATURES, NULL);
     case XENVER_pagesize:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_PAGESIZE, NULL);
     case XENVER_guest_handle:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_GUEST_HANDLE, NULL);
     case XENVER_commandline:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_COMMANDLINE, NULL);
     case XENVER_build_id:
-        return avc_has_perm(dsid, SECINITSID_XEN, SECCLASS_VERSION,
+        return avc_has_perm(dsid, tsid, SECCLASS_VERSION,
                             VERSION__XEN_BUILD_ID, NULL);
     default:
         return -EPERM;
     }
 }
 
+static int flask_xen_version (uint32_t op)
+{
+    return domain_has_xen_version(current->domain, SECINITSID_XEN, op);
+}
+
 static int flask_domain_resource_map(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__RESOURCE_MAP);
@@ -1738,6 +1748,14 @@ static int flask_argo_send(const struct domain *d, const struct domain *t)
 
 #endif
 
+#ifdef CONFIG_XEN_NESTED
+static int flask_nested_xen_version(const struct domain *d, unsigned int op)
+{
+    return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op);
+}
+
+#endif
+
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
@@ -1877,6 +1895,9 @@ static struct xsm_operations flask_ops = {
     .argo_register_any_source = flask_argo_register_any_source,
     .argo_send = flask_argo_send,
 #endif
+#ifdef CONFIG_XEN_NESTED
+    .nested_xen_version = flask_nested_xen_version,
+#endif
 };
 
 void __init flask_init(const void *policy_buffer, size_t policy_size)
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 194d743a71..7e0d5aa7bf 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -510,6 +510,8 @@ class security
 #
 class version
 {
+# Basic information
+    xen_version
 # Extra informations (-unstable).
     xen_extraversion
 # Compile information of the hypervisor.
@@ -518,6 +520,10 @@ class version
     xen_capabilities
 # Source code changeset.
     xen_changeset
+# Hypervisor virt start
+    xen_platform_parameters
+# Query for bitmap of platform features
+    xen_get_features
 # Page size the hypervisor uses.
     xen_pagesize
 # An value that the control stack can choose.
diff --git a/xen/xsm/flask/policy/initial_sids b/xen/xsm/flask/policy/initial_sids
index 7eca70d339..c684cda873 100644
--- a/xen/xsm/flask/policy/initial_sids
+++ b/xen/xsm/flask/policy/initial_sids
@@ -15,4 +15,5 @@ sid irq
 sid device
 sid domU
 sid domDM
+sid nestedxen
 # FLASK
-- 
2.17.1


* [Xen-devel] [RFC 5/9] x86/nested, xsm: add nested_memory_op hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (3 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 4/9] XSM: Add hook for nested xen version op; revises non-nested version op Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 6/9] x86/nested, xsm: add nested_hvm_op hypercall Christopher Clark
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for the XENMEM_add_to_physmap op,
restricted to the XENMAPSPACE_shared_info and XENMAPSPACE_grant_table
spaces and to DOMID_SELF.

Both compat and native entry points are provided.
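
For illustration only, a dom0 kernel of the guest Xen could use this to
map the host hypervisor's shared_info page; the
HYPERVISOR_nested_memory_op() wrapper name is hypothetical, and free_gpfn
stands for a caller-chosen free guest frame:

    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,                     /* shared_info is a single frame */
        .gpfn  = free_gpfn,
    };
    long rc = HYPERVISOR_nested_memory_op(XENMEM_add_to_physmap, &xatp);

    if ( rc )
        pr_err("nested add_to_physmap failed: %ld\n", rc);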

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 80 +++++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         | 10 ++++
 xen/include/xsm/dummy.h             |  7 +++
 xen/include/xsm/xsm.h               |  7 +++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               | 15 ++++++
 11 files changed, 125 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 9ed7ccb57b..1f564ff83b 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -45,6 +45,7 @@ allow dom0_t dom0_t:resource { add remove };
 
 # Allow dom0 to communicate with a nested Xen hypervisor
 allow dom0_t nestedxen_t:version { xen_version xen_get_features };
+allow dom0_t nestedxen_t:mmu physmap;
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 2b1e35803a..1a8dd0ea4f 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -61,6 +61,7 @@ DECLARE_HYPERCALL(kexec_op)
 DECLARE_HYPERCALL(argo_op)
 DECLARE_HYPERCALL(xenpmu_op)
 DECLARE_HYPERCALL(nested_xen_version)
+DECLARE_HYPERCALL(nested_memory_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index fcfa5e1087..a76983cc2d 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,11 +22,17 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <public/memory.h>
 #include <public/version.h>
+#include <public/xen.h>
 
 #include <asm/guest/hypercall.h>
 #include <asm/guest/xen.h>
 
+#ifdef CONFIG_COMPAT
+#include <compat/memory.h>
+#endif
+
 extern char hypercall_page[];
 
 /* xen_nested: support for nested PV interface enabled */
@@ -80,3 +86,77 @@ long do_nested_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return -EOPNOTSUPP;
     }
 }
+
+static long nested_add_to_physmap(struct xen_add_to_physmap xatp)
+{
+    struct domain *d;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    if ( (xatp.space != XENMAPSPACE_shared_info) &&
+         (xatp.space != XENMAPSPACE_grant_table) )
+    {
+        gprintk(XENLOG_ERR, "Nested memory op: unknown xatp.space: %u\n",
+                xatp.space);
+        return -EINVAL;
+    }
+
+    if ( xatp.domid != DOMID_SELF )
+        return -EPERM;
+
+    ret = xsm_nested_add_to_physmap(XSM_PRIV, current->domain);
+    if ( ret )
+        return ret;
+
+    gprintk(XENLOG_DEBUG, "Nested XENMEM_add_to_physmap: %d\n", xatp.space);
+
+    d = rcu_lock_current_domain();
+
+    ret = xen_hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
+
+    rcu_unlock_domain(d);
+
+    if ( ret )
+        gprintk(XENLOG_ERR, "Nested memory op failed add_to_physmap"
+                            " for %d with %ld\n", xatp.space, ret);
+    return ret;
+}
+
+long do_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_add_to_physmap xatp;
+
+    if ( cmd != XENMEM_add_to_physmap )
+    {
+        gprintk(XENLOG_ERR, "Nested memory op %u not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    if ( copy_from_guest(&xatp, arg, 1) )
+        return -EFAULT;
+
+    return nested_add_to_physmap(xatp);
+}
+
+#ifdef CONFIG_COMPAT
+int compat_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct compat_add_to_physmap cmp;
+    struct xen_add_to_physmap *nat = COMPAT_ARG_XLAT_VIRT_BASE;
+
+    if ( cmd != XENMEM_add_to_physmap )
+    {
+        gprintk(XENLOG_ERR, "Nested memory op %u not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    if ( copy_from_guest(&cmp, arg, 1) )
+        return -EFAULT;
+
+    XLAT_add_to_physmap(nat, &cmp);
+
+    return nested_add_to_physmap(*nat);
+}
+#endif
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index b22f0ca65a..2aa8dc5ac6 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -75,6 +75,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #endif
 #ifdef CONFIG_XEN_NESTED
     ARGS(nested_xen_version, 2),
+    COMP(nested_memory_op, 2, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 1e00d07273..96198d3313 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -86,6 +86,7 @@ const hypercall_table_t pv_hypercall_table[] = {
 #endif
 #ifdef CONFIG_XEN_NESTED
     HYPERCALL(nested_xen_version),
+    COMPAT_CALL(nested_memory_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 2f5ac5eedc..e081f52fc4 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -122,6 +122,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
 #define __HYPERVISOR_nested_xen_version   42
+#define __HYPERVISOR_nested_memory_op     43
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 15194002d6..d373bd1763 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -154,6 +154,10 @@ do_dm_op(
 extern long do_nested_xen_version(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_memory_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
@@ -222,6 +226,12 @@ compat_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(void) bufs);
 
+#ifdef CONFIG_XEN_NESTED
+extern int compat_nested_memory_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
+#endif
+
 #endif
 
 void arch_get_xen_caps(xen_capabilities_info_t *info);
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 8011bf2cb4..17375f6b9f 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -747,6 +747,13 @@ static XSM_INLINE int xsm_nested_xen_version(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_add_to_physmap(XSM_DEFAULT_ARG
+                                                const struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 96044cb55a..920d2d9088 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -189,6 +189,7 @@ struct xsm_operations {
 #endif
 #ifdef CONFIG_XEN_NESTED
     int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
+    int (*nested_add_to_physmap) (const struct domain *d);
 #endif
 };
 
@@ -734,6 +735,12 @@ static inline int xsm_nested_xen_version(xsm_default_t def,
     return xsm_ops->nested_xen_version(d, cmd);
 }
 
+static inline int xsm_nested_add_to_physmap(xsm_default_t def,
+                                            const struct domain *d)
+{
+    return xsm_ops->nested_add_to_physmap(d);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index ed0a4b0691..5ce29bcfe5 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -159,5 +159,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #endif
 #ifdef CONFIG_XEN_NESTED
     set_to_dummy_if_null(ops, nested_xen_version);
+    set_to_dummy_if_null(ops, nested_add_to_physmap);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 2835279fe7..17a81b85f9 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1749,6 +1749,20 @@ static int flask_argo_send(const struct domain *d, const struct domain *t)
 #endif
 
 #ifdef CONFIG_XEN_NESTED
+static int domain_has_nested_perm(const struct domain *d, u16 class, u32 perm)
+{
+    struct avc_audit_data ad;
+
+    AVC_AUDIT_DATA_INIT(&ad, NONE);
+
+    return avc_has_perm(domain_sid(d), SECINITSID_NESTEDXEN, class, perm, &ad);
+}
+
+static int flask_nested_add_to_physmap(const struct domain *d)
+{
+    return domain_has_nested_perm(d, SECCLASS_MMU, MMU__PHYSMAP);
+}
+
 static int flask_nested_xen_version(const struct domain *d, unsigned int op)
 {
     return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op);
@@ -1897,6 +1911,7 @@ static struct xsm_operations flask_ops = {
 #endif
 #ifdef CONFIG_XEN_NESTED
     .nested_xen_version = flask_nested_xen_version,
+    .nested_add_to_physmap = flask_nested_add_to_physmap,
 #endif
 };
 
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [Xen-devel] [RFC 6/9] x86/nested, xsm: add nested_hvm_op hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (4 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 5/9] x86/nested, xsm: add nested_memory_op hypercall Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 7/9] x86/nested, xsm: add nested_grant_table_op hypercall Christopher Clark
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for the HVMOP_get_param and
HVMOP_set_param ops.
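
A minimal guest-side sketch (the _hypercall2() wrapper is a hypothetical
stand-in for the caller's hypercall mechanism; struct xen_hvm_param and the
param indexes come from the existing public HVM interface):

    /* Hypothetical sketch: read a host HVM param through the proxy. */
    static int nested_get_hvm_param(uint32_t index, uint64_t *value)
    {
        struct xen_hvm_param a = {
            .domid = DOMID_SELF,
            .index = index,          /* e.g. HVM_PARAM_STORE_EVTCHN */
        };
        int rc = _hypercall2(int, nested_hvm_op, HVMOP_get_param, &a);

        if ( rc == 0 )
            *value = a.value;
        return rc;
    }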

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 42 +++++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  4 +++
 xen/include/xsm/dummy.h             |  7 +++++
 xen/include/xsm/xsm.h               |  7 +++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               | 22 +++++++++++++++
 11 files changed, 88 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 1f564ff83b..7d0f29f082 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -46,6 +46,7 @@ allow dom0_t dom0_t:resource { add remove };
 # Allow dom0 to communicate with a nested Xen hypervisor
 allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
+allow dom0_t nestedxen_t:hvm { setparam getparam };
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 1a8dd0ea4f..adbb82f4ec 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -62,6 +62,7 @@ DECLARE_HYPERCALL(argo_op)
 DECLARE_HYPERCALL(xenpmu_op)
 DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
+DECLARE_HYPERCALL(nested_hvm_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index a76983cc2d..82bd6885e6 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <public/hvm/hvm_op.h>
 #include <public/memory.h>
 #include <public/version.h>
 #include <public/xen.h>
@@ -160,3 +161,44 @@ int compat_nested_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return nested_add_to_physmap(*nat);
 }
 #endif
+
+long do_nested_hvm_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct xen_hvm_param a;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    ret = xsm_nested_hvm_op(XSM_PRIV, current->domain, cmd);
+    if ( ret )
+        return ret;
+
+    switch ( cmd )
+    {
+    case HVMOP_set_param:
+    {
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_hvm_op(cmd, &a);
+    }
+
+    case HVMOP_get_param:
+    {
+        if ( copy_from_guest(&a, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_hvm_op(cmd, &a);
+
+        if ( !ret && __copy_to_guest(arg, &a, 1) )
+            return -EFAULT;
+
+        return ret;
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested hvm op %d not implemented.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 2aa8dc5ac6..268cc9450a 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -76,6 +76,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_XEN_NESTED
     ARGS(nested_xen_version, 2),
     COMP(nested_memory_op, 2, 2),
+    ARGS(nested_hvm_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 96198d3313..e88ecce222 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -87,6 +87,7 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_XEN_NESTED
     HYPERCALL(nested_xen_version),
     COMPAT_CALL(nested_memory_op),
+    HYPERCALL(nested_hvm_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index e081f52fc4..1731409eb8 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -123,6 +123,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_dm_op                41
 #define __HYPERVISOR_nested_xen_version   42
 #define __HYPERVISOR_nested_memory_op     43
+#define __HYPERVISOR_nested_hvm_op        44
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index d373bd1763..b09070539e 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -158,6 +158,10 @@ extern long do_nested_xen_version(
 extern long do_nested_memory_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_hvm_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 17375f6b9f..238b425c49 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -754,6 +754,13 @@ static XSM_INLINE int xsm_nested_add_to_physmap(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_hvm_op(XSM_DEFAULT_ARG const struct domain *d,
+                                        unsigned int cmd)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 920d2d9088..cc02bf18c7 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -190,6 +190,7 @@ struct xsm_operations {
 #ifdef CONFIG_XEN_NESTED
     int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
     int (*nested_add_to_physmap) (const struct domain *d);
+    int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
 #endif
 };
 
@@ -741,6 +742,12 @@ static inline int xsm_nested_add_to_physmap(xsm_default_t def,
     return xsm_ops->nested_add_to_physmap(d);
 }
 
+static inline int xsm_nested_hvm_op(xsm_default_t def, const struct domain *d,
+                                    unsigned int cmd)
+{
+    return xsm_ops->nested_hvm_op(d, cmd);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5ce29bcfe5..909d41a81b 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -160,5 +160,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_XEN_NESTED
     set_to_dummy_if_null(ops, nested_xen_version);
     set_to_dummy_if_null(ops, nested_add_to_physmap);
+    set_to_dummy_if_null(ops, nested_hvm_op);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 17a81b85f9..f8d247e28f 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1768,6 +1768,27 @@ static int flask_nested_xen_version(const struct domain *d, unsigned int op)
     return domain_has_xen_version(d, SECINITSID_NESTEDXEN, op);
 }
 
+static int flask_nested_hvm_op(const struct domain *d, unsigned int op)
+{
+    u32 perm;
+
+    switch ( op )
+    {
+    case HVMOP_set_param:
+        perm = HVM__SETPARAM;
+        break;
+
+    case HVMOP_get_param:
+        perm = HVM__GETPARAM;
+        break;
+
+    default:
+        perm = HVM__HVMCTL;
+    }
+
+    return domain_has_nested_perm(d, SECCLASS_HVM, perm);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1912,6 +1933,7 @@ static struct xsm_operations flask_ops = {
 #ifdef CONFIG_XEN_NESTED
     .nested_xen_version = flask_nested_xen_version,
     .nested_add_to_physmap = flask_nested_add_to_physmap,
+    .nested_hvm_op = flask_nested_hvm_op,
 #endif
 };
 
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [Xen-devel] [RFC 7/9] x86/nested, xsm: add nested_grant_table_op hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (5 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 6/9] x86/nested, xsm: add nested_hvm_op hypercall Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 8/9] x86/nested, xsm: add nested_event_channel_op hypercall Christopher Clark
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for the GNTTABOP_query_size op.
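
A minimal guest-side sketch (the _hypercall3() wrapper is a hypothetical
stand-in for the caller's hypercall mechanism; the struct and op come from
the existing public grant table interface):

    /* Hypothetical sketch: query the host's grant table size via the proxy. */
    static int nested_query_gnttab_size(uint32_t *nr_frames, uint32_t *max_frames)
    {
        struct gnttab_query_size op = {
            .dom = DOMID_SELF,       /* the proxy rejects any other domid */
        };
        int rc = _hypercall3(int, nested_grant_table_op,
                             GNTTABOP_query_size, &op, 1);

        if ( rc == 0 && op.status == GNTST_okay )
        {
            *nr_frames  = op.nr_frames;
            *max_frames = op.max_nr_frames;
        }
        return rc;
    }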

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 37 +++++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  5 ++++
 xen/include/xsm/dummy.h             |  7 ++++++
 xen/include/xsm/xsm.h               |  7 ++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 +++++
 11 files changed, 68 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 7d0f29f082..03c93a3093 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -47,6 +47,7 @@ allow dom0_t dom0_t:resource { add remove };
 allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
 allow dom0_t nestedxen_t:hvm { setparam getparam };
+allow dom0_t nestedxen_t:grant query;
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index adbb82f4ec..33403714ce 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -63,6 +63,7 @@ DECLARE_HYPERCALL(xenpmu_op)
 DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
+DECLARE_HYPERCALL(nested_grant_table_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index 82bd6885e6..a4049e366f 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <public/grant_table.h>
 #include <public/hvm/hvm_op.h>
 #include <public/memory.h>
 #include <public/version.h>
@@ -202,3 +203,39 @@ long do_nested_hvm_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         return -EOPNOTSUPP;
     }
 }
+
+long do_nested_grant_table_op(unsigned int cmd,
+                              XEN_GUEST_HANDLE_PARAM(void) uop,
+                              unsigned int count)
+{
+    struct gnttab_query_size op;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    if ( cmd != GNTTABOP_query_size )
+    {
+        gprintk(XENLOG_ERR, "Nested grant table op %u not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    if ( count != 1 )
+        return -EINVAL;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    if ( op.dom != DOMID_SELF )
+        return -EPERM;
+
+    ret = xsm_nested_grant_query_size(XSM_PRIV, current->domain);
+    if ( ret )
+        return ret;
+
+    ret = xen_hypercall_grant_table_op(cmd, &op, 1);
+    if ( !ret && __copy_to_guest(uop, &op, 1) )
+        return -EFAULT;
+
+    return ret;
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 268cc9450a..1b9f4c6050 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -77,6 +77,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(nested_xen_version, 2),
     COMP(nested_memory_op, 2, 2),
     ARGS(nested_hvm_op, 2),
+    ARGS(nested_grant_table_op, 3),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index e88ecce222..efa1bd0830 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -88,6 +88,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(nested_xen_version),
     COMPAT_CALL(nested_memory_op),
     HYPERCALL(nested_hvm_op),
+    HYPERCALL(nested_grant_table_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 1731409eb8..000b7fc9d0 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -124,6 +124,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_xen_version   42
 #define __HYPERVISOR_nested_memory_op     43
 #define __HYPERVISOR_nested_hvm_op        44
+#define __HYPERVISOR_nested_grant_table_op 45
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index b09070539e..102b20fd5f 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -162,6 +162,11 @@ extern long do_nested_memory_op(
 extern long do_nested_hvm_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_grant_table_op(
+    unsigned int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) uop,
+    unsigned int count);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 238b425c49..f5871ef05a 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -761,6 +761,13 @@ static XSM_INLINE int xsm_nested_hvm_op(XSM_DEFAULT_ARG const struct domain *d,
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_grant_query_size(XSM_DEFAULT_ARG
+                                                  const struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index cc02bf18c7..e12001c401 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -191,6 +191,7 @@ struct xsm_operations {
     int (*nested_xen_version) (const struct domain *d, unsigned int cmd);
     int (*nested_add_to_physmap) (const struct domain *d);
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
+    int (*nested_grant_query_size) (const struct domain *d);
 #endif
 };
 
@@ -748,6 +749,12 @@ static inline int xsm_nested_hvm_op(xsm_default_t def, const struct domain *d,
     return xsm_ops->nested_hvm_op(d, cmd);
 }
 
+static inline int xsm_nested_grant_query_size(xsm_default_t def,
+                                              const struct domain *d)
+{
+    return xsm_ops->nested_grant_query_size(d);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 909d41a81b..8c213c258f 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -161,5 +161,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_xen_version);
     set_to_dummy_if_null(ops, nested_add_to_physmap);
     set_to_dummy_if_null(ops, nested_hvm_op);
+    set_to_dummy_if_null(ops, nested_grant_query_size);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f8d247e28f..2988df2cd1 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1789,6 +1789,11 @@ static int flask_nested_hvm_op(const struct domain *d, unsigned int op)
     return domain_has_nested_perm(d, SECCLASS_HVM, perm);
 }
 
+static int flask_nested_grant_query_size(const struct domain *d)
+{
+    return domain_has_nested_perm(d, SECCLASS_GRANT, GRANT__QUERY);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1934,6 +1939,7 @@ static struct xsm_operations flask_ops = {
     .nested_xen_version = flask_nested_xen_version,
     .nested_add_to_physmap = flask_nested_add_to_physmap,
     .nested_hvm_op = flask_nested_hvm_op,
+    .nested_grant_query_size = flask_nested_grant_query_size,
 #endif
 };
 
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [Xen-devel] [RFC 8/9] x86/nested, xsm: add nested_event_channel_op hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (6 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 7/9] x86/nested, xsm: add nested_grant_table_op hypercall Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  0:30 ` [Xen-devel] [RFC 9/9] x86/nested, xsm: add nested_schedop_shutdown hypercall Christopher Clark
  2019-06-20  4:18 ` [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Juergen Gross
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for these event channel ops:
 * EVTCHNOP_alloc_unbound
 * EVTCHNOP_bind_vcpu
 * EVTCHNOP_close
 * EVTCHNOP_send
 * EVTCHNOP_unmask

Introduces a new XSM access vector class, nested_event, for policy control
applied to this operation.
This is required because the existing 'event' access vector is unsuitable
for repurposing to the nested case: it operates on per-channel security
identifiers generated from a combination of the security identifiers of the
two communicating endpoints, and that information is not available for the
remote endpoint in the nested case.
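
A minimal guest-side sketch for the alloc_unbound case (the _hypercall2()
wrapper and the choice of remote domain are hypothetical; the struct and op
come from the existing public event channel interface):

    /* Hypothetical sketch: allocate an unbound channel on the host Xen. */
    static int nested_evtchn_alloc_unbound(domid_t remote, evtchn_port_t *port)
    {
        struct evtchn_alloc_unbound alloc = {
            .dom        = DOMID_SELF,
            .remote_dom = remote,    /* e.g. the host's backend domain */
        };
        int rc = _hypercall2(int, nested_event_channel_op,
                             EVTCHNOP_alloc_unbound, &alloc);

        if ( rc == 0 )
            *port = alloc.port;
        return rc;
    }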

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te    |  3 +
 xen/arch/x86/guest/hypercall_page.S   |  1 +
 xen/arch/x86/guest/xen-nested.c       | 84 +++++++++++++++++++++++++++
 xen/arch/x86/hypercall.c              |  1 +
 xen/arch/x86/pv/hypercall.c           |  1 +
 xen/include/public/xen.h              |  1 +
 xen/include/xen/hypercall.h           |  4 ++
 xen/include/xsm/dummy.h               |  8 +++
 xen/include/xsm/xsm.h                 |  8 +++
 xen/xsm/dummy.c                       |  1 +
 xen/xsm/flask/hooks.c                 | 35 +++++++++++
 xen/xsm/flask/policy/access_vectors   | 20 +++++++
 xen/xsm/flask/policy/security_classes |  1 +
 13 files changed, 168 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 03c93a3093..ba3c5ad63d 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -48,6 +48,9 @@ allow dom0_t nestedxen_t:version { xen_version xen_get_features };
 allow dom0_t nestedxen_t:mmu physmap;
 allow dom0_t nestedxen_t:hvm { setparam getparam };
 allow dom0_t nestedxen_t:grant query;
+allow dom0_t nestedxen_t:nested_event {
+    alloc_unbound bind_vcpu close send unmask
+};
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 33403714ce..64f1885629 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -64,6 +64,7 @@ DECLARE_HYPERCALL(nested_xen_version)
 DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
 DECLARE_HYPERCALL(nested_grant_table_op)
+DECLARE_HYPERCALL(nested_event_channel_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index a4049e366f..babf4bf783 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -22,6 +22,7 @@
 #include <xen/lib.h>
 #include <xen/sched.h>
 
+#include <public/event_channel.h>
 #include <public/grant_table.h>
 #include <public/hvm/hvm_op.h>
 #include <public/memory.h>
@@ -239,3 +240,86 @@ long do_nested_grant_table_op(unsigned int cmd,
 
     return ret;
 }
+
+long do_nested_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    ret = xsm_nested_event_channel_op(XSM_PRIV, current->domain, cmd);
+    if ( ret )
+        return ret;
+
+    switch ( cmd )
+    {
+    case EVTCHNOP_alloc_unbound:
+    {
+        struct evtchn_alloc_unbound alloc_unbound;
+
+        if ( copy_from_guest(&alloc_unbound, arg, 1) )
+            return -EFAULT;
+
+        ret = xen_hypercall_event_channel_op(cmd, &alloc_unbound);
+        if ( !ret && __copy_to_guest(arg, &alloc_unbound, 1) )
+        {
+            struct evtchn_close close;
+
+            ret = -EFAULT;
+            close.port = alloc_unbound.port;
+
+            if ( xen_hypercall_event_channel_op(EVTCHNOP_close, &close) )
+                gprintk(XENLOG_ERR, "Nested event alloc_unbound failed to close"
+                                    " port %u on EFAULT\n", alloc_unbound.port);
+        }
+        break;
+    }
+
+    case EVTCHNOP_bind_vcpu:
+    {
+        struct evtchn_bind_vcpu bind_vcpu;
+
+        if ( copy_from_guest(&bind_vcpu, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &bind_vcpu);
+    }
+
+    case EVTCHNOP_close:
+    {
+        struct evtchn_close close;
+
+        if ( copy_from_guest(&close, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &close);
+    }
+
+    case EVTCHNOP_send:
+    {
+        struct evtchn_send send;
+
+        if ( copy_from_guest(&send, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &send);
+    }
+
+    case EVTCHNOP_unmask:
+    {
+        struct evtchn_unmask unmask;
+
+        if ( copy_from_guest(&unmask, arg, 1) )
+            return -EFAULT;
+
+        return xen_hypercall_event_channel_op(cmd, &unmask);
+    }
+
+    default:
+        gprintk(XENLOG_ERR, "Nested: event hypercall %d not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 1b9f4c6050..752955ac81 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -78,6 +78,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     COMP(nested_memory_op, 2, 2),
     ARGS(nested_hvm_op, 2),
     ARGS(nested_grant_table_op, 3),
+    ARGS(nested_event_channel_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index efa1bd0830..6b1ae74d64 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -89,6 +89,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     COMPAT_CALL(nested_memory_op),
     HYPERCALL(nested_hvm_op),
     HYPERCALL(nested_grant_table_op),
+    HYPERCALL(nested_event_channel_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 000b7fc9d0..5fb322e882 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -125,6 +125,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_memory_op     43
 #define __HYPERVISOR_nested_hvm_op        44
 #define __HYPERVISOR_nested_grant_table_op 45
+#define __HYPERVISOR_nested_event_channel_op 46
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 102b20fd5f..bd739c2dc7 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -167,6 +167,10 @@ extern long do_nested_grant_table_op(
     unsigned int cmd,
     XEN_GUEST_HANDLE_PARAM(void) uop,
     unsigned int count);
+
+extern long do_nested_event_channel_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f5871ef05a..f8162f3308 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -768,6 +768,14 @@ static XSM_INLINE int xsm_nested_grant_query_size(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_event_channel_op(XSM_DEFAULT_ARG
+                                                  const struct domain *d,
+                                                  unsigned int cmd)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e12001c401..81cb67b89b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -192,6 +192,7 @@ struct xsm_operations {
     int (*nested_add_to_physmap) (const struct domain *d);
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
     int (*nested_grant_query_size) (const struct domain *d);
+    int (*nested_event_channel_op) (const struct domain *d, unsigned int cmd);
 #endif
 };
 
@@ -755,6 +756,13 @@ static inline int xsm_nested_grant_query_size(xsm_default_t def,
     return xsm_ops->nested_grant_query_size(d);
 }
 
+static inline int xsm_nested_event_channel_op(xsm_default_t def,
+                                              const struct domain *d,
+                                              unsigned int cmd)
+{
+    return xsm_ops->nested_event_channel_op(d, cmd);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 8c213c258f..91db264ddc 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -162,5 +162,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_add_to_physmap);
     set_to_dummy_if_null(ops, nested_hvm_op);
     set_to_dummy_if_null(ops, nested_grant_query_size);
+    set_to_dummy_if_null(ops, nested_event_channel_op);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 2988df2cd1..27bfa01559 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1794,6 +1794,40 @@ static int flask_nested_grant_query_size(const struct domain *d)
     return domain_has_nested_perm(d, SECCLASS_GRANT, GRANT__QUERY);
 }
 
+static int flask_nested_event_channel_op(const struct domain *d,
+                                         unsigned int op)
+{
+    u32 perm;
+
+    switch ( op )
+    {
+    case EVTCHNOP_alloc_unbound:
+        perm = NESTED_EVENT__ALLOC_UNBOUND;
+        break;
+
+    case EVTCHNOP_bind_vcpu:
+        perm = NESTED_EVENT__BIND_VCPU;
+        break;
+
+    case EVTCHNOP_close:
+        perm = NESTED_EVENT__CLOSE;
+        break;
+
+    case EVTCHNOP_send:
+        perm = NESTED_EVENT__SEND;
+        break;
+
+    case EVTCHNOP_unmask:
+        perm = NESTED_EVENT__UNMASK;
+        break;
+
+    default:
+        return avc_unknown_permission("nested event channel op", op);
+    }
+
+    return domain_has_nested_perm(d, SECCLASS_NESTED_EVENT, perm);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1940,6 +1974,7 @@ static struct xsm_operations flask_ops = {
     .nested_add_to_physmap = flask_nested_add_to_physmap,
     .nested_hvm_op = flask_nested_hvm_op,
     .nested_grant_query_size = flask_nested_grant_query_size,
+    .nested_event_channel_op = flask_nested_event_channel_op,
 #endif
 };
 
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 7e0d5aa7bf..87caa36391 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -316,6 +316,26 @@ class event
     reset
 }
 
+# Class nested_event describes event channels to the host hypervisor
+# in a nested Xen-on-Xen system. Policy controls for these differ
+# from those for interdomain event channels between guest VMs:
+# the guest hypervisor does not maintain security identifier information about
+# the remote event endpoint managed by the host hypervisor, so nested_event
+# channels do not have their own security label derived from a type transition.
+class nested_event
+{
+    # nested_event_channel_op: EVTCHNOP_alloc_unbound
+    alloc_unbound
+    # nested_event_channel_op: EVTCHNOP_bind_vcpu
+    bind_vcpu
+    # nested_event_channel_op: EVTCHNOP_close
+    close
+    # nested_event_channel_op: EVTCHNOP_send
+    send
+    # nested_event_channel_op: EVTCHNOP_unmask
+    unmask
+}
+
 # Class grant describes pages shared by grant mappings.  Pages use the security
 # label of their owning domain.
 class grant
diff --git a/xen/xsm/flask/policy/security_classes b/xen/xsm/flask/policy/security_classes
index 50ecbabc5c..ce5d00df23 100644
--- a/xen/xsm/flask/policy/security_classes
+++ b/xen/xsm/flask/policy/security_classes
@@ -20,5 +20,6 @@ class grant
 class security
 class version
 class argo
+class nested_event
 
 # FLASK
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [Xen-devel] [RFC 9/9] x86/nested, xsm: add nested_schedop_shutdown hypercall
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (7 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 8/9] x86/nested, xsm: add nested_event_channel_op hypercall Christopher Clark
@ 2019-06-20  0:30 ` Christopher Clark
  2019-06-20  4:18 ` [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Juergen Gross
  9 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-20  0:30 UTC (permalink / raw)
  To: xen-devel
  Cc: Juergen Gross, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Rich Persaud, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Roger Pau Monné

Provides proxying to the host hypervisor for the SCHEDOP_shutdown op.
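
A minimal guest-side sketch (the _hypercall2() wrapper is a hypothetical
stand-in for the caller's hypercall mechanism; the struct and reason codes
come from the existing public sched interface):

    /* Hypothetical sketch: ask the host to shut down this nested instance. */
    static int nested_shutdown(unsigned int reason)
    {
        struct sched_shutdown s = {
            .reason = reason,        /* e.g. SHUTDOWN_poweroff */
        };

        return _hypercall2(int, nested_sched_op, SCHEDOP_shutdown, &s);
    }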

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
---
 tools/flask/policy/modules/dom0.te  |  1 +
 xen/arch/x86/guest/hypercall_page.S |  1 +
 xen/arch/x86/guest/xen-nested.c     | 25 +++++++++++++++++++++++++
 xen/arch/x86/hypercall.c            |  1 +
 xen/arch/x86/pv/hypercall.c         |  1 +
 xen/include/public/xen.h            |  1 +
 xen/include/xen/hypercall.h         |  4 ++++
 xen/include/xsm/dummy.h             |  7 +++++++
 xen/include/xsm/xsm.h               |  7 +++++++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 ++++++
 11 files changed, 55 insertions(+)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index ba3c5ad63d..23911aef4d 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -51,6 +51,7 @@ allow dom0_t nestedxen_t:grant query;
 allow dom0_t nestedxen_t:nested_event {
     alloc_unbound bind_vcpu close send unmask
 };
+allow dom0_t nestedxen_t:domain { shutdown };
 
 # These permissions allow using the FLASK security server to compute access
 # checks locally, which could be used by a domain or service (such as xenstore)
diff --git a/xen/arch/x86/guest/hypercall_page.S b/xen/arch/x86/guest/hypercall_page.S
index 64f1885629..28a631e850 100644
--- a/xen/arch/x86/guest/hypercall_page.S
+++ b/xen/arch/x86/guest/hypercall_page.S
@@ -65,6 +65,7 @@ DECLARE_HYPERCALL(nested_memory_op)
 DECLARE_HYPERCALL(nested_hvm_op)
 DECLARE_HYPERCALL(nested_grant_table_op)
 DECLARE_HYPERCALL(nested_event_channel_op)
+DECLARE_HYPERCALL(nested_sched_op)
 
 DECLARE_HYPERCALL(arch_0)
 DECLARE_HYPERCALL(arch_1)
diff --git a/xen/arch/x86/guest/xen-nested.c b/xen/arch/x86/guest/xen-nested.c
index babf4bf783..4f33d5d9be 100644
--- a/xen/arch/x86/guest/xen-nested.c
+++ b/xen/arch/x86/guest/xen-nested.c
@@ -26,6 +26,7 @@
 #include <public/grant_table.h>
 #include <public/hvm/hvm_op.h>
 #include <public/memory.h>
+#include <public/sched.h>
 #include <public/version.h>
 #include <public/xen.h>
 
@@ -323,3 +324,27 @@ long do_nested_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     return ret;
 }
+
+long do_nested_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct sched_shutdown sched_shutdown;
+    long ret;
+
+    if ( !xen_nested )
+        return -ENOSYS;
+
+    if ( cmd != SCHEDOP_shutdown )
+    {
+        gprintk(XENLOG_ERR, "Nested: sched op %d not supported.\n", cmd);
+        return -EOPNOTSUPP;
+    }
+
+    ret = xsm_nested_schedop_shutdown(XSM_PRIV, current->domain);
+    if ( ret )
+        return ret;
+
+    if ( copy_from_guest(&sched_shutdown, arg, 1) )
+        return -EFAULT;
+
+    return xen_hypercall_sched_op(cmd, &sched_shutdown);
+}
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 752955ac81..8bf1d74f14 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -79,6 +79,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(nested_hvm_op, 2),
     ARGS(nested_grant_table_op, 3),
     ARGS(nested_event_channel_op, 2),
+    ARGS(nested_sched_op, 2),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index 6b1ae74d64..4874e701e0 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -90,6 +90,7 @@ const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(nested_hvm_op),
     HYPERCALL(nested_grant_table_op),
     HYPERCALL(nested_event_channel_op),
+    HYPERCALL(nested_sched_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 5fb322e882..62a23310e7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -126,6 +126,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_nested_hvm_op        44
 #define __HYPERVISOR_nested_grant_table_op 45
 #define __HYPERVISOR_nested_event_channel_op 46
+#define __HYPERVISOR_nested_sched_op      47
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index bd739c2dc7..96d6ba2cd2 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -171,6 +171,10 @@ extern long do_nested_grant_table_op(
 extern long do_nested_event_channel_op(
     int cmd,
     XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern long do_nested_sched_op(
+    int cmd,
+    XEN_GUEST_HANDLE_PARAM(void) arg);
 #endif
 
 #ifdef CONFIG_COMPAT
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f8162f3308..200f097d50 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -776,6 +776,13 @@ static XSM_INLINE int xsm_nested_event_channel_op(XSM_DEFAULT_ARG
     XSM_ASSERT_ACTION(XSM_PRIV);
     return xsm_default_action(action, d, NULL);
 }
+
+static XSM_INLINE int xsm_nested_schedop_shutdown(XSM_DEFAULT_ARG
+                                                  const struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, d, NULL);
+}
 #endif
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 81cb67b89b..1cb70d427b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -193,6 +193,7 @@ struct xsm_operations {
     int (*nested_hvm_op) (const struct domain *d, unsigned int cmd);
     int (*nested_grant_query_size) (const struct domain *d);
     int (*nested_event_channel_op) (const struct domain *d, unsigned int cmd);
+    int (*nested_schedop_shutdown) (const struct domain *d);
 #endif
 };
 
@@ -763,6 +764,12 @@ static inline int xsm_nested_event_channel_op(xsm_default_t def,
     return xsm_ops->nested_event_channel_op(d, cmd);
 }
 
+static inline int xsm_nested_schedop_shutdown(xsm_default_t def,
+                                              const struct domain *d)
+{
+    return xsm_ops->nested_schedop_shutdown(d);
+}
+
 #endif /* CONFIG_XEN_NESTED */
 
 #endif /* XSM_NO_WRAPPERS */
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 91db264ddc..ac6e5fdd49 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -163,5 +163,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, nested_hvm_op);
     set_to_dummy_if_null(ops, nested_grant_query_size);
     set_to_dummy_if_null(ops, nested_event_channel_op);
+    set_to_dummy_if_null(ops, nested_schedop_shutdown);
 #endif
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 27bfa01559..385ae1458c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1828,6 +1828,11 @@ static int flask_nested_event_channel_op(const struct domain *d,
     return domain_has_nested_perm(d, SECCLASS_NESTED_EVENT, perm);
 }
 
+static int flask_nested_schedop_shutdown(const struct domain *d)
+{
+    return domain_has_nested_perm(d, SECCLASS_DOMAIN, DOMAIN__SHUTDOWN);
+}
+
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
@@ -1975,6 +1980,7 @@ static struct xsm_operations flask_ops = {
     .nested_hvm_op = flask_nested_hvm_op,
     .nested_grant_query_size = flask_nested_grant_query_size,
     .nested_event_channel_op = flask_nested_event_channel_op,
+    .nested_schedop_shutdown = flask_nested_schedop_shutdown,
 #endif
 };
 
-- 
2.17.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
  2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
                   ` (8 preceding siblings ...)
  2019-06-20  0:30 ` [Xen-devel] [RFC 9/9] x86/nested, xsm: add nested_schedop_shutdown hypercall Christopher Clark
@ 2019-06-20  4:18 ` Juergen Gross
  2019-06-20  8:39   ` Paul Durrant
  9 siblings, 1 reply; 13+ messages in thread
From: Juergen Gross @ 2019-06-20  4:18 UTC (permalink / raw)
  To: Christopher Clark, xen-devel
  Cc: Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Rich Persaud,
	Ankur Arora, Tim Deegan, Julien Grall, Jan Beulich,
	Daniel De Graaf, Christopher Clark, Roger Pau Monné

On 20.06.19 02:30, Christopher Clark wrote:
> This RFC patch series adds a new hypervisor interface to support running
> a set of PV front end device drivers within dom0 of a guest Xen running
> on Xen.
> 
> A practical deployment scenario is a system running PV guest VMs that use
> unmodified Xen PV device drivers, on a guest Xen hypervisor with a dom0
> using PV drivers itself, all within a HVM guest of a hosting Xen
> hypervisor (eg. from a cloud provider). Multiple PV guest VMs can reside
> within a single cloud instance; guests can be live-migrated between
> cloud instances that run nested Xen, and virtual machine introspection
> of guests can be performed without requiring cloud provider support.
> 
> The name "The Xen Blanket" was given by researchers from IBM and Cornell
> when the original work was published at the ACM Eurosys 2012 conference.
>      http://www1.unine.ch/eurosys2012/program/conference.html
>      https://dl.acm.org/citation.cfm?doid=2168836.2168849
> This patch series is a reimplementation of this architecture on modern Xen
> by Star Lab.
> 
> A patch to the Linux kernel to add device drivers using this blanket interface
> is at:
>      https://github.com/starlab-io/xenblanket-linux
> (This is an example, enabling operation and testing of a Xen Blanket nested
> system. Further work would be necessary for Linux upstreaming.)
> Relevant other current Linux work is occurring here:
>      https://lkml.org/lkml/2019/4/8/67
>      https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg00743.html
> 
> thanks,
> 
> Christopher
> 
> Christopher Clark (9):
>    x86/guest: code movement to separate Xen detection from guest
>      functions
>    x86: Introduce Xen detection as separate logic from Xen Guest support.
>    x86/nested: add nested_xen_version hypercall
>    XSM: Add hook for nested xen version op; revises non-nested version op
>    x86/nested, xsm: add nested_memory_op hypercall
>    x86/nested, xsm: add nested_hvm_op hypercall
>    x86/nested, xsm: add nested_grant_table_op hypercall
>    x86/nested, xsm: add nested_event_channel_op hypercall
>    x86/nested, xsm: add nested_schedop_shutdown hypercall
> 
>   tools/flask/policy/modules/dom0.te           |  14 +-
>   tools/flask/policy/modules/guest_features.te |   5 +-
>   tools/flask/policy/modules/xen.te            |   3 +
>   tools/flask/policy/policy/initial_sids       |   3 +
>   xen/arch/x86/Kconfig                         |  33 +-
>   xen/arch/x86/Makefile                        |   2 +-
>   xen/arch/x86/apic.c                          |   4 +-
>   xen/arch/x86/guest/Makefile                  |   4 +
>   xen/arch/x86/guest/hypercall_page.S          |   6 +
>   xen/arch/x86/guest/xen-guest.c               | 311 ++++++++++++++++
>   xen/arch/x86/guest/xen-nested.c              | 350 +++++++++++++++++++
>   xen/arch/x86/guest/xen.c                     | 264 +-------------
>   xen/arch/x86/hypercall.c                     |   8 +
>   xen/arch/x86/pv/hypercall.c                  |   8 +
>   xen/arch/x86/setup.c                         |   3 +
>   xen/include/asm-x86/guest/hypercall.h        |   7 +-
>   xen/include/asm-x86/guest/xen.h              |  36 +-
>   xen/include/public/xen.h                     |   6 +
>   xen/include/xen/hypercall.h                  |  33 ++
>   xen/include/xsm/dummy.h                      |  48 ++-
>   xen/include/xsm/xsm.h                        |  49 +++
>   xen/xsm/dummy.c                              |   8 +
>   xen/xsm/flask/hooks.c                        | 133 ++++++-
>   xen/xsm/flask/policy/access_vectors          |  26 ++
>   xen/xsm/flask/policy/initial_sids            |   1 +
>   xen/xsm/flask/policy/security_classes        |   1 +
>   26 files changed, 1086 insertions(+), 280 deletions(-)
>   create mode 100644 xen/arch/x86/guest/xen-guest.c
>   create mode 100644 xen/arch/x86/guest/xen-nested.c
> 

I think we should discuss that topic at the Xen developer summit in
Chicago. Suddenly there seems to be a rush in nested Xen development
and related areas, so syncing the efforts seems to be a good idea.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
  2019-06-20  4:18 ` [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Juergen Gross
@ 2019-06-20  8:39   ` Paul Durrant
  2019-06-21  5:51     ` Christopher Clark
  0 siblings, 1 reply; 13+ messages in thread
From: Paul Durrant @ 2019-06-20  8:39 UTC (permalink / raw)
  To: 'Juergen Gross', Christopher Clark, xen-devel
  Cc: Julien Grall, Stefano Stabellini, Wei Liu, Konrad Rzeszutek Wilk,
	Andrew Cooper, Tim (Xen.org),
	George Dunlap, Ankur Arora, Rich Persaud, Jan Beulich,
	Ian Jackson, Daniel De Graaf, Christopher Clark, Roger Pau Monne

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: 20 June 2019 05:18
> To: Christopher Clark <christopher.w.clark@gmail.com>; xen-devel@lists.xenproject.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Rich Persaud <persaur@gmail.com>;
> Ankur Arora <ankur.a.arora@oracle.com>; Tim (Xen.org) <tim@xen.org>; Julien Grall
> <julien.grall@arm.com>; Jan Beulich <jbeulich@suse.com>; Daniel De Graaf <dgdegra@tycho.nsa.gov>;
> Christopher Clark <christopher.clark@starlab.io>; Roger Pau Monne <roger.pau@citrix.com>
> Subject: Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
> 
> On 20.06.19 02:30, Christopher Clark wrote:
> > [cover letter and diffstat snipped for brevity]
> 
> I think we should discuss that topic at the Xen developer summit in
> Chicago. Suddenly there seems to be a rush in nested Xen development
> and related areas, so syncing the efforts seems to be a good idea.
> 

+1 from me on that...

  Paul

> 
> Juergen
> 


* Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
  2019-06-20  8:39   ` Paul Durrant
@ 2019-06-21  5:51     ` Christopher Clark
  0 siblings, 0 replies; 13+ messages in thread
From: Christopher Clark @ 2019-06-21  5:51 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Juergen Gross, Julien Grall, Stefano Stabellini, Wei Liu,
	Konrad Rzeszutek Wilk, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Ankur Arora, Rich Persaud, Jan Beulich,
	Ian Jackson, xen-devel, Daniel De Graaf, Christopher Clark,
	Roger Pau Monne

On Thu, Jun 20, 2019 at 1:39 AM Paul Durrant <Paul.Durrant@citrix.com> wrote:
>
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> > Sent: 20 June 2019 05:18
> > To: Christopher Clark <christopher.w.clark@gmail.com>; xen-devel@lists.xenproject.org
> > Cc: Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com>; George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper
> > <Andrew.Cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Rich Persaud <persaur@gmail.com>;
> > Ankur Arora <ankur.a.arora@oracle.com>; Tim (Xen.org) <tim@xen.org>; Julien Grall
> > <julien.grall@arm.com>; Jan Beulich <jbeulich@suse.com>; Daniel De Graaf <dgdegra@tycho.nsa.gov>;
> > Christopher Clark <christopher.clark@starlab.io>; Roger Pau Monne <roger.pau@citrix.com>
> > Subject: Re: [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen
> >
> > On 20.06.19 02:30, Christopher Clark wrote:
> > > [cover letter and diffstat snipped for brevity]
> >
> > I think we should discuss that topic at the Xen developer summit in
> > Chicago. Suddenly there seems to be a rush in nested Xen development
> > and related areas, so syncing the efforts seems to be a good idea.
>
> +1 from me on that...

Excellent -- thanks.

Christopher


>
>   Paul
>
> >
> > Juergen
> >



end of thread, other threads:[~2019-06-21  5:52 UTC | newest]

Thread overview: 13+ messages
-- links below jump to the message on this page --
2019-06-20  0:30 [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 1/9] x86/guest: code movement to separate Xen detection from guest functions Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 2/9] x86: Introduce Xen detection as separate logic from Xen Guest support Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 3/9] x86/nested: add nested_xen_version hypercall Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 4/9] XSM: Add hook for nested xen version op; revises non-nested version op Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 5/9] x86/nested, xsm: add nested_memory_op hypercall Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 6/9] x86/nested, xsm: add nested_hvm_op hypercall Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 7/9] x86/nested, xsm: add nested_grant_table_op hypercall Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 8/9] x86/nested, xsm: add nested_event_channel_op hypercall Christopher Clark
2019-06-20  0:30 ` [Xen-devel] [RFC 9/9] x86/nested, xsm: add nested_schedop_shutdown hypercall Christopher Clark
2019-06-20  4:18 ` [Xen-devel] [RFC 0/9] The Xen Blanket: hypervisor interface for PV drivers on nested Xen Juergen Gross
2019-06-20  8:39   ` Paul Durrant
2019-06-21  5:51     ` Christopher Clark
