* [PATCH v2 0/3] Refactoring mempaging code from xenpaging to libxc and few updates.
@ 2014-06-16 18:20 Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
                   ` (2 more replies)
  0 siblings, 3 replies; 24+ messages in thread
From: Dushyant Behl @ 2014-06-16 18:20 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl, Ian Campbell

[PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
Refactors mempaging ring setup and teardown code from xenpaging to libxc.
The code is to be compiled into libxenguest.

[PATCH v2 2/3] [GSOC14] Replacing Deprecated Function Calls.
Replaces calls to deprecated function xc_map_foreign_batch with calls to
xc_map_foreign_bulk.

[PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing mempaging and ring setup.
Fix for a known race condition bug in mempaging ring setup code.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-16 18:20 [PATCH v2 0/3] Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
@ 2014-06-16 18:20 ` Dushyant Behl
  2014-06-27 10:27   ` Ian Campbell
  2014-06-27 10:39   ` Ian Campbell
  2014-06-16 18:20 ` [PATCH v2 2/3] [GSOC14] Replacing Deprecated Function Calls Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging Dushyant Behl
  2 siblings, 2 replies; 24+ messages in thread
From: Dushyant Behl @ 2014-06-16 18:20 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl, Ian Campbell

This patch is part of the work done under the gsoc project -
Lazy Restore Using Memory Paging.

This patch moves the code to initialize and teardown mempaging from xenpaging
to libxc. The code refractored from xenpaging is the code which sets up paging,
initializes a shared ring and event channel to communicate with xen. This
communication is done between the hypervisor and the dom0 tool which performs
the activity of pager. Another routine is also added by this patch which will
teardown the ring and mem paging setup. The xenpaging code is changed to use
the newly created routines and is tested to properly build and work with this code.

The refractoring is done so that any tool which will act as pager in
lazy restore or use memory paging can use a same routines to initialize mempaging.
This refactoring will also allow any future (in-tree) tools to use mempaging.

The refractored code in xc_mem_paging_ring_setup is to be compiled into
libxenguest.
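
As a rough illustration, a dom0 pager built on these routines would take the shape below. This is a hedged sketch only, not code from the patch: `pager_attach` and its error handling are hypothetical, while the `xc_mem_paging_ring_setup`/`xc_mem_paging_ring_teardown` signatures are the ones this patch adds to xenctrl.h. It cannot run outside a privileged Xen domain, so treat it as pseudocode in C syntax.

```c
/*
 * Hypothetical pager skeleton -- illustrative only.
 * The setup/teardown signatures match those introduced by this patch.
 */
#include <xenctrl.h>

static int pager_attach(xc_interface *xch, domid_t domid)
{
    xc_evtchn *xce;
    void *ring_page = NULL;
    int port = -1;
    uint32_t evtchn_port;
    mem_event_back_ring_t back_ring;

    if ( (xce = xc_evtchn_open(NULL, 0)) == NULL )
        return -1;

    /* One call replaces the open-coded map/enable/bind/ring-init
     * sequence that previously lived in xenpaging_init(). */
    if ( xc_mem_paging_ring_setup(xch, xce, domid, ring_page,
                                  &port, &evtchn_port, &back_ring) )
    {
        xc_evtchn_close(xce);
        return -1;
    }

    /* ... consume mem_event requests from back_ring, notify via port ... */

    xc_mem_paging_ring_teardown(xch, xce, domid, ring_page, &port);
    xc_evtchn_close(xce);
    return 0;
}
```

As in xenpaging itself, the event channel handle is opened and closed by the caller; the helpers only bind and unbind the interdomain port.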

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: David Scott <dave.scott@citrix.com>
---
 tools/libxc/Makefile                |   2 +
 tools/libxc/xc_mem_paging_setup.c   | 166 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xenctrl.h               |  27 ++++++
 tools/ocaml/libs/xc/xenctrl_stubs.c |  11 ++-
 tools/xenpaging/Makefile            |   4 +-
 tools/xenpaging/xenpaging.c         | 108 ++++-------------------
 6 files changed, 222 insertions(+), 96 deletions(-)
 create mode 100644 tools/libxc/xc_mem_paging_setup.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index a74b19e..6cf14f0 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
 GUEST_SRCS-y                 += xc_dom_binloader.c
 GUEST_SRCS-y                 += xc_dom_compat_linux.c
 
+GUEST_SRCS-y                 += xc_mem_paging_setup.c
+
 GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
new file mode 100644
index 0000000..4e5fe73
--- /dev/null
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -0,0 +1,166 @@
+/*
+ * tools/libxc/xc_mem_paging_setup.c
+ *
+ * Routines to initialize and teardown memory paging.
+ * Create shared ring and event channels to communicate
+ * with the hypervisor.
+ *
+ * Copyright (c) 2014 Dushyant Behl
+ * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include "xc_private.h"
+#include <xen/event_channel.h>
+#include <xen/mem_event.h>
+
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function will return zero on successful completion and will
+ * return -1 on failure at any intermediate step setting up errno
+ * properly.
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch,
+                             xc_evtchn *xce_handle,
+                             domid_t domain_id,
+                             void *ring_page,
+                             int *port,
+                             uint32_t *evtchn_port,
+                             mem_event_back_ring_t *back_ring)
+{
+    int rc;
+    uint64_t ring_pfn, mmap_pfn;
+
+    /* Map the ring page */
+    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
+    mmap_pfn = ring_pfn;
+    ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    {
+        /* Map failed, populate ring page */
+        rc = xc_domain_populate_physmap_exact(xch, domain_id,
+                                              1, 0, 0, &ring_pfn);
+        if ( rc != 0 )
+        {
+            PERROR("Failed to populate ring gfn\n");
+            return -1;
+        }
+
+        mmap_pfn = ring_pfn;
+        ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+
+        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        {
+            PERROR("Could not map the ring page\n");
+            return -1;
+        }
+    }
+
+    /* Initialise Xen */
+    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
+    if ( rc != 0 )
+    {
+        switch ( errno ) {
+            case EBUSY:
+                ERROR("mempaging is (or was) active on this domain");
+                break;
+            case ENODEV:
+                ERROR("mempaging requires Hardware Assisted Paging");
+                break;
+            case EMLINK:
+                ERROR("mempaging not supported while iommu passthrough is enabled");
+                break;
+            case EXDEV:
+                ERROR("mempaging not supported in a PoD guest");
+                break;
+            default:
+                PERROR("mempaging: error initialising shared page");
+                break;
+        }
+        return -1;
+    }
+
+    /* Bind event notification */
+    rc = xc_evtchn_bind_interdomain(xce_handle, domain_id, *evtchn_port);
+    if ( rc < 0 )
+    {
+        PERROR("Failed to bind event channel");
+        return -1;
+    }
+    *port = rc;
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+
+    /* Now that the ring is set, remove it from the guest's physmap */
+    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
+    {
+        PERROR("Failed to remove ring from guest physmap");
+        return -1;
+    }
+
+    return 0;
+}
+
+/*
+ * Mem paging ring and event channel teardown routine.
+ * The function call invalidates the arguments ring_page and port.
+ * In case of an error at intermediate step the function will setup
+ * errno properly.
+ */
+void xc_mem_paging_ring_teardown(xc_interface *xch,
+                                 xc_evtchn *xce_handle,
+                                 domid_t domain_id,
+                                 void *ring_page,
+                                 int *port)
+{
+    int rc;
+
+    /* Remove the ring page. */
+    munmap(ring_page, PAGE_SIZE);
+    ring_page = NULL;
+
+    /* Disable memory paging. */
+    rc = xc_mem_paging_disable(xch, domain_id);
+    if ( rc != 0 )
+    {
+        PERROR("Error while disabling paging in xen");
+    }
+
+    /* Unbind the event channel. */
+    rc = xc_evtchn_unbind(xce_handle, *port);
+    if ( rc != 0 )
+    {
+        PERROR("Error unbinding event port");
+    }
+
+    /* Set the port to invalid. */
+    *port = -1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 02129f7..c313cca 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -47,6 +47,7 @@
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
 #include <xen/kexec.h>
+#include <xen/mem_event.h>
 
 #include "xentoollog.h"
 
@@ -2039,6 +2040,32 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_load(xc_interface *xch, domid_t domain_id, 
                         unsigned long gfn, void *buffer);
 
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function will return zero on successful completion and will
+ * return -1 on failure at any intermediate step setting up errno
+ * properly. 
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch,
+                             xc_evtchn *xce_handle,
+                             domid_t domain_id,
+                             void *ring_page,
+                             int *port,
+                             uint32_t *evtchn_port,
+                             mem_event_back_ring_t *back_ring);
+/*
+ * Mem paging ring and event channel teardown routine.
+ * The function call invalidates the arguments ring_page and port.
+ * In case of an error at intermediate step the function will setup
+ * errno properly.
+ */
+void xc_mem_paging_ring_teardown(xc_interface *xch,
+                                 xc_evtchn *xce_handle,
+                                 domid_t domain_id,
+                                 void *ring_page,
+                                 int *port);
 /** 
  * Access tracking operations.
  * Supported only on Intel EPT 64 bit processors.
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index ff29b47..37a4db7 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
 	CAMLreturn(Val_unit);
 }
 
-
-#define RING_SIZE 32768
-static char ring[RING_SIZE];
+/*
+ * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
+ * collides with the xen shared ring definitions in io/ring.h
+ */
+#define BUF_RING_SIZE 32768
+static char ring[BUF_RING_SIZE];
 
 CAMLprim value stub_xc_readconsolering(value xch)
 {
-	unsigned int size = RING_SIZE - 1;
+	unsigned int size = BUF_RING_SIZE - 1;
 	char *ring_ptr = ring;
 	int retval;
 
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index 548d9dd..ea370d3 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -1,8 +1,8 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
-LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
+CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
+LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
 
 POLICY    = default
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 5ef2f09..ee56558 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     xentoollog_logger *dbg = NULL;
     char *p;
     int rc;
-    unsigned long ring_pfn, mmap_pfn;
 
     /* Allocate memory */
     paging = calloc(1, sizeof(struct xenpaging));
@@ -338,61 +337,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
-                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
-    mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-    {
-        /* Map failed, populate ring page */
-        rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
-                                              1, 0, 0, &ring_pfn);
-        if ( rc != 0 )
-        {
-            PERROR("Failed to populate ring gfn\n");
-            goto err;
-        }
-
-        mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                    PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-        {
-            PERROR("Could not map the ring page\n");
-            goto err;
-        }
-    }
-    
-    /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
-    if ( rc != 0 )
-    {
-        switch ( errno ) {
-            case EBUSY:
-                ERROR("xenpaging is (or was) active on this domain");
-                break;
-            case ENODEV:
-                ERROR("xenpaging requires Hardware Assisted Paging");
-                break;
-            case EMLINK:
-                ERROR("xenpaging not supported while iommu passthrough is enabled");
-                break;
-            case EXDEV:
-                ERROR("xenpaging not supported in a PoD guest");
-                break;
-            default:
-                PERROR("Error initialising shared page");
-                break;
-        }
-        goto err;
-    }
-
     /* Open event channel */
     paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
     if ( paging->mem_event.xce_handle == NULL )
@@ -401,28 +345,21 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
-                                    paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
-    if ( rc < 0 )
+    /* Initialize paging by setting up ring and event channel. */
+    rc = xc_mem_paging_ring_setup(paging->xc_handle,
+                                  paging->mem_event.xce_handle,
+                                  paging->mem_event.domain_id,
+                                  paging->mem_event.ring_page,
+                                  &paging->mem_event.port,
+                                  &paging->mem_event.evtchn_port,
+                                  &paging->mem_event.back_ring);
+    if( rc )
     {
-        PERROR("Failed to bind event channel");
+        PERROR("Could not initialize mem paging\n");
         goto err;
     }
 
-    paging->mem_event.port = rc;
-
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
-                   PAGE_SIZE);
-
-    /* Now that the ring is set, remove it from the guest's physmap */
-    if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
-        PERROR("Failed to remove ring from guest physmap");
+    DPRINTF("ring and event channel setup successful");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
@@ -523,21 +460,13 @@ static void xenpaging_teardown(struct xenpaging *paging)
     xs_unwatch(paging->xs_handle, "@releaseDomain", watch_token);
 
     paging->xc_handle = NULL;
-    /* Tear down domain paging in Xen */
-    munmap(paging->mem_event.ring_page, PAGE_SIZE);
-    rc = xc_mem_paging_disable(xch, paging->mem_event.domain_id);
-    if ( rc != 0 )
-    {
-        PERROR("Error tearing down domain paging in xen");
-    }
 
-    /* Unbind VIRQ */
-    rc = xc_evtchn_unbind(paging->mem_event.xce_handle, paging->mem_event.port);
-    if ( rc != 0 )
-    {
-        PERROR("Error unbinding event port");
-    }
-    paging->mem_event.port = -1;
+    /* Tear down domain paging and ring setup in Xen */
+    xc_mem_paging_ring_teardown(xch, paging->mem_event.xce_handle,
+                                paging->mem_event.domain_id,
+                                paging->mem_event.ring_page,
+                                &paging->mem_event.port);
+    DPRINTF("tear down ring and mempaging successful");
 
     /* Close event channel */
     rc = xc_evtchn_close(paging->mem_event.xce_handle);
@@ -545,8 +474,7 @@ static void xenpaging_teardown(struct xenpaging *paging)
     {
         PERROR("Error closing event channel");
     }
-    paging->mem_event.xce_handle = NULL;
-    
+
     /* Close connection to xenstore */
     xs_close(paging->xs_handle);
 
-- 
1.9.1


* [PATCH v2 2/3] [GSOC14] Replacing Deprecated Function Calls.
  2014-06-16 18:20 [PATCH v2 0/3] Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
@ 2014-06-16 18:20 ` Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging Dushyant Behl
  2 siblings, 0 replies; 24+ messages in thread
From: Dushyant Behl @ 2014-06-16 18:20 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl, Ian Campbell

This patch is part of the work done under the gsoc project -
Lazy Restore Using Memory Paging.

This patch replaces the calls to the deprecated function
xc_map_foreign_batch with calls to xc_map_foreign_bulk
in tools/libxc/xc_mem_paging_setup.c.
The function xc_map_foreign_bulk has a cleaner error reporting
interface than xc_map_foreign_batch.
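
The difference between the two interfaces can be sketched in isolation. The functions below are plain-C stand-ins invented for illustration, not libxc code, and `PFINFO_XTAB` is a local stand-in for `XEN_DOMCTL_PFINFO_XTAB` from Xen's domctl.h: the batch style reports a per-page failure in-band by clobbering the pfn slot with an error marker, whereas the bulk style reports 0 or -errno per page in a separate array and leaves the pfn list intact.

```c
/* Plain-C stand-ins (not libxc code) contrasting the two error styles.
 * PFINFO_XTAB is a local stand-in for XEN_DOMCTL_PFINFO_XTAB. */
#include <stdint.h>

#define PFINFO_XTAB (0xfULL << 28)

/* batch style: a failed page is reported in-band, by OR-ing the error
 * marker into the pfn slot itself -- the original pfn value is lost. */
static void map_batch_style(uint64_t *pfns, int n, int fail_idx)
{
    for (int i = 0; i < n; i++)
        if (i == fail_idx)
            pfns[i] |= PFINFO_XTAB;
}

/* bulk style: failures go to a separate err[] array (0 or -errno per
 * page), and the pfn array is left untouched. */
static void map_bulk_style(const uint64_t *pfns, int n, int *err, int fail_idx)
{
    (void)pfns;
    for (int i = 0; i < n; i++)
        err[i] = (i == fail_idx) ? -22 /* -EINVAL */ : 0;
}
```

This is why the patch below can replace the `mmap_pfn & XEN_DOMCTL_PFINFO_XTAB` test with a check of the separate `err` value.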

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_mem_paging_setup.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
index 4e5fe73..679c606 100644
--- a/tools/libxc/xc_mem_paging_setup.c
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -43,15 +43,15 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
                              uint32_t *evtchn_port,
                              mem_event_back_ring_t *back_ring)
 {
-    int rc;
+    int rc, err;
     uint64_t ring_pfn, mmap_pfn;
 
     /* Map the ring page */
     xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
     mmap_pfn = ring_pfn;
-    ring_page = xc_map_foreign_batch(xch, domain_id,
-                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    ring_page = xc_map_foreign_bulk(xch, domain_id,
+                                     PROT_READ | PROT_WRITE, &mmap_pfn, &err, 1);
+    if ( (err != 0) || (ring_page == NULL) )
     {
         /* Map failed, populate ring page */
         rc = xc_domain_populate_physmap_exact(xch, domain_id,
@@ -63,10 +63,10 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         }
 
         mmap_pfn = ring_pfn;
-        ring_page = xc_map_foreign_batch(xch, domain_id,
-                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+        ring_page = xc_map_foreign_bulk(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, &err, 1);
 
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        if ( (err != 0) || (ring_page == NULL) )
         {
             PERROR("Could not map the ring page\n");
             return -1;
-- 
1.9.1


* [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-06-16 18:20 [PATCH v2 0/3] Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
  2014-06-16 18:20 ` [PATCH v2 2/3] [GSOC14] Replacing Deprecated Function Calls Dushyant Behl
@ 2014-06-16 18:20 ` Dushyant Behl
  2014-06-27 10:33   ` Ian Campbell
  2 siblings, 1 reply; 24+ messages in thread
From: Dushyant Behl @ 2014-06-16 18:20 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl, Ian Campbell

This patch is part of the work done under the gsoc project -
Lazy Restore Using Memory Paging.

This patch is meant to fix a known race condition bug in mempaging
ring setup routines. The race conditon was between initializing
mem paging and initializing shared ring, earlier the code initialized
mem paging before removing the ring page from guest's physical map
which could enable the guest to interfere with the ring initialisation.
Now the code removes the page from the guest's physical map before
enabling mempaging so that the guest cannot clobber the ring after
we initialise it.
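
The effect of the reordering can be mimicked with a toy model (plain C, no Xen involved; `struct toy` and both setup functions are invented for illustration): the guest can only scribble on the ring page while that page is still present in its physmap, so removing the page before initialising the ring makes the initialised state stick.

```c
/* Toy model (invented for illustration; no Xen involved) of the ordering
 * fix: the "guest" can only write the ring page while that page is still
 * present in its physmap. */
#include <stdbool.h>
#include <string.h>

struct toy {
    char ring[16];
    bool in_guest_physmap;
};

/* The guest may scribble over any page it can still reach. */
static void guest_scribble(struct toy *t)
{
    if (t->in_guest_physmap)
        strcpy(t->ring, "GUEST_WROTE");
}

/* Old, racy order: initialise the ring first, remove the page
 * afterwards -- a guest write in between clobbers the ring. */
static void setup_racy(struct toy *t)
{
    t->in_guest_physmap = true;
    strcpy(t->ring, "RING_MAGIC");   /* SHARED_RING_INIT */
    guest_scribble(t);               /* guest runs before the unmap */
    t->in_guest_physmap = false;     /* decrease_reservation */
}

/* Fixed order (this patch): remove the page from the physmap first,
 * then initialise the ring -- the guest can no longer reach it. */
static void setup_fixed(struct toy *t)
{
    t->in_guest_physmap = true;
    t->in_guest_physmap = false;     /* decrease_reservation first */
    guest_scribble(t);               /* has no effect any more */
    strcpy(t->ring, "RING_MAGIC");   /* SHARED_RING_INIT */
}
```

In the racy order the ring ends up holding the guest's data; in the fixed order it still holds what the toolstack wrote.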

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
index 679c606..a4c4798 100644
--- a/tools/libxc/xc_mem_paging_setup.c
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         }
     }
 
+    /*
+     * Remove the page from guest's physical map so that the guest will not
+     * be able to break security by pretending to be toolstack for a while.
+     */
+    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, ring_page) )
+    {
+        PERROR("Failed to remove ring_page from guest physmap");
+        return -1;
+    }
+
+    DPRINTF("removed ring_page from guest physical map");
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+
+    DPRINTF("initialized shared ring");
+
     /* Initialise Xen */
     rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
     if ( rc != 0 )
@@ -106,14 +124,12 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
     }
     *port = rc;
 
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
-    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+    DPRINTF("enabled mempaging");
 
     /* Now that the ring is set, remove it from the guest's physmap */
     if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
     {
-        PERROR("Failed to remove ring from guest physmap");
+        PERROR("Failed to remove ring_pfn from guest physmap");
         return -1;
     }
 
-- 
1.9.1


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
@ 2014-06-27 10:27   ` Ian Campbell
  2014-06-27 10:37     ` Ian Campbell
  2014-06-27 10:39     ` Dushyant Behl
  2014-06-27 10:39   ` Ian Campbell
  1 sibling, 2 replies; 24+ messages in thread
From: Ian Campbell @ 2014-06-27 10:27 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
> This patch is part of the work done under the gsoc project -
> Lazy Restore Using Memory Paging.

For future patches please can you omit this, it's not really relevant to
the commit history. If you really want to include it please put it below
the --- marker. Or just mention it in the 0/N mail.

> 
> This patch moves the code to initialize and teardown mempaging from xenpaging
> to libxc. The code refractored from xenpaging is the code which sets up paging,

"refactored"


> initializes a shared ring and event channel to communicate with xen. This
> communication is done between the hypervisor and the dom0 tool which performs
> the activity of pager. Another routine is also added by this patch which will
> teardown the ring and mem paging setup. The xenpaging code is changed to use
> the newly created routines and is tested to properly build and work with this code.
> 
> The refractoring is done so that any tool which will act as pager in

"refactoring"

> lazy restore or use memory paging can use a same routines to initialize mempaging.
> This refactoring will also allow any future (in-tree) tools to use mempaging.
> 
> The refractored code in xc_mem_paging_ring_setup is to be compiled into

and again.

I'll fix these up on commit, which I'll do as soon as someone confirms
that...

> +    /* Now that the ring is set, remove it from the guest's physmap */
> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )

... the lack of a domain pause and/or clearing the ring content in this
function does not represent an issue similar to
http://xenbits.xen.org/xsa/advisory-99.html.

> +    /* Unbind the event channel. */
> +    rc = xc_evtchn_unbind(xce_handle, *port);
> +    if ( rc != 0 )
> +    {
> +        PERROR("Error unbinding event port");
> +    }

We don't in general require {}'s around single line statements.

> +/*
> + * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
> + * collides with the xen shared ring definitions in io/ring.h
> + */

This sort of comment generally belongs in the commit log.

Neither of those last two are a blocker for applying though.

Ian.


* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-06-16 18:20 ` [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging Dushyant Behl
@ 2014-06-27 10:33   ` Ian Campbell
  2014-07-20 21:49     ` Dushyant Behl
  0 siblings, 1 reply; 24+ messages in thread
From: Ian Campbell @ 2014-06-27 10:33 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
> This patch is part of the work done under the gsoc project -
> Lazy Restore Using Memory Paging.
> 
> This patch is meant to fix a known race condition bug in mempaging
> ring setup routines. The race conditon was between initializing

"condition"

> mem paging and initializing shared ring, earlier the code initialized
> mem paging before removing the ring page from guest's physical map
> which could enable the guest to interfere with the ring initialisation.
> Now the code removes the page from the guest's physical map before
> enabling mempaging so that the guest cannot clobber the ring after
> we initialise it.
> 
> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
> index 679c606..a4c4798 100644
> --- a/tools/libxc/xc_mem_paging_setup.c
> +++ b/tools/libxc/xc_mem_paging_setup.c
> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>          }
>      }
>  
> +    /*
> +     * Remove the page from guest's physical map so that the guest will not
> +     * be able to break security by pretending to be toolstack for a while.
> +     */
> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, ring_page) )

ring_page here is a void * mapping of the page, isn't it? Not a
xen_pfn_t[] array of pfns, so surely this isn't right.

> @@ -106,14 +124,12 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>      }
>      *port = rc;
>  
> -    /* Initialise ring */
> -    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> -    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> +    DPRINTF("enabled mempaging");
>  
>      /* Now that the ring is set, remove it from the guest's physmap */
>      if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )

This looks like the correct invocation to me. If you correct the one
above, though, isn't this one then redundant?

>      {
> -        PERROR("Failed to remove ring from guest physmap");
> +        PERROR("Failed to remove ring_pfn from guest physmap");
>          return -1;
>      }
>  


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-27 10:27   ` Ian Campbell
@ 2014-06-27 10:37     ` Ian Campbell
  2014-06-27 10:41       ` Andrew Cooper
  2014-06-27 10:39     ` Dushyant Behl
  1 sibling, 1 reply; 24+ messages in thread
From: Ian Campbell @ 2014-06-27 10:37 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

On Fri, 2014-06-27 at 11:27 +0100, Ian Campbell wrote:
> On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
> > This patch is part of the work done under the gsoc project -
> > Lazy Restore Using Memory Paging.
> 
> For future patches please can you omit this, it's not really relevant to
> the commit history. If you really want to include it please put it below
> the --- marker. Or just mention it in the 0/N mail.

Also please can you tag your $subject with the appropriate subsystem.
e.g.: "tools: refactor mempaging ...."

See git log for examples of how this normally looks.

> > +    /* Now that the ring is set, remove it from the guest's physmap */
> > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
> 
> ... the lack of a domain pause and/or clearing the ring content in this
> function does not represent an issue similar to
> http://xenbits.xen.org/xsa/advisory-99.html.

I found the answer to this in patch #3, but I had questions on that so I
haven't applied yet.

[...]
> Neither of those last two are a blocker for applying though.

But since it seems likely that a v3 is going to be required, please do
fix for next time.

Ian.


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
  2014-06-27 10:27   ` Ian Campbell
@ 2014-06-27 10:39   ` Ian Campbell
  1 sibling, 0 replies; 24+ messages in thread
From: Ian Campbell @ 2014-06-27 10:39 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

Sorry, there's another issue here which prevents me from applying this
series:

On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
> index 0000000..4e5fe73
> --- /dev/null
> +++ b/tools/libxc/xc_mem_paging_setup.c
> @@ -0,0 +1,166 @@
> +/*
> + * tools/libxc/xc_mem_paging_setup.c
> + *
> + * Routines to initialize and teardown memory paging.
> + * Create shared ring and event channels to communicate
> + * with the hypervisor.
> + *
> + * Copyright (c) 2014 Dushyant Behl
> + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.

This is not the license which this code originally had (xenpaging.c is
GPL2+). AFAIR the GPL does not have any clause which allows it to
convert to the LGPL in this way (unlike the converse, which is
possible).

Have you had permission from the existing copyright holders to relicense
this code? If so then this needs to be spelled out explicitly in the
commit log. If not then you need to explicitly seek permission.

Ian.


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-27 10:27   ` Ian Campbell
  2014-06-27 10:37     ` Ian Campbell
@ 2014-06-27 10:39     ` Dushyant Behl
  2014-06-27 12:39       ` Ian Campbell
  1 sibling, 1 reply; 24+ messages in thread
From: Dushyant Behl @ 2014-06-27 10:39 UTC (permalink / raw)
  To: Ian Campbell
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

>> This patch is part of the work done under the gsoc project -
>> Lazy Restore Using Memory Paging.
>
> For future patches please can you omit this, it's not really relevant to
> the commit history. If you really want to include it please put it below
> the --- marker. Or just mention it in the 0/N mail.
>
>>
>> This patch moves the code to initialize and teardown mempaging from xenpaging
>> to libxc. The code refractored from xenpaging is the code which sets up paging,
>
> "refactored"
>
>
>> initializes a shared ring and event channel to communicate with xen. This
>> communication is done between the hypervisor and the dom0 tool which performs
>> the activity of pager. Another routine is also added by this patch which will
>> teardown the ring and mem paging setup. The xenpaging code is changed to use
>> the newly created routines and is tested to properly build and work with this code.
>>
>> The refractoring is done so that any tool which will act as pager in
>
> "refactoring"
>
>> lazy restore or use memory paging can use a same routines to initialize mempaging.
>> This refactoring will also allow any future (in-tree) tools to use mempaging.
>>
>> The refractored code in xc_mem_paging_ring_setup is to be compiled into
>
> and again.
>
> I'll fix these up on commit, which I'll do as soon as someone confirms
> that...
>
>> +    /* Now that the ring is set, remove it from the guest's physmap */
>> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
>
> ... the lack of a domain pause and/or clearing the ring content in this
> function does not represent an issue similar to
> http://xenbits.xen.org/xsa/advisory-99.html.
>
>> +    /* Unbind the event channel. */
>> +    rc = xc_evtchn_unbind(xce_handle, *port);
>> +    if ( rc != 0 )
>> +    {
>> +        PERROR("Error unbinding event port");
>> +    }
>
> We don't in general require {}'s around single line statements.
>
>> +/*
>> + * The name is kept BUF_RING_SIZE, because the name RING_SIZE
>> + * collides with the xen shared ring definitions in io/ring.h
>> + */
>
> This sort of comment generally belongs in the commit log.
>
> Neither of those last two are a blocker for applying though.

Ian,

Please don't apply the patch yet. I have found a small but significant
mistake in it.
I'll send a corrected patch with these things updated as well.

Thanks and Regards,
Dushyant Behl


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-27 10:37     ` Ian Campbell
@ 2014-06-27 10:41       ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2014-06-27 10:41 UTC (permalink / raw)
  To: Ian Campbell, Dushyant Behl
  Cc: xen-devel, Ian Jackson, David Scott, Andres Lagar Cavilla,
	Stefano Stabellini

On 27/06/14 11:37, Ian Campbell wrote:
> On Fri, 2014-06-27 at 11:27 +0100, Ian Campbell wrote:
>> On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>>
>>> +    /* Now that the ring is set, remove it from the guest's physmap */
>>> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
>> ... the lack of a domain pause and/or clearing the ring content in this
>> function does not represent an issue similar to
>> http://xenbits.xen.org/xsa/advisory-99.html.
> I found the answer to this in patch #3, but I had questions on that so I
> haven't applied yet.

Doesn't c/s 6ae2df93c277, the fix for XSA-99, change all of this anyway?

Patch #2 still looks valid, albeit against xc_mem_event.c, but #1 and #3
appear to have already been done differently.

~Andrew


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-27 10:39     ` Dushyant Behl
@ 2014-06-27 12:39       ` Ian Campbell
  2014-06-28  3:32         ` Dushyant Behl
  0 siblings, 1 reply; 24+ messages in thread
From: Ian Campbell @ 2014-06-27 12:39 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

On Fri, 2014-06-27 at 16:09 +0530, Dushyant Behl wrote:
> Please don't apply the patch yet. I have found a very small yet big
> mistake with the patch.
> I'll send the correct patch along with these things updated correctly.

OK.

In the meantime I noticed that it doesn't compile for 32-bit either:

xc_mem_paging_setup.c: In function 'xc_mem_paging_ring_setup':
xc_mem_paging_setup.c:50:5: error: passing argument 4 of 'xc_get_hvm_param' from incompatible pointer type [-Werror]
In file included from xc_private.h:32:0,
                 from xc_mem_paging_setup.c:26:
xenctrl.h:1867:5: note: expected 'long unsigned int *' but argument is of type 'uint64_t *'
xc_mem_paging_setup.c:53:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
In file included from xc_private.h:32:0,
                 from xc_mem_paging_setup.c:26:
xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
xc_mem_paging_setup.c:58:47: error: passing argument 6 of 'xc_domain_populate_physmap_exact' from incompatible pointer type [-Werror]
In file included from xc_private.h:32:0,
                 from xc_mem_paging_setup.c:26:
xenctrl.h:1326:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
xc_mem_paging_setup.c:67:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
In file included from xc_private.h:32:0,
                 from xc_mem_paging_setup.c:26:
xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
xc_mem_paging_setup.c:114:5: error: passing argument 5 of 'xc_domain_decrease_reservation_exact' from incompatible pointer type [-Werror]
In file included from xc_private.h:32:0,
                 from xc_mem_paging_setup.c:26:
xenctrl.h:1307:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
cc1: all warnings being treated as errors


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-27 12:39       ` Ian Campbell
@ 2014-06-28  3:32         ` Dushyant Behl
  2014-06-30 10:37           ` Ian Campbell
  0 siblings, 1 reply; 24+ messages in thread
From: Dushyant Behl @ 2014-06-28  3:32 UTC (permalink / raw)
  To: Ian Campbell
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

Ian,

> In the meantime I noticed that it doesn't compile for 32-bit either:
>
> xc_mem_paging_setup.c: In function 'xc_mem_paging_ring_setup':
> xc_mem_paging_setup.c:50:5: error: passing argument 4 of 'xc_get_hvm_param' from incompatible pointer type [-Werror]
> In file included from xc_private.h:32:0,
>                  from xc_mem_paging_setup.c:26:
> xenctrl.h:1867:5: note: expected 'long unsigned int *' but argument is of type 'uint64_t *'
> xc_mem_paging_setup.c:53:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
> In file included from xc_private.h:32:0,
>                  from xc_mem_paging_setup.c:26:
> xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> xc_mem_paging_setup.c:58:47: error: passing argument 6 of 'xc_domain_populate_physmap_exact' from incompatible pointer type [-Werror]
> In file included from xc_private.h:32:0,
>                  from xc_mem_paging_setup.c:26:
> xenctrl.h:1326:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> xc_mem_paging_setup.c:67:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
> In file included from xc_private.h:32:0,
>                  from xc_mem_paging_setup.c:26:
> xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> xc_mem_paging_setup.c:114:5: error: passing argument 5 of 'xc_domain_decrease_reservation_exact' from incompatible pointer type [-Werror]
> In file included from xc_private.h:32:0,
>                  from xc_mem_paging_setup.c:26:
> xenctrl.h:1307:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> cc1: all warnings being treated as errors

I think this is because two of the variables in the code were changed
from unsigned long to uint64_t.

Does the tree on top of which you tried to compile the patch for 32
bit include David Vrabel's GenID series fixes for the prototypes of
xc_[gs]et_hvm_param()?
I made this change at Andrew's suggestion, and I think it will not
cause any problem once the tree includes the GenID series fixes.
If this is indeed the case then I think we should delay applying this
patch until those changes get merged.

Thanks and Regards,
Dushyant Behl


* Re: [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-28  3:32         ` Dushyant Behl
@ 2014-06-30 10:37           ` Ian Campbell
  0 siblings, 0 replies; 24+ messages in thread
From: Ian Campbell @ 2014-06-30 10:37 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	xen-devel, Andres Lagar Cavilla

On Sat, 2014-06-28 at 09:02 +0530, Dushyant Behl wrote:
> Ian,
> 
> > In the meantime I noticed that it doesn't compile for 32-bit either:
> >
> > xc_mem_paging_setup.c: In function 'xc_mem_paging_ring_setup':
> > xc_mem_paging_setup.c:50:5: error: passing argument 4 of 'xc_get_hvm_param' from incompatible pointer type [-Werror]
> > In file included from xc_private.h:32:0,
> >                  from xc_mem_paging_setup.c:26:
> > xenctrl.h:1867:5: note: expected 'long unsigned int *' but argument is of type 'uint64_t *'
> > xc_mem_paging_setup.c:53:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
> > In file included from xc_private.h:32:0,
> >                  from xc_mem_paging_setup.c:26:
> > xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> > xc_mem_paging_setup.c:58:47: error: passing argument 6 of 'xc_domain_populate_physmap_exact' from incompatible pointer type [-Werror]
> > In file included from xc_private.h:32:0,
> >                  from xc_mem_paging_setup.c:26:
> > xenctrl.h:1326:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> > xc_mem_paging_setup.c:67:41: error: passing argument 4 of 'xc_map_foreign_batch' from incompatible pointer type [-Werror]
> > In file included from xc_private.h:32:0,
> >                  from xc_mem_paging_setup.c:26:
> > xenctrl.h:1431:7: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> > xc_mem_paging_setup.c:114:5: error: passing argument 5 of 'xc_domain_decrease_reservation_exact' from incompatible pointer type [-Werror]
> > In file included from xc_private.h:32:0,
> >                  from xc_mem_paging_setup.c:26:
> > xenctrl.h:1307:5: note: expected 'xen_pfn_t *' but argument is of type 'uint64_t *'
> > cc1: all warnings being treated as errors
> 
> I think this is because of the reason that two of the variables in the
> code have been changed from unsigned long to uint64_t.
> 
> Does the code, on top of which you tried to compile the patch for 32
> bit include David Vrabel's GenID series fixes for the prototypes of
> xc_[gs]et_hvm_param()?
> I made this change because of a suggestion from Andrew and I think
> these will not create any problem once the code include the GenID
> series fixes.
> If this is indeed the case then I think we should delay applying this
> patch until those changes get merged.

David's patches were committed on Friday and are in the staging branch
now. (Push to master is blocked by an unrelated change right now).

Ian.


* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-06-27 10:33   ` Ian Campbell
@ 2014-07-20 21:49     ` Dushyant Behl
  2014-07-21  7:31       ` Andres Lagar Cavilla
  0 siblings, 1 reply; 24+ messages in thread
From: Dushyant Behl @ 2014-07-20 21:49 UTC (permalink / raw)
  To: Ian Campbell
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Xen Devel, Andres Lagar Cavilla

Ian,

Sorry for this extremely late response.

On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>> This patch is part of the work done under the gsoc project -
>> Lazy Restore Using Memory Paging.
>>
>> This patch is meant to fix a known race condition bug in mempaging
>> ring setup routines. The race conditon was between initializing
>
> "condition"
>
>> mem paging and initializing shared ring, earlier the code initialized
>> mem paging before removing the ring page from guest's physical map
>> which could enable the guest to interfere with the ring initialisation.
>> Now the code removes the page from the guest's physical map before
>> enabling mempaging so that the guest cannot clobber the ring after
>> we initialise it.
>>
>> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>>  1 file changed, 20 insertions(+), 4 deletions(-)
>>
>> diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
>> index 679c606..a4c4798 100644
>> --- a/tools/libxc/xc_mem_paging_setup.c
>> +++ b/tools/libxc/xc_mem_paging_setup.c
>> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>>          }
>>      }
>>
>> +    /*
>> +     * Remove the page from guest's physical map so that the guest will not
>> +     * be able to break security by pretending to be toolstack for a while.
>> +     */
>> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, ring_page) )
>
> ring_page here is a void * mapping of the page, isn't it? Not a
> xen_pfn_t[] array of pfns, so surely this isn't right.

Yes, you are correct; the invocation should use ring_pfn. But if I
change this call to match the one below -
if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )

then the code fails to enable mem paging.
Can anyone explain why this happens?

Thanks and Regards,
Dushyant Behl


* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-20 21:49     ` Dushyant Behl
@ 2014-07-21  7:31       ` Andres Lagar Cavilla
  2014-07-21  9:01         ` Andrew Cooper
  0 siblings, 1 reply; 24+ messages in thread
From: Andres Lagar Cavilla @ 2014-07-21  7:31 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Xen Devel, Ian Campbell


[-- Attachment #1.1: Type: text/plain, Size: 2748 bytes --]

On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl <myselfdushyantbehl@gmail.com> wrote:

> Ian,
>
> Sorry for this extremely late response.
>
> On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
> > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
> >> This patch is part of the work done under the gsoc project -
> >> Lazy Restore Using Memory Paging.
> >>
> >> This patch is meant to fix a known race condition bug in mempaging
> >> ring setup routines. The race conditon was between initializing
> >
> > "condition"
> >
> >> mem paging and initializing shared ring, earlier the code initialized
> >> mem paging before removing the ring page from guest's physical map
> >> which could enable the guest to interfere with the ring initialisation.
> >> Now the code removes the page from the guest's physical map before
> >> enabling mempaging so that the guest cannot clobber the ring after
> >> we initialise it.
> >>
> >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
> >>  1 file changed, 20 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/tools/libxc/xc_mem_paging_setup.c
> b/tools/libxc/xc_mem_paging_setup.c
> >> index 679c606..a4c4798 100644
> >> --- a/tools/libxc/xc_mem_paging_setup.c
> >> +++ b/tools/libxc/xc_mem_paging_setup.c
> >> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
> >>          }
> >>      }
> >>
> >> +    /*
> >> +     * Remove the page from guest's physical map so that the guest
> will not
> >> +     * be able to break security by pretending to be toolstack for a
> while.
> >> +     */
> >> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
> ring_page) )
> >
> > ring_page here is a void * mapping of the page, isn't it? Not a
> > xen_pfn_t[] array of pfns, so surely this isn't right.
>
> Yes, you are correct and the invocation should be with ring_pfn. But
> if I change this with
> the invocation done below as -
> if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn)
> )
>

> then the code fails to enable mem paging.
> Can anyone help as to why this would happen?
>
The call that enables paging will use the hvm param in the hypervisor to
look up the page by its pfn within the guest physical address space. If you
call decrease reservation, then the page is removed from the guest phys
address space, and the hvm param will point to a hole. And the call will
fail of course. So you need to call decrease reservation strictly after
enable paging.
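The ordering described above can be sketched as follows (a sketch only, using the libxc calls named in this thread; it builds only inside the Xen tools tree, and error handling plus the domain-pause question raised elsewhere in the thread are elided):

```c
/* 1. Read the ring pfn and map the ring while it is still in the physmap. */
xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
ring_page = xc_map_foreign_bulk(xch, domain_id, PROT_READ | PROT_WRITE,
                                &ring_pfn, &err, 1);

/* 2. Enable paging first: the hypervisor looks the ring up by pfn, so the
 *    page must still be present in the guest physmap at this point. */
xc_mem_paging_enable(xch, domain_id, port);

/* 3. Initialise the ring through our own mapping. */
SHARED_RING_INIT((mem_event_sring_t *)ring_page);
BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);

/* 4. Only now remove the pfn from the guest physmap, so the guest cannot
 *    clobber the ring after it has been initialised. */
xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn);
```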

Andres
Thanks and Regards,
Dushyant Behl

[-- Attachment #1.2: Type: text/html, Size: 3915 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21  7:31       ` Andres Lagar Cavilla
@ 2014-07-21  9:01         ` Andrew Cooper
  2014-07-21 14:11           ` Andres Lagar Cavilla
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Cooper @ 2014-07-21  9:01 UTC (permalink / raw)
  To: Andres Lagar Cavilla, Dushyant Behl
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Ian Campbell, Xen Devel


[-- Attachment #1.1: Type: text/plain, Size: 3444 bytes --]

On 21/07/14 08:31, Andres Lagar Cavilla wrote:
> On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl
> <myselfdushyantbehl@gmail.com <mailto:myselfdushyantbehl@gmail.com>>
> wrote:
>
>     Ian,
>
>     Sorry for this extremely late response.
>
>     On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell
>     <Ian.Campbell@citrix.com <mailto:Ian.Campbell@citrix.com>> wrote:
>     > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>     >> This patch is part of the work done under the gsoc project -
>     >> Lazy Restore Using Memory Paging.
>     >>
>     >> This patch is meant to fix a known race condition bug in mempaging
>     >> ring setup routines. The race conditon was between initializing
>     >
>     > "condition"
>     >
>     >> mem paging and initializing shared ring, earlier the code
>     initialized
>     >> mem paging before removing the ring page from guest's physical map
>     >> which could enable the guest to interfere with the ring
>     initialisation.
>     >> Now the code removes the page from the guest's physical map before
>     >> enabling mempaging so that the guest cannot clobber the ring after
>     >> we initialise it.
>     >>
>     >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com
>     <mailto:myselfdushyantbehl@gmail.com>>
>     >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org
>     <mailto:andres@lagarcavilla.org>>
>     >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com
>     <mailto:andrew.cooper3@citrix.com>>
>     >> ---
>     >>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>     >>  1 file changed, 20 insertions(+), 4 deletions(-)
>     >>
>     >> diff --git a/tools/libxc/xc_mem_paging_setup.c
>     b/tools/libxc/xc_mem_paging_setup.c
>     >> index 679c606..a4c4798 100644
>     >> --- a/tools/libxc/xc_mem_paging_setup.c
>     >> +++ b/tools/libxc/xc_mem_paging_setup.c
>     >> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>     >>          }
>     >>      }
>     >>
>     >> +    /*
>     >> +     * Remove the page from guest's physical map so that the
>     guest will not
>     >> +     * be able to break security by pretending to be toolstack
>     for a while.
>     >> +     */
>     >> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id,
>     1, 0, ring_page) )
>     >
>     > ring_page here is a void * mapping of the page, isn't it? Not a
>     > xen_pfn_t[] array of pfns, so surely this isn't right.
>
>     Yes, you are correct and the invocation should be with ring_pfn. But
>     if I change this with
>     the invocation done below as -
>     if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>     &ring_pfn) ) 
>
>
>     then the code fails to enable mem paging.
>     Can anyone help as to why this would happen?
>
> The call that enables paging will use the hvm param in the hypervisor
> to look up the page by its pfn within the guest physical address
> space. If you call decrease reservation, then the page is removed from
> the guest phys address space, and the hvm param will point to a hole.
> And the call will fail of course. So you need to call decrease
> reservation strictly after enable paging.
>

You cannot have a period of time where the paging/access/events are
enabled and the guest can interfere with the ring page, as the Xen side
of ring pages is not robust to untrusted input.

How does this series interact with 6ae2df93 ?

~Andrew

[-- Attachment #1.2: Type: text/html, Size: 6599 bytes --]



* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21  9:01         ` Andrew Cooper
@ 2014-07-21 14:11           ` Andres Lagar Cavilla
  2014-07-21 14:40             ` Andrew Cooper
  0 siblings, 1 reply; 24+ messages in thread
From: Andres Lagar Cavilla @ 2014-07-21 14:11 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Dushyant Behl, Ian Campbell


[-- Attachment #1.1: Type: text/plain, Size: 3850 bytes --]

On Mon, Jul 21, 2014 at 5:01 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

>  On 21/07/14 08:31, Andres Lagar Cavilla wrote:
>
>  On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl <
> myselfdushyantbehl@gmail.com> wrote:
>
>> Ian,
>>
>> Sorry for this extremely late response.
>>
>> On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com>
>> wrote:
>> > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>> >> This patch is part of the work done under the gsoc project -
>> >> Lazy Restore Using Memory Paging.
>> >>
>> >> This patch is meant to fix a known race condition bug in mempaging
>> >> ring setup routines. The race conditon was between initializing
>> >
>> > "condition"
>> >
>> >> mem paging and initializing shared ring, earlier the code initialized
>> >> mem paging before removing the ring page from guest's physical map
>> >> which could enable the guest to interfere with the ring initialisation.
>> >> Now the code removes the page from the guest's physical map before
>> >> enabling mempaging so that the guest cannot clobber the ring after
>> >> we initialise it.
>> >>
>> >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>> >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> >> ---
>> >>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>> >>  1 file changed, 20 insertions(+), 4 deletions(-)
>> >>
>> >> diff --git a/tools/libxc/xc_mem_paging_setup.c
>> b/tools/libxc/xc_mem_paging_setup.c
>> >> index 679c606..a4c4798 100644
>> >> --- a/tools/libxc/xc_mem_paging_setup.c
>> >> +++ b/tools/libxc/xc_mem_paging_setup.c
>> >> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>> >>          }
>> >>      }
>> >>
>> >> +    /*
>> >> +     * Remove the page from guest's physical map so that the guest
>> will not
>> >> +     * be able to break security by pretending to be toolstack for a
>> while.
>> >> +     */
>> >> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>> ring_page) )
>> >
>> > ring_page here is a void * mapping of the page, isn't it? Not a
>> > xen_pfn_t[] array of pfns, so surely this isn't right.
>>
>>  Yes, you are correct and the invocation should be with ring_pfn. But
>> if I change this with
>> the invocation done below as -
>> if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>> &ring_pfn) )
>>
>
>>  then the code fails to enable mem paging.
>> Can anyone help as to why this would happen?
>>
> The call that enables paging will use the hvm param in the hypervisor to
> look up the page by its pfn within the guest physical address space. If you
> call decrease reservation, then the page is removed from the guest phys
> address space, and the hvm param will point to a hole. And the call will
> fail of course. So you need to call decrease reservation strictly after
> enable paging.
>
>
> You cannot have a period of time where the paging/access/events are
> enabled and the guest can interfere with the ring page, as the Xen side of
> ring pages are not robust to untrusted input.
>
It's deja vu all over again!

Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
paging/access/sharing, and only after that decrease reservation (and after
that unpause). So the window is open ... methinks?

At some point the assertion "xen side ... not robust" has to be declared
false, because it's either FUD, or untenable to ship a hypervisor that can
be crashed by guest input. You fixed an important bug with the vcpu run
off, so I'll refrain from saying "I don't see any problems there". But I
think I don't see any problems there ....



> How does this series interact with 6ae2df93 ?
>
Likely not a problem; are you referring to the pause count? They look
like different things.

Thanks
Andres

>
>
> ~Andrew
>

[-- Attachment #1.2: Type: text/html, Size: 8014 bytes --]



* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21 14:11           ` Andres Lagar Cavilla
@ 2014-07-21 14:40             ` Andrew Cooper
  2014-07-21 15:59               ` Andres Lagar Cavilla
  2014-07-23 10:14               ` Ian Campbell
  0 siblings, 2 replies; 24+ messages in thread
From: Andrew Cooper @ 2014-07-21 14:40 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Dushyant Behl, Ian Campbell


[-- Attachment #1.1: Type: text/plain, Size: 6044 bytes --]

On 21/07/14 15:11, Andres Lagar Cavilla wrote:
> On Mon, Jul 21, 2014 at 5:01 AM, Andrew Cooper
> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>
>     On 21/07/14 08:31, Andres Lagar Cavilla wrote:
>>     On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl
>>     <myselfdushyantbehl@gmail.com
>>     <mailto:myselfdushyantbehl@gmail.com>> wrote:
>>
>>         Ian,
>>
>>         Sorry for this extremely late response.
>>
>>         On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell
>>         <Ian.Campbell@citrix.com <mailto:Ian.Campbell@citrix.com>> wrote:
>>         > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>>         >> This patch is part of the work done under the gsoc project -
>>         >> Lazy Restore Using Memory Paging.
>>         >>
>>         >> This patch is meant to fix a known race condition bug in
>>         mempaging
>>         >> ring setup routines. The race conditon was between
>>         initializing
>>         >
>>         > "condition"
>>         >
>>         >> mem paging and initializing shared ring, earlier the code
>>         initialized
>>         >> mem paging before removing the ring page from guest's
>>         physical map
>>         >> which could enable the guest to interfere with the ring
>>         initialisation.
>>         >> Now the code removes the page from the guest's physical
>>         map before
>>         >> enabling mempaging so that the guest cannot clobber the
>>         ring after
>>         >> we initialise it.
>>         >>
>>         >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com
>>         <mailto:myselfdushyantbehl@gmail.com>>
>>         >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org
>>         <mailto:andres@lagarcavilla.org>>
>>         >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com
>>         <mailto:andrew.cooper3@citrix.com>>
>>         >> ---
>>         >>  tools/libxc/xc_mem_paging_setup.c | 24
>>         ++++++++++++++++++++----
>>         >>  1 file changed, 20 insertions(+), 4 deletions(-)
>>         >>
>>         >> diff --git a/tools/libxc/xc_mem_paging_setup.c
>>         b/tools/libxc/xc_mem_paging_setup.c
>>         >> index 679c606..a4c4798 100644
>>         >> --- a/tools/libxc/xc_mem_paging_setup.c
>>         >> +++ b/tools/libxc/xc_mem_paging_setup.c
>>         >> @@ -73,6 +73,24 @@ int
>>         xc_mem_paging_ring_setup(xc_interface *xch,
>>         >>          }
>>         >>      }
>>         >>
>>         >> +    /*
>>         >> +     * Remove the page from guest's physical map so that
>>         the guest will not
>>         >> +     * be able to break security by pretending to be
>>         toolstack for a while.
>>         >> +     */
>>         >> +    if ( xc_domain_decrease_reservation_exact(xch,
>>         domain_id, 1, 0, ring_page) )
>>         >
>>         > ring_page here is a void * mapping of the page, isn't it? Not a
>>         > xen_pfn_t[] array of pfns, so surely this isn't right.
>>
>>         Yes, you are correct and the invocation should be with
>>         ring_pfn. But
>>         if I change this with
>>         the invocation done below as -
>>         if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1,
>>         0, &ring_pfn) ) 
>>
>>
>>         then the code fails to enable mem paging.
>>         Can anyone help as to why this would happen?
>>
>>     The call that enables paging will use the hvm param in the
>>     hypervisor to look up the page by its pfn within the guest
>>     physical address space. If you call decrease reservation, then
>>     the page is removed from the guest phys address space, and the
>>     hvm param will point to a hole. And the call will fail of course.
>>     So you need to call decrease reservation strictly after enable
>>     paging.
>>
>
>     You cannot have a period of time where the paging/access/events
>     are enabled and the guest can interfere with the ring page, as the
>     Xen side of ring pages are not robust to untrusted input.
>
> It's deja vu all over again!
>
> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
> paging/access/sharing, and only after that decrease reservation (and
> after that unpause). So the window is open ... methinks?

No - it does a domain pause around this set of critical operations, so
the guest is guaranteed not to be running, and therefore cannot interfere.

>
> At some point the assertion "xen side ... not robust" has to be
> declared false, because it's either FUD, or untenable to ship a
> hypervisor that can be crashed by guest input. You fixed an important
> bug with the vcpu run off, so I'll refrain from saying "I don't see
> any problems there". But I think I don't see any problems there ....

My patches have not been committed, so there are known (and well-documented)
holes in the Xen side of the interface.  Therefore, the assertion is
completely correct at the moment.  The situation will certainly change
(for the better) once my patches are taken.

I would like to hope that I got all the issues, but I did not perform
an extensive analysis of the interface.  I discovered the issues when
reviewing the bitdefender code which copied the vcpu overrun bug, and
then discovered the pause_count issue when fixing the vcpu overrun.

I got all the issues I could spot, given no specific knowledge of the
mem_event stuff.  However, I feel it would be naive to assume that the
rest of the interface is secure.  (This might well be my
particularly pessimistic attitude to security, but it does tend to be
more reliable than the optimistic attitude.)  A proper audit of the
interface is required as part of resolving XSA-77.

>
>
>
>     How does this series interact with 6ae2df93 ?
>
> Likely not a problem, you are referring to pause count? Looks like
> different things.

Let me rephrase.  Does this series do anything which 6ae2df93 (which is
now committed) didn't do?  It would certainly appear that 6ae2df93 does
the vast majority of what this series is attempting to do.

~Andrew

[-- Attachment #1.2: Type: text/html, Size: 13439 bytes --]

[-- Attachment #2: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21 14:40             ` Andrew Cooper
@ 2014-07-21 15:59               ` Andres Lagar Cavilla
  2014-07-22 18:21                 ` Dushyant Behl
  2014-07-23 10:14               ` Ian Campbell
  1 sibling, 1 reply; 24+ messages in thread
From: Andres Lagar Cavilla @ 2014-07-21 15:59 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen-devel,
	Dushyant Behl, Ian Campbell


[-- Attachment #1.1: Type: text/plain, Size: 5548 bytes --]

On Jul 21, 2014 7:40 AM, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:
>
> On 21/07/14 15:11, Andres Lagar Cavilla wrote:
>>
>> On Mon, Jul 21, 2014 at 5:01 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:
>>>
>>> On 21/07/14 08:31, Andres Lagar Cavilla wrote:
>>>>
>>>> On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl <
myselfdushyantbehl@gmail.com> wrote:
>>>>>
>>>>> Ian,
>>>>>
>>>>> Sorry for this extremely late response.
>>>>>
>>>>> On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:
>>>>> > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>>>>> >> This patch is part of the work done under the gsoc project -
>>>>> >> Lazy Restore Using Memory Paging.
>>>>> >>
>>>>> >> This patch is meant to fix a known race condition bug in mempaging
>>>>> >> ring setup routines. The race conditon was between initializing
>>>>> >
>>>>> > "condition"
>>>>> >
>>>>> >> mem paging and initializing shared ring, earlier the code
initialized
>>>>> >> mem paging before removing the ring page from guest's physical map
>>>>> >> which could enable the guest to interfere with the ring
initialisation.
>>>>> >> Now the code removes the page from the guest's physical map before
>>>>> >> enabling mempaging so that the guest cannot clobber the ring after
>>>>> >> we initialise it.
>>>>> >>
>>>>> >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>>>>> >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>>>> >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> >> ---
>>>>> >>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>>>>> >>  1 file changed, 20 insertions(+), 4 deletions(-)
>>>>> >>
>>>>> >> diff --git a/tools/libxc/xc_mem_paging_setup.c
b/tools/libxc/xc_mem_paging_setup.c
>>>>> >> index 679c606..a4c4798 100644
>>>>> >> --- a/tools/libxc/xc_mem_paging_setup.c
>>>>> >> +++ b/tools/libxc/xc_mem_paging_setup.c
>>>>> >> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>>>>> >>          }
>>>>> >>      }
>>>>> >>
>>>>> >> +    /*
>>>>> >> +     * Remove the page from guest's physical map so that the
guest will not
>>>>> >> +     * be able to break security by pretending to be toolstack
for a while.
>>>>> >> +     */
>>>>> >> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1,
0, ring_page) )
>>>>> >
>>>>> > ring_page here is a void * mapping of the page, isn't it? Not a
>>>>> > xen_pfn_t[] array of pfns, so surely this isn't right.
>>>>>
>>>>> Yes, you are correct and the invocation should be with ring_pfn. But
>>>>> if I change this with
>>>>> the invocation done below as -
>>>>> if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
&ring_pfn) )
>>>>>
>>>>>
>>>>> then the code fails to enable mem paging.
>>>>> Can anyone help as to why this would happen?
>>>>
>>>> The call that enables paging will use the hvm param in the hypervisor
to look up the page by its pfn within the guest physical address space. If
you call decrease reservation, then the page is removed from the guest phys
address space, and the hvm param will point to a hole. And the call will
fail of course. So you need to call decrease reservation strictly after
enable paging.
>>>>
>>>
>>> You cannot have a period of time where the paging/access/events are
enabled and the guest can interfere with the ring page, as the Xen side of
ring pages are not robust to untrusted input.
>>
>> It's deja vu all over again!
>>
>> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
paging/access/sharing, and only after that decrease reservation (and after
that unpause). So the window is open ... methinks?
>
>
> No - it does a domain pause around this set of critical operations, so
the guest is guaranteed not to be running, and therefore cannot interfere.

Right. Just as the patches in this thread do, since introduced. So no issue
per se.

I think

Andres
>
>
>>
>> At some point the assertion "xen side ... not robust" has to be declared
false, because it's either FUD, or untenable to ship a hypervisor that can
be crashed by guest input. You fixed an important bug with the vcpu run
off, so I'll refrain from saying "I don't see any problems there". But I
think I don't see any problems there ....
>
>
> My patches have not been committed, so there are known (and well-documented)
holes in the Xen side of the interface.  Therefore, the assertion is
completely correct at the moment.  The situation will certainly change (for
the better) once my patches are taken.
>
> I would like to hope that I got all the issues, but I did not perform
an extensive analysis of the interface.  I discovered the issues when
reviewing the bitdefender code which copied the vcpu overrun bug, and then
discovered the pause_count issue when fixing the vcpu overrun.
>
> I got all the issues I could spot, given no specific knowledge of the
mem_event stuff.  However, I feel it would be naive to assume that the rest
of the interface is secure.  (This might well be my
particularly pessimistic attitude to security, but it does tend to be more
reliable than the optimistic attitude.)  A proper audit of the interface is
required as part of resolving XSA-77.
>
>
>>
>>
>>>
>>> How does this series interact with 6ae2df93 ?
>>
>> Likely not a problem, you are referring to pause count? Looks like
different things.
>
>
> Let me rephrase.  Does this series do anything which 6ae2df93 (which is
now committed) didn't do?  It would certainly appear that 6ae2df93 does the
vast majority of what this series is attempting to do.
>
> ~Andrew

[-- Attachment #1.2: Type: text/html, Size: 7891 bytes --]


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21 15:59               ` Andres Lagar Cavilla
@ 2014-07-22 18:21                 ` Dushyant Behl
  0 siblings, 0 replies; 24+ messages in thread
From: Dushyant Behl @ 2014-07-22 18:21 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Xen-devel, Ian Campbell

On Mon, Jul 21, 2014 at 9:29 PM, Andres Lagar Cavilla
<andres@lagarcavilla.org> wrote:
>
> On Jul 21, 2014 7:40 AM, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:
>>
>> On 21/07/14 15:11, Andres Lagar Cavilla wrote:
>>>
>>> On Mon, Jul 21, 2014 at 5:01 AM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>>
>>>> On 21/07/14 08:31, Andres Lagar Cavilla wrote:
>>>>>
>>>>> On Sun, Jul 20, 2014 at 5:49 PM, Dushyant Behl
>>>>> <myselfdushyantbehl@gmail.com> wrote:
>>>>>>
>>>>>> Ian,
>>>>>>
>>>>>> Sorry for this extremely late response.
>>>>>>
>>>>>> On Fri, Jun 27, 2014 at 4:03 PM, Ian Campbell
>>>>>> <Ian.Campbell@citrix.com> wrote:
>>>>>> > On Mon, 2014-06-16 at 23:50 +0530, Dushyant Behl wrote:
>>>>>> >> This patch is part of the work done under the gsoc project -
>>>>>> >> Lazy Restore Using Memory Paging.
>>>>>> >>
>>>>>> >> This patch is meant to fix a known race condition bug in mempaging
>>>>>> >> ring setup routines. The race conditon was between initializing
>>>>>> >
>>>>>> > "condition"
>>>>>> >
>>>>>> >> mem paging and initializing shared ring, earlier the code
>>>>>> >> initialized
>>>>>> >> mem paging before removing the ring page from guest's physical map
>>>>>> >> which could enable the guest to interfere with the ring
>>>>>> >> initialisation.
>>>>>> >> Now the code removes the page from the guest's physical map before
>>>>>> >> enabling mempaging so that the guest cannot clobber the ring after
>>>>>> >> we initialise it.
>>>>>> >>
>>>>>> >> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>>>>>> >> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>>>>> >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> >> ---
>>>>>> >>  tools/libxc/xc_mem_paging_setup.c | 24 ++++++++++++++++++++----
>>>>>> >>  1 file changed, 20 insertions(+), 4 deletions(-)
>>>>>> >>
>>>>>> >> diff --git a/tools/libxc/xc_mem_paging_setup.c
>>>>>> >> b/tools/libxc/xc_mem_paging_setup.c
>>>>>> >> index 679c606..a4c4798 100644
>>>>>> >> --- a/tools/libxc/xc_mem_paging_setup.c
>>>>>> >> +++ b/tools/libxc/xc_mem_paging_setup.c
>>>>>> >> @@ -73,6 +73,24 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>>>>>> >>          }
>>>>>> >>      }
>>>>>> >>
>>>>>> >> +    /*
>>>>>> >> +     * Remove the page from guest's physical map so that the guest
>>>>>> >> will not
>>>>>> >> +     * be able to break security by pretending to be toolstack for
>>>>>> >> a while.
>>>>>> >> +     */
>>>>>> >> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1,
>>>>>> >> 0, ring_page) )
>>>>>> >
>>>>>> > ring_page here is a void * mapping of the page, isn't it? Not a
>>>>>> > xen_pfn_t[] array of pfns, so surely this isn't right.
>>>>>>
>>>>>> Yes, you are correct and the invocation should be with ring_pfn. But
>>>>>> if I change this with
>>>>>> the invocation done below as -
>>>>>> if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>>>>>> &ring_pfn) )
>>>>>>
>>>>>>
>>>>>> then the code fails to enable mem paging.
>>>>>> Can anyone help as to why this would happen?
>>>>>
>>>>> The call that enables paging will use the hvm param in the hypervisor
>>>>> to look up the page by its pfn within the guest physical address space. If
>>>>> you call decrease reservation, then the page is removed from the guest phys
>>>>> address space, and the hvm param will point to a hole. And the call will
>>>>> fail of course. So you need to call decrease reservation strictly after
>>>>> enable paging.
>>>>>
>>>>
>>>> You cannot have a period of time where the paging/access/events are
>>>> enabled and the guest can interfere with the ring page, as the Xen side of
>>>> ring pages are not robust to untrusted input.
>>>
>>> It's deja vu all over again!
>>>
>>> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
>>> paging/access/sharing, and only after that decrease reservation (and after
>>> that unpause). So the window is open ... methinks?
>>
>>
>> No - it does a domain pause around this set of critical operations, so the
>> guest is guaranteed not to be running, and therefore cannot interfere.
>
> Right. Just as the patches in this thread do, since introduced. So no issue
> per se.

Sorry but actually we don't have xc_domain_pause and unpause calls
around ring setup in xenpaging code.


>
> I think
>
> Andres

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-21 14:40             ` Andrew Cooper
  2014-07-21 15:59               ` Andres Lagar Cavilla
@ 2014-07-23 10:14               ` Ian Campbell
  2014-07-23 10:32                 ` Andrew Cooper
  1 sibling, 1 reply; 24+ messages in thread
From: Ian Campbell @ 2014-07-23 10:14 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Andres Lagar Cavilla, Dushyant Behl

On Mon, 2014-07-21 at 15:40 +0100, Andrew Cooper wrote:

> > 
> > Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
> > paging/access/sharing, and only after that decrease reservation (and
> > after that unpause). So the window is open ... methinks?
> 
> No - it does a domain pause around this set of critical operations, so
> the guest is guaranteed not to be running, and therefore cannot
> interfere.

Shouldn't there be a memset in here somewhere? To clear out any bogus
material in the ring? (maybe the caller of this code always clears the
ring itself, I didn't check that)

Ian.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-23 10:14               ` Ian Campbell
@ 2014-07-23 10:32                 ` Andrew Cooper
  2014-07-23 10:35                   ` Ian Campbell
  0 siblings, 1 reply; 24+ messages in thread
From: Andrew Cooper @ 2014-07-23 10:32 UTC (permalink / raw)
  To: Ian Campbell
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Andres Lagar Cavilla, Dushyant Behl

On 23/07/14 11:14, Ian Campbell wrote:
> On Mon, 2014-07-21 at 15:40 +0100, Andrew Cooper wrote:
>
>>> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
>>> paging/access/sharing, and only after that decrease reservation (and
>>> after that unpause). So the window is open ... methinks?
>> No - it does a domain pause around this set of critical operations, so
>> the guest is guaranteed not to be running, and therefore cannot
>> interfere.
> Shouldn't there be a memset in here somewhere? To clear out any bogus
> material in the ring? (maybe the caller of this code always clears the
> ring itself, I didn't check that)
>
> Ian.
>

Erm.  There probably should be.  Xen sets up the ring frontend
correctly, but nothing I can spot cleans up any stale backend state
which was preexisting in the page.

It seems possible for a guest to map the affected pages and leave some
crafted backend replies which will be picked up as soon as mem_events
are enabled.  Combined with the lack of validation of responses in Xen,
this is quite bad.

~Andrew

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-23 10:32                 ` Andrew Cooper
@ 2014-07-23 10:35                   ` Ian Campbell
  2014-07-23 10:46                     ` Andrew Cooper
  0 siblings, 1 reply; 24+ messages in thread
From: Ian Campbell @ 2014-07-23 10:35 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Andres Lagar Cavilla, Dushyant Behl

On Wed, 2014-07-23 at 11:32 +0100, Andrew Cooper wrote:
> On 23/07/14 11:14, Ian Campbell wrote:
> > On Mon, 2014-07-21 at 15:40 +0100, Andrew Cooper wrote:
> >
> >>> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
> >>> paging/access/sharing, and only after that decrease reservation (and
> >>> after that unpause). So the window is open ... methinks?
> >> No - it does a domain pause around this set of critical operations, so
> >> the guest is guaranteed not to be running, and therefore cannot
> >> interfere.
> > Shouldn't there be a memset in here somewhere? To clear out any bogus
> > material in the ring? (maybe the caller of this code always clears the
> > ring itself, I didn't check that)
> >
> > Ian.
> >
> 
> Erm.  There probably should be.  Xen sets up the ring frontend
> correctly, but nothing I can spot cleans up any stale backend state
> which was preexisting in the page.
> 
> It seems possible for a guest to map the affected pages and leave some
> crafted backend replies which will be picked up as soon as mem_events
> are enabled.  Combined with the lack of validation of responses in Xen,
> this is quite bad.

Indeed!

BTW, I wonder why we don't make the initial P2M mapping R/O. That would
still allow the tools to use the gpfn as a handle to pickup the page
later but would stop the guest messing with it in the meantime.

Ian.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging.
  2014-07-23 10:35                   ` Ian Campbell
@ 2014-07-23 10:46                     ` Andrew Cooper
  0 siblings, 0 replies; 24+ messages in thread
From: Andrew Cooper @ 2014-07-23 10:46 UTC (permalink / raw)
  To: Ian Campbell
  Cc: David Scott, Stefano Stabellini, Ian Jackson, Xen Devel,
	Andres Lagar Cavilla, Dushyant Behl

On 23/07/14 11:35, Ian Campbell wrote:
> On Wed, 2014-07-23 at 11:32 +0100, Andrew Cooper wrote:
>> On 23/07/14 11:14, Ian Campbell wrote:
>>> On Mon, 2014-07-21 at 15:40 +0100, Andrew Cooper wrote:
>>>
>>>>> Note that the XSA commit, 6ae2df93c27, does exactly that. Enable
>>>>> paging/access/sharing, and only after that decrease reservation (and
>>>>> after that unpause). So the window is open ... methinks?
>>>> No - it does a domain pause around this set of critical operations, so
>>>> the guest is guaranteed not to be running, and therefore cannot
>>>> interfere.
>>> Shouldn't there be a memset in here somewhere? To clear out any bogus
>>> material in the ring? (maybe the caller of this code always clears the
>>> ring itself, I didn't check that)
>>>
>>> Ian.
>>>
>> Erm.  There probably should be.  Xen sets up the ring frontend
>> correctly, but nothing I can spot cleans up any stale backend state
>> which was preexisting in the page.
>>
>> It seems possible for a guest to map the affected pages and leave some
>> crafted backend replies which will be picked up as soon as mem_events
>> are enabled.  Combined with the lack of validation of responses in Xen,
>> this is quite bad.
> Indeed!
>
> BTW, I wonder why we don't make the initial P2M mapping R/O. That would
> still allow the tools to use the gpfn as a handle to pickup the page
> later but would stop the guest messing with it in the meantime.
>
> Ian.
>

Because the nominated pfn is in a RW hvmparam, so it is free to be altered by
the guest before mem_events are set up.

The correct solution to a whole slew of problems along these lines is
for an explicit notion of emulator pages for a domain, accounted to that
domain, but have never been part of the guest p2m.  Also, far more
restrictions on hvmparams. 

~Andrew

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2014-07-23 10:46 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-16 18:20 [PATCH v2 0/3] Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
2014-06-16 18:20 ` [PATCH v2 1/3] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
2014-06-27 10:27   ` Ian Campbell
2014-06-27 10:37     ` Ian Campbell
2014-06-27 10:41       ` Andrew Cooper
2014-06-27 10:39     ` Dushyant Behl
2014-06-27 12:39       ` Ian Campbell
2014-06-28  3:32         ` Dushyant Behl
2014-06-30 10:37           ` Ian Campbell
2014-06-27 10:39   ` Ian Campbell
2014-06-16 18:20 ` [PATCH v2 2/3] [GSOC14] Replacing Deprecated Function Calls Dushyant Behl
2014-06-16 18:20 ` [PATCH v2 3/3] [GSOC14] FIX:- Race condition between initializing shared ring and mempaging Dushyant Behl
2014-06-27 10:33   ` Ian Campbell
2014-07-20 21:49     ` Dushyant Behl
2014-07-21  7:31       ` Andres Lagar Cavilla
2014-07-21  9:01         ` Andrew Cooper
2014-07-21 14:11           ` Andres Lagar Cavilla
2014-07-21 14:40             ` Andrew Cooper
2014-07-21 15:59               ` Andres Lagar Cavilla
2014-07-22 18:21                 ` Dushyant Behl
2014-07-23 10:14               ` Ian Campbell
2014-07-23 10:32                 ` Andrew Cooper
2014-07-23 10:35                   ` Ian Campbell
2014-07-23 10:46                     ` Andrew Cooper
