* Refactoring mempaging code from xenpaging to libxc and few updates
@ 2014-08-07 20:40 Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging to libxc Dushyant Behl
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Dushyant Behl @ 2014-08-07 20:40 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl

The following patches refactor mempaging code from xenpaging to libxc -

[PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging
This patch adds two routines xc_mem_paging_ring_setup and
xc_mem_paging_ring_teardown to libxc at the path
tools/libxc/xc_mem_paging_setup.c, which is a new file.

[PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated functions
Replaces calls to deprecated function xc_map_foreign_batch with calls to
xc_map_foreign_bulk.

[PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition
Fixes a known race condition bug in the mempaging ring setup code. This race
condition is similar to the one addressed by XSA-99. The patch takes care of
all the points mentioned in XSA-99, and also clears the ring page to remove
any bogus input.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>


* [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging to libxc.
  2014-08-07 20:40 Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
@ 2014-08-07 20:40 ` Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated function calls Dushyant Behl
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Dushyant Behl @ 2014-08-07 20:40 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl

This patch moves the code to initialize and tear down mempaging from xenpaging
to libxc. The refactored code sets up paging and initializes a shared ring and
an event channel used to communicate with Xen; this communication takes place
between the hypervisor and the dom0 tool acting as the pager. The patch also
adds a routine which tears down the ring and the mem paging setup. xenpaging
is changed to use the newly created routines, and has been tested to build and
work with this code.
The refactored code is placed in libxc under the LGPL, while the old xenpaging
code was under the GPL2+ license; permission to relicense the code has been
obtained from the previous copyright holders.

The refactoring is done so that tools which rely on memory paging, such as the
lazy restore pager and xenpaging, can share the same routines to initialize
mempaging; it will also let any future in-tree tools use mempaging. The
refactored code in xc_mem_paging_ring_setup is compiled into libxenguest.

Note: the macro RING_SIZE in tools/ocaml/libs/xc/xenctrl_stubs.c is renamed to
BUF_RING_SIZE because the earlier name collides with the Xen shared ring
definitions in io/ring.h.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: David Scott <dave.scott@citrix.com>
---
 tools/libxc/Makefile                |   2 +
 tools/libxc/xc_mem_paging_setup.c   | 184 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xenguest.h              |  42 ++++++++
 tools/ocaml/libs/xc/xenctrl_stubs.c |   7 +-
 tools/xenpaging/Makefile            |   4 +-
 tools/xenpaging/xenpaging.c         | 110 +++++----------------
 6 files changed, 255 insertions(+), 94 deletions(-)
 create mode 100644 tools/libxc/xc_mem_paging_setup.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 22eef8e..68c5f08 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -70,6 +70,8 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
 GUEST_SRCS-y                 += xc_dom_binloader.c
 GUEST_SRCS-y                 += xc_dom_compat_linux.c
 
+GUEST_SRCS-y                 += xc_mem_paging_setup.c
+
 GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
new file mode 100644
index 0000000..bfb9a9b
--- /dev/null
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -0,0 +1,184 @@
+/*
+ * tools/libxc/xc_mem_paging_setup.c
+ *
+ * Routines to initialize and teardown memory paging.
+ * Create shared ring and event channels to communicate
+ * with the hypervisor.
+ *
+ * Copyright (c) 2014 Dushyant Behl <myselfdushyantbehl@gmail.com>
+ * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include "xg_private.h"
+#include <xen/event_channel.h>
+#include <xen/mem_event.h>
+
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function returns zero on successful completion, and returns
+ * -1 on failure at any intermediate step, with errno set
+ * appropriately.
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch,
+                             xc_evtchn *xce_handle,
+                             domid_t domain_id,
+                             void *ring_page,
+                             int *port,
+                             uint32_t *evtchn_port,
+                             mem_event_back_ring_t *back_ring)
+{
+    int rc;
+    uint64_t pfn;
+    xen_pfn_t ring_pfn, mmap_pfn;
+
+    /* Map the ring page */
+    xc_hvm_param_get(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &pfn);
+
+    ring_pfn = pfn;
+    mmap_pfn = ring_pfn;
+    ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    {
+        /* Map failed, populate ring page */
+        rc = xc_domain_populate_physmap_exact(xch, domain_id,
+                                              1, 0, 0, &ring_pfn);
+        if ( rc != 0 )
+        {
+            PERROR("Failed to populate ring gfn\n");
+            return -1;
+        }
+
+        mmap_pfn = ring_pfn;
+        ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+
+        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        {
+            PERROR("Could not map the ring page\n");
+            return -1;
+        }
+    }
+
+    /* Initialise Xen */
+    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
+    if ( rc != 0 )
+    {
+        switch ( errno ) {
+            case EBUSY:
+                ERROR("mempaging is (or was) active on this domain");
+                break;
+            case ENODEV:
+                ERROR("mempaging requires Hardware Assisted Paging");
+                break;
+            case EMLINK:
+                ERROR("mempaging not supported while iommu passthrough is enabled");
+                break;
+            case EXDEV:
+                ERROR("mempaging not supported in a PoD guest");
+                break;
+            default:
+                PERROR("mempaging: error initialising shared page");
+                break;
+        }
+        return -1;
+    }
+
+    /* Bind event notification */
+    rc = xc_evtchn_bind_interdomain(xce_handle, domain_id, *evtchn_port);
+    if ( rc < 0 )
+    {
+        PERROR("Failed to bind event channel");
+        return -1;
+    }
+    *port = rc;
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+
+    /* Now that the ring is set, remove it from the guest's physmap */
+    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
+    {
+        PERROR("Failed to remove ring_pfn from guest physmap");
+        return -1;
+    }
+
+    return 0;
+}
+
+/*
+ * Mem paging ring and event channel teardown routine.
+ * The function call invalidates the arguments ring_page and port.
+ * The function returns zero on successful completion, and returns
+ * -1 on failure at any intermediate step, with errno set
+ * appropriately.
+ */
+int xc_mem_paging_ring_teardown(xc_interface *xch,
+                                 xc_evtchn *xce_handle,
+                                 domid_t domain_id,
+                                 void *ring_page,
+                                 int *port)
+{
+    int rc;
+
+    /* Unmap the ring page. */
+    rc = munmap(ring_page, PAGE_SIZE);
+    if ( rc < 0 )
+        PERROR("Error while unmapping the ring page");
+    else
+        DPRINTF("successfully unmapped ringpage");
+
+    ring_page = NULL;
+
+    /* Disable memory paging. */
+    rc = xc_mem_paging_disable(xch, domain_id);
+    if ( rc < 0 )
+    {
+        PERROR("Error while disabling paging in xen");
+        goto out;
+    }
+    else
+        DPRINTF("successfully disabled mempaging");
+
+    /* Unbind the event channel. */
+    rc = xc_evtchn_unbind(xce_handle, *port);
+    if ( rc < 0 )
+    {
+        PERROR("Error unbinding event port");
+        goto out;
+    }
+    else
+        DPRINTF("successfully unbound event channel");
+
+    /* Set the port to invalid. */
+    *port = -1;
+
+    out:
+        return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index 341da66..dd66a2f 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -32,6 +32,8 @@
 #define X86_64_B_SIZE   64 
 #define X86_32_B_SIZE   32
 
+#include <xen/mem_event.h>
+
 /* callbacks provided by xc_domain_save */
 struct save_callbacks {
     /* Called after expiration of checkpoint interval,
@@ -322,4 +324,44 @@ xen_pfn_t *xc_map_m2p(xc_interface *xch,
                       unsigned long max_mfn,
                       int prot,
                       unsigned long *mfn0);
+
+/**
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function returns zero on successful completion, and returns
+ * -1 on failure at any intermediate step, with errno set
+ * appropriately.
+ * @parm xch a handle to an open hypervisor interface
+ * @parm xce_handle a handle to an open event channel
+ * @parm domain_id number identifier of the domain
+ * @parm ring_page passed NULL, returned pointer to the shared ring page
+ * @parm port pointer to the integer for storing event port
+ * @parm evtchn_port pointer to the integer for storing event channel port
+ * @parm back_ring pointer to the reader end of the memory event channel ring
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch,
+                             xc_evtchn *xce_handle,
+                             domid_t domain_id,
+                             void *ring_page,
+                             int *port,
+                             uint32_t *evtchn_port,
+                             mem_event_back_ring_t *back_ring);
+/**
+ * Mem paging ring and event channel teardown routine.
+ * The function call invalidates the arguments ring_page and port.
+ * The function returns zero on successful completion, and returns
+ * -1 on failure at any intermediate step, with errno set
+ * appropriately.
+ * @parm xch a handle to an open hypervisor interface
+ * @parm xce_handle a handle to an open event channel
+ * @parm domain_id number identifier of the domain
+ * @parm ring_page pointer to the shared ring page
+ * @parm port pointer to the integer storing event port
+ */
+int xc_mem_paging_ring_teardown(xc_interface *xch,
+                                 xc_evtchn *xce_handle,
+                                 domid_t domain_id,
+                                 void *ring_page,
+                                 int *port);
 #endif /* XENGUEST_H */
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index f0810eb..8ce957e 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -525,13 +525,12 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
 	CAMLreturn(Val_unit);
 }
 
-
-#define RING_SIZE 32768
-static char ring[RING_SIZE];
+#define BUF_RING_SIZE 32768
+static char ring[BUF_RING_SIZE];
 
 CAMLprim value stub_xc_readconsolering(value xch)
 {
-	unsigned int size = RING_SIZE - 1;
+	unsigned int size = BUF_RING_SIZE - 1;
 	char *ring_ptr = ring;
 	int retval;
 
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index 548d9dd..ea370d3 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -1,8 +1,8 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
-LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
+CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
+LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
 
 POLICY    = default
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 82c1ee4..77ed42f 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -29,6 +29,7 @@
 #include <unistd.h>
 #include <poll.h>
 #include <xc_private.h>
+#include <xenguest.h>
 #include <xenstore.h>
 #include <getopt.h>
 
@@ -281,7 +282,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     xentoollog_logger *dbg = NULL;
     char *p;
     int rc;
-    unsigned long ring_pfn, mmap_pfn;
 
     /* Allocate memory */
     paging = calloc(1, sizeof(struct xenpaging));
@@ -338,61 +338,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
-                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
-    mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-    {
-        /* Map failed, populate ring page */
-        rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
-                                              1, 0, 0, &ring_pfn);
-        if ( rc != 0 )
-        {
-            PERROR("Failed to populate ring gfn\n");
-            goto err;
-        }
-
-        mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                    PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-        {
-            PERROR("Could not map the ring page\n");
-            goto err;
-        }
-    }
-    
-    /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
-    if ( rc != 0 )
-    {
-        switch ( errno ) {
-            case EBUSY:
-                ERROR("xenpaging is (or was) active on this domain");
-                break;
-            case ENODEV:
-                ERROR("xenpaging requires Hardware Assisted Paging");
-                break;
-            case EMLINK:
-                ERROR("xenpaging not supported while iommu passthrough is enabled");
-                break;
-            case EXDEV:
-                ERROR("xenpaging not supported in a PoD guest");
-                break;
-            default:
-                PERROR("Error initialising shared page");
-                break;
-        }
-        goto err;
-    }
-
     /* Open event channel */
     paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
     if ( paging->mem_event.xce_handle == NULL )
@@ -401,28 +346,21 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
-                                    paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
-    if ( rc < 0 )
+    /* Initialize paging by setting up ring and event channel. */
+    rc = xc_mem_paging_ring_setup(paging->xc_handle,
+                                  paging->mem_event.xce_handle,
+                                  paging->mem_event.domain_id,
+                                  paging->mem_event.ring_page,
+                                  &paging->mem_event.port,
+                                  &paging->mem_event.evtchn_port,
+                                  &paging->mem_event.back_ring);
+    if ( rc )
     {
-        PERROR("Failed to bind event channel");
+        PERROR("Could not initialize mem paging\n");
         goto err;
     }
 
-    paging->mem_event.port = rc;
-
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
-                   PAGE_SIZE);
-
-    /* Now that the ring is set, remove it from the guest's physmap */
-    if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
-        PERROR("Failed to remove ring from guest physmap");
+    DPRINTF("ring and event channel setup successful");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
@@ -523,21 +461,18 @@ static void xenpaging_teardown(struct xenpaging *paging)
     xs_unwatch(paging->xs_handle, "@releaseDomain", watch_token);
 
     paging->xc_handle = NULL;
-    /* Tear down domain paging in Xen */
-    munmap(paging->mem_event.ring_page, PAGE_SIZE);
-    rc = xc_mem_paging_disable(xch, paging->mem_event.domain_id);
-    if ( rc != 0 )
-    {
-        PERROR("Error tearing down domain paging in xen");
-    }
 
-    /* Unbind VIRQ */
-    rc = xc_evtchn_unbind(paging->mem_event.xce_handle, paging->mem_event.port);
-    if ( rc != 0 )
+    /* Tear down domain paging and ring setup in Xen */
+    rc = xc_mem_paging_ring_teardown(xch, paging->mem_event.xce_handle,
+                                     paging->mem_event.domain_id,
+                                     paging->mem_event.ring_page,
+                                     &paging->mem_event.port);
+    if ( rc < 0 )
     {
-        PERROR("Error unbinding event port");
+        PERROR("Error in mem paging teardown");
     }
-    paging->mem_event.port = -1;
+    else
+        DPRINTF("ring and mempaging teardown successful");
 
     /* Close event channel */
     rc = xc_evtchn_close(paging->mem_event.xce_handle);
@@ -545,8 +480,7 @@ static void xenpaging_teardown(struct xenpaging *paging)
     {
         PERROR("Error closing event channel");
     }
-    paging->mem_event.xce_handle = NULL;
-    
+
     /* Close connection to xenstore */
     xs_close(paging->xs_handle);
 
-- 
1.9.1


* [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated function calls.
  2014-08-07 20:40 Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging to libxc Dushyant Behl
@ 2014-08-07 20:40 ` Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging Dushyant Behl
  2014-08-13 16:28 ` Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  3 siblings, 0 replies; 7+ messages in thread
From: Dushyant Behl @ 2014-08-07 20:40 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl

This patch replaces calls to the deprecated function
xc_map_foreign_batch with calls to xc_map_foreign_bulk
in tools/libxc/xc_mem_paging_setup.c.
The function xc_map_foreign_bulk has a cleaner error reporting
interface than xc_map_foreign_batch.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_mem_paging_setup.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
index bfb9a9b..9741dc9 100644
--- a/tools/libxc/xc_mem_paging_setup.c
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -43,7 +43,7 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
                              uint32_t *evtchn_port,
                              mem_event_back_ring_t *back_ring)
 {
-    int rc;
+    int rc, err;
     uint64_t pfn;
     xen_pfn_t ring_pfn, mmap_pfn;
 
@@ -52,9 +52,9 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
 
     ring_pfn = pfn;
     mmap_pfn = ring_pfn;
-    ring_page = xc_map_foreign_batch(xch, domain_id,
-                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    ring_page = xc_map_foreign_bulk(xch, domain_id,
+                                    PROT_READ | PROT_WRITE, &mmap_pfn, &err, 1);
+    if ( (err != 0) || (ring_page == NULL) )
     {
         /* Map failed, populate ring page */
         rc = xc_domain_populate_physmap_exact(xch, domain_id,
@@ -66,10 +66,10 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         }
 
         mmap_pfn = ring_pfn;
-        ring_page = xc_map_foreign_batch(xch, domain_id,
-                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+        ring_page = xc_map_foreign_bulk(xch, domain_id, PROT_READ | PROT_WRITE,
+                                        &mmap_pfn, &err, 1);
 
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        if ( (err != 0) || (ring_page == NULL) )
         {
             PERROR("Could not map the ring page\n");
             return -1;
-- 
1.9.1


* [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging.
  2014-08-07 20:40 Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging to libxc Dushyant Behl
  2014-08-07 20:40 ` [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated function calls Dushyant Behl
@ 2014-08-07 20:40 ` Dushyant Behl
  2014-08-08  2:37   ` Andres Lagar Cavilla
  2014-08-13 16:28 ` Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
  3 siblings, 1 reply; 7+ messages in thread
From: Dushyant Behl @ 2014-08-07 20:40 UTC (permalink / raw)
  To: xen-devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl

This patch fixes a known race condition bug in the mempaging ring setup
routines. The race was between initializing mem paging and initializing
the shared ring: the code previously initialized mem paging before
removing the ring page from the guest's physical map, which allowed the
guest to interfere with the ring initialisation. The code now removes
the page from the guest's physical map before enabling mempaging, so
that the guest cannot clobber the ring after we initialise it.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_mem_paging_setup.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
index 9741dc9..12469b4 100644
--- a/tools/libxc/xc_mem_paging_setup.c
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -76,6 +76,22 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         }
     }
 
+    /* Clear the ring_pfn */
+    memset(ring_page, 0, PAGE_SIZE);
+
+    rc = xc_domain_pause(xch, domain_id);
+    if ( rc != 0 )
+    {
+        PERROR("Unable to pause domain");
+        return -1;
+    }
+    DPRINTF("Domain pause successful");
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+    DPRINTF("initialized shared ring");
+
     /* Initialise Xen */
     rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
     if ( rc != 0 )
@@ -99,6 +115,7 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         }
         return -1;
     }
+    DPRINTF("enabled mempaging");
 
     /* Bind event notification */
     rc = xc_evtchn_bind_interdomain(xce_handle, domain_id, *evtchn_port);
@@ -109,10 +126,6 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
     }
     *port = rc;
 
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
-    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
-
     /* Now that the ring is set, remove it from the guest's physmap */
     if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
     {
@@ -120,6 +133,14 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
         return -1;
     }
 
+    rc = xc_domain_unpause(xch, domain_id);
+    if ( rc != 0 )
+    {
+        PERROR("Unable to unpause domain");
+        return -1;
+    }
+    DPRINTF("Domain unpause successful");
+
     return 0;
 }
 
-- 
1.9.1


* Re: [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging.
  2014-08-07 20:40 ` [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging Dushyant Behl
@ 2014-08-08  2:37   ` Andres Lagar Cavilla
  0 siblings, 0 replies; 7+ messages in thread
From: Andres Lagar Cavilla @ 2014-08-08  2:37 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: Ian Campbell, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Xen-devel, David Scott


On Thu, Aug 7, 2014 at 4:40 PM, Dushyant Behl <myselfdushyantbehl@gmail.com>
wrote:

> This patch is meant to fix a known race condition bug in mempaging
> ring setup routines. The race condition was between initializing
> mem paging and initializing shared ring, earlier the code initialized
> mem paging before removing the ring page from guest's physical map
> which could enable the guest to interfere with the ring initialisation.
> Now the code removes the page from the guest's physical map before
> enabling mempaging so that the guest cannot clobber the ring after
> we initialise it.
>
> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

(+ IanC)

This patch has a few problems, but raises a couple interesting points about
xc_mem_event_enable. See below:

---
>  tools/libxc/xc_mem_paging_setup.c | 29 +++++++++++++++++++++++++----
>  1 file changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/tools/libxc/xc_mem_paging_setup.c
> b/tools/libxc/xc_mem_paging_setup.c
> index 9741dc9..12469b4 100644
> --- a/tools/libxc/xc_mem_paging_setup.c
> +++ b/tools/libxc/xc_mem_paging_setup.c
> @@ -76,6 +76,22 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>          }
>      }
>
> +    /* Clear the ring_pfn */
> +    memset(ring_page, 0, PAGE_SIZE);
>

Once xc_mem_event_enable maps the ring page, surely there is no downside to
clearing it out? i.e. moving this memset there should help.

+
> +    rc = xc_domain_pause(xch, domain_id);
> +    if ( rc != 0 )
> +    {
> +        PERROR("Unable to pause domain");
> +        return -1;
> +    }
> +    DPRINTF("Domain pause successful");
> +
> +    /* Initialise ring */
> +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
>

Likewise. Once xc_mem_event_enable has mapped (and cleared) the ring page,
and since it's holding the domain paused, there is no downside in applying
SHARED_RING_INIT, methinks. Every consumer of xc_mem_event_enable is bound
to do that. And doing it prior to enabling mem event and unpausing the
domain ensures Xen will have a proper ring to place events at.

If those two make sense, then we can cut a patch against
xc_mem_event_enable.

Thanks
Andres

> +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> +    DPRINTF("initialized shared ring");
> +
>      /* Initialise Xen */
>      rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
>      if ( rc != 0 )
> @@ -99,6 +115,7 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>          }
>          return -1;
>      }
> +    DPRINTF("enabled mempaging");
>
>      /* Bind event notification */
>      rc = xc_evtchn_bind_interdomain(xce_handle, domain_id, *evtchn_port);
> @@ -109,10 +126,6 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>      }
>      *port = rc;
>
> -    /* Initialise ring */
> -    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> -    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> -
>      /* Now that the ring is set, remove it from the guest's physmap */
>      if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
> &ring_pfn) )
>      {
> @@ -120,6 +133,14 @@ int xc_mem_paging_ring_setup(xc_interface *xch,
>          return -1;
>      }
>
> +    rc = xc_domain_unpause(xch, domain_id);
> +    if ( rc != 0 )
> +    {
> +        PERROR("Unable to unpause domain");
> +        return -1;
> +    }
> +    DPRINTF("Domain unpause successful");
> +
>      return 0;
>  }
>
> --
> 1.9.1
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: Refactoring mempaging code from xenpaging to libxc and few updates
  2014-08-07 20:40 Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
                   ` (2 preceding siblings ...)
  2014-08-07 20:40 ` [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging Dushyant Behl
@ 2014-08-13 16:28 ` Dushyant Behl
  2014-08-13 20:54   ` Dushyant Behl
  3 siblings, 1 reply; 7+ messages in thread
From: Dushyant Behl @ 2014-08-13 16:28 UTC (permalink / raw)
  To: Xen Devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl



Hi Everyone,

I am extremely sorry that I didn't notice that most of the code in these
patches does exactly what xc_mem_event_enable (added by XSA-99) already
does. Please ignore this patch series; I have prepared a new series which
takes the necessary bits and pieces from this one and adds them to
xc_mem_event_enable. I'll send the new series shortly; please have a look
at it.

I am really sorry for the inconvenience.

Thanks and Regards,
Dushyant Behl


On Fri, Aug 8, 2014 at 2:10 AM, Dushyant Behl <myselfdushyantbehl@gmail.com>
wrote:

> The following patches refactor mempaging code from xenpaging to libxc -
>
> [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging
> This patch adds two routines xc_mem_paging_ring_setup and
> xc_mem_paging_ring_teardown to libxc at the path
> tools/libxc/xc_mem_paging_setup.c, which is a new file.
>
> [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated
> functions
> Replaces calls to deprecated function xc_map_foreign_batch with calls to
> xc_map_foreign_bulk.
>
> [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition
> Fix for a known race condition in the mempaging ring setup code; the race
> is similar to the one addressed by XSA-99. This patch takes care of all
> the points mentioned in XSA-99, and also clears the ring page to remove
> any bogus input.
>
> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>




* Re: Refactoring mempaging code from xenpaging to libxc and few updates
  2014-08-13 16:28 ` Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
@ 2014-08-13 20:54   ` Dushyant Behl
  0 siblings, 0 replies; 7+ messages in thread
From: Dushyant Behl @ 2014-08-13 20:54 UTC (permalink / raw)
  To: Xen Devel
  Cc: David Scott, Stefano Stabellini, Andrew Cooper, Ian Jackson,
	Andres Lagar Cavilla, Dushyant Behl



The updated patch series is present at the following thread -
http://lists.xenproject.org/archives/html/xen-devel/2014-08/msg01609.html.

Thanks and Regards,
Dushyant Behl


On Wed, Aug 13, 2014 at 9:58 PM, Dushyant Behl <myselfdushyantbehl@gmail.com
> wrote:

> Hi Everyone,
>
> I am extremely sorry that I didn't notice that most of the code in these
> patches does exactly what xc_mem_event_enable (added by XSA-99) already
> does. Please ignore this patch series; I have prepared a new series which
> takes the necessary bits and pieces from this one and adds them to
> xc_mem_event_enable. I'll send the new series shortly; please have a look
> at it.
>
> I am really sorry for the inconvenience.
>
> Thanks and Regards,
> Dushyant Behl
>
>
> On Fri, Aug 8, 2014 at 2:10 AM, Dushyant Behl <
> myselfdushyantbehl@gmail.com> wrote:
>
>> The following patches refactor mempaging code from xenpaging to libxc -
>>
>> [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging
>> This patch adds two routines xc_mem_paging_ring_setup and
>> xc_mem_paging_ring_teardown to libxc at the path
>> tools/libxc/xc_mem_paging_setup.c, which is a new file.
>>
>> [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated
>> functions
>> Replaces calls to deprecated function xc_map_foreign_batch with calls to
>> xc_map_foreign_bulk.
>>
>> [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition
>> Fix for a known race condition in the mempaging ring setup code; the race
>> is similar to the one addressed by XSA-99. This patch takes care of all
>> the points mentioned in XSA-99, and also clears the ring page to remove
>> any bogus input.
>>
>> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>>
>
>




end of thread, other threads:[~2014-08-13 20:54 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
2014-08-07 20:40 Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
2014-08-07 20:40 ` [PATCH v3 1/3] tools/libxc: refactored mempaging code from xenpaging to libxc Dushyant Behl
2014-08-07 20:40 ` [PATCH v3 2/3] tools/libxc/xc_mem_paging_setup.c: replacing deprecated function calls Dushyant Behl
2014-08-07 20:40 ` [PATCH v3 3/3] tools/libxc/xc_mem_paging_setup.c: FIX- Race condition between initializing shared ring and mempaging Dushyant Behl
2014-08-08  2:37   ` Andres Lagar Cavilla
2014-08-13 16:28 ` Refactoring mempaging code from xenpaging to libxc and few updates Dushyant Behl
2014-08-13 20:54   ` Dushyant Behl
