* [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
@ 2014-06-12  9:23 Dushyant Behl
  2014-06-12  9:42 ` Dave Scott
  2014-06-12 13:22 ` Andrew Cooper
  0 siblings, 2 replies; 23+ messages in thread
From: Dushyant Behl @ 2014-06-12  9:23 UTC (permalink / raw)
  To: xen-devel
  Cc: andres.lagarcavilla, Dushyant Behl, dave.scott, stefano.stabellini

This patch is part of the work done under the GSoC project -
Lazy Restore Using Memory Paging.

This patch moves the code to initialize mempaging from xenpaging to libxc.
The code refactored out of xenpaging sets up paging: it initializes a
shared ring and an event channel to communicate with Xen. This
communication takes place between the hypervisor and the dom0 tool which
acts as the pager. The xenpaging code is changed to use the newly created
routines and has been tested to build and work with them.

The refactoring is done so that any tool which acts as a pager during
lazy restore, or which otherwise uses memory paging, can use the same
routines to initialize mempaging. This will also allow any future
(in-tree) tools to use mempaging.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
---
 tools/libxc/Makefile                |   3 +
 tools/libxc/xc_mem_paging_setup.c   | 132 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xenctrl.h               |  12 ++++
 tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
 tools/xenpaging/Makefile            |   4 +-
 tools/xenpaging/xenpaging.c         |  93 +++----------------------
 6 files changed, 167 insertions(+), 88 deletions(-)
 create mode 100644 tools/libxc/xc_mem_paging_setup.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index a74b19e..2439853 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -69,6 +69,9 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
 GUEST_SRCS-y                 += xc_dom_binloader.c
 GUEST_SRCS-y                 += xc_dom_compat_linux.c
 
+# mem paging setup
+GUEST_SRCS-y                 += xc_mem_paging_setup.c
+
 GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
new file mode 100644
index 0000000..125d203
--- /dev/null
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -0,0 +1,132 @@
+/*
+ * tools/libxc/xc_mem_paging_setup.c
+ *
+ * Routines to initialize memory paging. Create shared ring
+ * and event channels to communicate with the hypervisor.
+ *
+ * Copyright (c) 2014 Dushyant Behl
+ * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * 
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include "xc_private.h"
+#include <xen/event_channel.h>
+#include <xen/mem_event.h>
+
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
+                                void *ring_page, int *port,
+                                uint32_t *evtchn_port,
+                                xc_evtchn **xce_handle,
+                                mem_event_back_ring_t *back_ring)
+{
+    int rc;
+    unsigned long ring_pfn, mmap_pfn;
+
+    /* Map the ring page */
+    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
+    mmap_pfn = ring_pfn;
+    ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    {
+        /* Map failed, populate ring page */
+        rc = xc_domain_populate_physmap_exact(xch, domain_id,
+                                              1, 0, 0, &ring_pfn);
+        if ( rc != 0 )
+        {
+            PERROR("Failed to populate ring gfn\n");
+            return 1;
+        }
+
+        mmap_pfn = ring_pfn;
+        ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+
+        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        {
+            PERROR("Could not map the ring page\n");
+            return 1;
+        }
+    }
+
+    /* Initialise Xen */
+    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
+    if ( rc != 0 )
+    {
+        switch ( errno ) {
+            case EBUSY:
+                ERROR("mempaging is (or was) active on this domain");
+                break;
+            case ENODEV:
+                ERROR("mempaging requires Hardware Assisted Paging");
+                break;
+            case EMLINK:
+                ERROR("mempaging not supported while iommu passthrough is enabled");
+                break;
+            case EXDEV:
+                ERROR("mempaging not supported in a PoD guest");
+                break;
+            default:
+                PERROR("mempaging: error initialising shared page");
+                break;
+        }
+        return 1;
+    }
+
+    /* Open event channel */
+    *xce_handle = xc_evtchn_open(NULL, 0);
+    if ( *xce_handle == NULL )
+    {
+        PERROR("Failed to open event channel");
+        return 1;
+    }
+
+    /* Bind event notification */
+    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id, *evtchn_port);
+    if ( rc < 0 )
+    {
+        PERROR("Failed to bind event channel");
+        return 1;
+    }
+    *port = rc;
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+
+    /* Now that the ring is set, remove it from the guest's physmap */
+    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
+    {
+        PERROR("Failed to remove ring from guest physmap");
+        return 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 02129f7..c871a2f 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -47,6 +47,7 @@
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
 #include <xen/kexec.h>
+#include <xen/mem_event.h>
 
 #include "xentoollog.h"
 
@@ -2039,6 +2040,17 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_load(xc_interface *xch, domid_t domain_id, 
                         unsigned long gfn, void *buffer);
 
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
+                                void *ring_page, int *port,
+                                uint32_t *evtchn_port,
+                                xc_evtchn **xceh,
+                                mem_event_back_ring_t *_back_ring);
+
 /** 
  * Access tracking operations.
  * Supported only on Intel EPT 64 bit processors.
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index ff29b47..37a4db7 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
 	CAMLreturn(Val_unit);
 }
 
-
-#define RING_SIZE 32768
-static char ring[RING_SIZE];
+/*
+ * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
+ * collides with the xen shared ring definitions in io/ring.h
+ */
+#define BUF_RING_SIZE 32768
+static char ring[BUF_RING_SIZE];
 
 CAMLprim value stub_xc_readconsolering(value xch)
 {
-	unsigned int size = RING_SIZE - 1;
+	unsigned int size = BUF_RING_SIZE - 1;
 	char *ring_ptr = ring;
 	int retval;
 
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index 548d9dd..ea370d3 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -1,8 +1,8 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
-LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
+CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
+LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
 
 POLICY    = default
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 5ef2f09..2b81408 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     xentoollog_logger *dbg = NULL;
     char *p;
     int rc;
-    unsigned long ring_pfn, mmap_pfn;
 
     /* Allocate memory */
     paging = calloc(1, sizeof(struct xenpaging));
@@ -338,91 +337,21 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
-                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
-    mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-    {
-        /* Map failed, populate ring page */
-        rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
-                                              1, 0, 0, &ring_pfn);
-        if ( rc != 0 )
-        {
-            PERROR("Failed to populate ring gfn\n");
-            goto err;
-        }
-
-        mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                    PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-        {
-            PERROR("Could not map the ring page\n");
-            goto err;
-        }
-    }
-    
-    /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
-    if ( rc != 0 )
-    {
-        switch ( errno ) {
-            case EBUSY:
-                ERROR("xenpaging is (or was) active on this domain");
-                break;
-            case ENODEV:
-                ERROR("xenpaging requires Hardware Assisted Paging");
-                break;
-            case EMLINK:
-                ERROR("xenpaging not supported while iommu passthrough is enabled");
-                break;
-            case EXDEV:
-                ERROR("xenpaging not supported in a PoD guest");
-                break;
-            default:
-                PERROR("Error initialising shared page");
-                break;
-        }
-        goto err;
-    }
-
-    /* Open event channel */
-    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( paging->mem_event.xce_handle == NULL )
-    {
-        PERROR("Failed to open event channel");
-        goto err;
-    }
-
-    /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
+    /* Initialize paging by setting up ring and event channel. */
+    rc = xc_mem_paging_ring_setup(paging->xc_handle,
                                     paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
-    if ( rc < 0 )
-    {
-        PERROR("Failed to bind event channel");
+                                    paging->mem_event.ring_page,
+                                    &paging->mem_event.port,
+                                    &paging->mem_event.evtchn_port,
+                                    &paging->mem_event.xce_handle,
+                                    &paging->mem_event.back_ring);
+    if( rc )
+    {
+        PERROR("Could not initialize mem paging\n");
         goto err;
     }
 
-    paging->mem_event.port = rc;
-
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
-                   PAGE_SIZE);
-
-    /* Now that the ring is set, remove it from the guest's physmap */
-    if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
-        PERROR("Failed to remove ring from guest physmap");
+    DPRINTF("ring and event channel setup successful\n");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
-- 
1.9.1


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12  9:23 [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
@ 2014-06-12  9:42 ` Dave Scott
  2014-06-12 13:22 ` Andrew Cooper
  1 sibling, 0 replies; 23+ messages in thread
From: Dave Scott @ 2014-06-12  9:42 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: Dave Scott, Stefano Stabellini, andres.lagarcavilla, xen-devel


On 12 Jun 2014, at 10:23, Dushyant Behl <myselfdushyantbehl@gmail.com> wrote:

> This patch is part of the work done under the gsoc project -
> Lazy Restore Using Memory Paging.

[ snip ]

I can only comment meaningfully on this minor part of the diff:

> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index ff29b47..37a4db7 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
> 	CAMLreturn(Val_unit);
> }
> 
> -
> -#define RING_SIZE 32768
> -static char ring[RING_SIZE];
> +/*
> + * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
> + * collides with the xen shared ring definitions in io/ring.h
> + */
> +#define BUF_RING_SIZE 32768
> +static char ring[BUF_RING_SIZE];
> 
> CAMLprim value stub_xc_readconsolering(value xch)
> {
> -	unsigned int size = RING_SIZE - 1;
> +	unsigned int size = BUF_RING_SIZE - 1;
> 	char *ring_ptr = ring;
> 	int retval;
> 

This is fine by me.

Acked-by: David Scott <dave.scott@citrix.com>

Cheers,
Dave


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12  9:23 [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
  2014-06-12  9:42 ` Dave Scott
@ 2014-06-12 13:22 ` Andrew Cooper
  2014-06-12 13:27   ` Ian Campbell
  2014-06-12 13:59   ` Andres Lagar Cavilla
  1 sibling, 2 replies; 23+ messages in thread
From: Andrew Cooper @ 2014-06-12 13:22 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: andres.lagarcavilla, stefano.stabellini, dave.scott, xen-devel

On 12/06/14 10:23, Dushyant Behl wrote:
> This patch is part of the work done under the gsoc project -
> Lazy Restore Using Memory Paging.
>
> This patch moves the code to initialize mempaging from xenpaging to libxc.

The source code is named libxc, but technically you are moving it into
libxenguest, which is a separate library from libxenctrl.

> The code refractored from xenpaging is the code which sets up paging,
> initializes a shared ring and event channel to communicate with xen. This
> communication is done between the hypervisor and the dom0 tool which performs
> the activity of pager. The xenpaging code is changed to use the newly created
> routines and is tested to properly build and work with this code.
>
> The refractoring is done so that any tool which will act as pager in
> lazy restore or use memory paging can use a same routines to initialize mempaging.
> This refactoring will also allow any future (in-tree) tools to use mempaging.
>
> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> ---
>  tools/libxc/Makefile                |   3 +
>  tools/libxc/xc_mem_paging_setup.c   | 132 ++++++++++++++++++++++++++++++++++++
>  tools/libxc/xenctrl.h               |  12 ++++
>  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>  tools/xenpaging/Makefile            |   4 +-
>  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>  6 files changed, 167 insertions(+), 88 deletions(-)
>  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>
> diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> index a74b19e..2439853 100644
> --- a/tools/libxc/Makefile
> +++ b/tools/libxc/Makefile
> @@ -69,6 +69,9 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
>  GUEST_SRCS-y                 += xc_dom_binloader.c
>  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>  
> +# mem paging setup
> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> +

I don't think you need the comment here.  It is obvious from the name of
the file, although I would suggest you name it "xc_dom_paging.c" or
something a tad less specific.

>  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
> diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
> new file mode 100644
> index 0000000..125d203
> --- /dev/null
> +++ b/tools/libxc/xc_mem_paging_setup.c
> @@ -0,0 +1,132 @@
> +/*
> + * tools/libxc/xc_mem_paging_setup.c
> + *
> + * Routines to initialize memory paging. Create shared ring
> + * and event channels to communicate with the hypervisor.
> + *
> + * Copyright (c) 2014 Dushyant Behl
> + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> + * 
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
> + */
> +
> +#include "xc_private.h"
> +#include <xen/event_channel.h>
> +#include <xen/mem_event.h>
> +
> +/*
> + * Mem paging ring and event channel setup routine.
> + * Setup a shared ring and an event channel to communicate between
> + * hypervisor and the tool performing mem paging operations.

Please document here, or in the header file, what the error semantics of
the function are.  Libxc is a complete mess so these things need
spelling out.

> + */
> +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> +                                void *ring_page, int *port,
> +                                uint32_t *evtchn_port,
> +                                xc_evtchn **xce_handle,
> +                                mem_event_back_ring_t *back_ring)

The parameters should be aligned with the xc_interface.

> +{
> +    int rc;
> +    unsigned long ring_pfn, mmap_pfn;

uint64_t's as these are hvmparams

David Vrabel's GenID series fixes the prototypes of
xc_[gs]et_hvm_param() to be sane.

> +
> +    /* Map the ring page */
> +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
> +    mmap_pfn = ring_pfn;
> +    ring_page = xc_map_foreign_batch(xch, domain_id,
> +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);

You are stuffing a pointer into an unsigned long?

> +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )

What is this check for?

> +    {
> +        /* Map failed, populate ring page */
> +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
> +                                              1, 0, 0, &ring_pfn);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Failed to populate ring gfn\n");
> +            return 1;

You are (implicitly) setting errno.  Please return -1;

> +        }
> +
> +        mmap_pfn = ring_pfn;
> +        ring_page = xc_map_foreign_batch(xch, domain_id,
> +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> +
> +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> +        {
> +            PERROR("Could not map the ring page\n");
> +            return 1;
> +        }
> +    }
> +
> +    /* Initialise Xen */
> +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
> +    if ( rc != 0 )
> +    {
> +        switch ( errno ) {
> +            case EBUSY:
> +                ERROR("mempaging is (or was) active on this domain");
> +                break;
> +            case ENODEV:
> +                ERROR("mempaging requires Hardware Assisted Paging");
> +                break;
> +            case EMLINK:
> +                ERROR("mempaging not supported while iommu passthrough is enabled");
> +                break;
> +            case EXDEV:
> +                ERROR("mempaging not supported in a PoD guest");
> +                break;
> +            default:
> +                PERROR("mempaging: error initialising shared page");
> +                break;
> +        }
> +        return 1;
> +    }
> +
> +    /* Open event channel */
> +    *xce_handle = xc_evtchn_open(NULL, 0);
> +    if ( *xce_handle == NULL )
> +    {
> +        PERROR("Failed to open event channel");
> +        return 1;
> +    }
> +
> +    /* Bind event notification */
> +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id, *evtchn_port);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Failed to bind event channel");
> +        return 1;
> +    }
> +    *port = rc;
> +
> +    /* Initialise ring */
> +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> +
> +    /* Now that the ring is set, remove it from the guest's physmap */
> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
> +    {
> +        PERROR("Failed to remove ring from guest physmap");
> +        return 1;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 02129f7..c871a2f 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -47,6 +47,7 @@
>  #include <xen/xsm/flask_op.h>
>  #include <xen/tmem.h>
>  #include <xen/kexec.h>
> +#include <xen/mem_event.h>
>  
>  #include "xentoollog.h"
>  
> @@ -2039,6 +2040,17 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
>  int xc_mem_paging_load(xc_interface *xch, domid_t domain_id, 
>                          unsigned long gfn, void *buffer);
>  
> +/*
> + * Mem paging ring and event channel setup routine.
> + * Setup a shared ring and an event channel to communicate between
> + * hypervisor and the tool performing mem paging operations.
> + */
> +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> +                                void *ring_page, int *port,
> +                                uint32_t *evtchn_port,
> +                                xc_evtchn **xceh,
> +                                mem_event_back_ring_t *_back_ring);
> +
>  /** 
>   * Access tracking operations.
>   * Supported only on Intel EPT 64 bit processors.
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index ff29b47..37a4db7 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
>  	CAMLreturn(Val_unit);
>  }
>  
> -
> -#define RING_SIZE 32768
> -static char ring[RING_SIZE];
> +/*
> + * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
> + * collides with the xen shared ring definitions in io/ring.h
> + */
> +#define BUF_RING_SIZE 32768
> +static char ring[BUF_RING_SIZE];
>  
>  CAMLprim value stub_xc_readconsolering(value xch)
>  {
> -	unsigned int size = RING_SIZE - 1;
> +	unsigned int size = BUF_RING_SIZE - 1;
>  	char *ring_ptr = ring;
>  	int retval;
>  
> diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
> index 548d9dd..ea370d3 100644
> --- a/tools/xenpaging/Makefile
> +++ b/tools/xenpaging/Makefile
> @@ -1,8 +1,8 @@
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/tools/Rules.mk
>  
> -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
> +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)

You don't introduce any libxenstore dependencies as far as I can see.

>  LDFLAGS += $(PTHREAD_LDFLAGS)
>  
>  POLICY    = default
> diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
> index 5ef2f09..2b81408 100644
> --- a/tools/xenpaging/xenpaging.c
> +++ b/tools/xenpaging/xenpaging.c
> @@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
>      xentoollog_logger *dbg = NULL;
>      char *p;
>      int rc;
> -    unsigned long ring_pfn, mmap_pfn;
>  
>      /* Allocate memory */
>      paging = calloc(1, sizeof(struct xenpaging));
> @@ -338,91 +337,21 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
>          goto err;
>      }
>  
> -    /* Map the ring page */
> -    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
> -                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
> -    mmap_pfn = ring_pfn;
> -    paging->mem_event.ring_page = 
> -        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
> -                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> -    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> -    {
> -        /* Map failed, populate ring page */
> -        rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
> -                                              paging->mem_event.domain_id,
> -                                              1, 0, 0, &ring_pfn);
> -        if ( rc != 0 )
> -        {
> -            PERROR("Failed to populate ring gfn\n");
> -            goto err;
> -        }
> -
> -        mmap_pfn = ring_pfn;
> -        paging->mem_event.ring_page = 
> -            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
> -                                    PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> -        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> -        {
> -            PERROR("Could not map the ring page\n");
> -            goto err;
> -        }
> -    }
> -    
> -    /* Initialise Xen */
> -    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
> -                             &paging->mem_event.evtchn_port);
> -    if ( rc != 0 )
> -    {
> -        switch ( errno ) {
> -            case EBUSY:
> -                ERROR("xenpaging is (or was) active on this domain");
> -                break;
> -            case ENODEV:
> -                ERROR("xenpaging requires Hardware Assisted Paging");
> -                break;
> -            case EMLINK:
> -                ERROR("xenpaging not supported while iommu passthrough is enabled");
> -                break;
> -            case EXDEV:
> -                ERROR("xenpaging not supported in a PoD guest");
> -                break;
> -            default:
> -                PERROR("Error initialising shared page");
> -                break;
> -        }
> -        goto err;
> -    }
> -
> -    /* Open event channel */
> -    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
> -    if ( paging->mem_event.xce_handle == NULL )
> -    {
> -        PERROR("Failed to open event channel");
> -        goto err;
> -    }
> -
> -    /* Bind event notification */
> -    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
> +    /* Initialize paging by setting up ring and event channel. */
> +    rc = xc_mem_paging_ring_setup(paging->xc_handle,
>                                      paging->mem_event.domain_id,
> -                                    paging->mem_event.evtchn_port);
> -    if ( rc < 0 )
> -    {
> -        PERROR("Failed to bind event channel");
> +                                    paging->mem_event.ring_page,
> +                                    &paging->mem_event.port,
> +                                    &paging->mem_event.evtchn_port,
> +                                    &paging->mem_event.xce_handle,
> +                                    &paging->mem_event.back_ring);
> +    if( rc )
> +    {
> +        PERROR("Could not initialize mem paging\n");
>          goto err;
>      }
>  
> -    paging->mem_event.port = rc;
> -
> -    /* Initialise ring */
> -    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
> -    BACK_RING_INIT(&paging->mem_event.back_ring,
> -                   (mem_event_sring_t *)paging->mem_event.ring_page,
> -                   PAGE_SIZE);
> -
> -    /* Now that the ring is set, remove it from the guest's physmap */
> -    if ( xc_domain_decrease_reservation_exact(xch, 
> -                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
> -        PERROR("Failed to remove ring from guest physmap");
> +    DPRINTF("ring and event channel setup successful\n");

You don't need trailing \n's in DPRINTF()s

~Andrew

>  
>      /* Get max_pages from guest if not provided via cmdline */
>      if ( !paging->max_pages )


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:22 ` Andrew Cooper
@ 2014-06-12 13:27   ` Ian Campbell
  2014-06-12 13:29     ` Andrew Cooper
  2014-06-12 13:59   ` Andres Lagar Cavilla
  1 sibling, 1 reply; 23+ messages in thread
From: Ian Campbell @ 2014-06-12 13:27 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: andres.lagarcavilla, xen-devel, Dushyant Behl, dave.scott,
	stefano.stabellini

On Thu, 2014-06-12 at 14:22 +0100, Andrew Cooper wrote:
> > +# mem paging setup
> > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> > +
> 
> I don't think you need the comment here.  It is obvious from the name of
> the file, although I would suggest you name it "xc_dom_paging.c" or
> something a tad less specific.

xc_dom_* are, for better or worse, generally domain builder related.

Given the API is xc_mem_paging_*, I don't think the name here is so bad;
I might be inclined to drop the _setup if there is any chance of
anything else being added here.


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:27   ` Ian Campbell
@ 2014-06-12 13:29     ` Andrew Cooper
  2014-06-12 13:49       ` Andres Lagar Cavilla
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2014-06-12 13:29 UTC (permalink / raw)
  To: Ian Campbell
  Cc: andres.lagarcavilla, xen-devel, Dushyant Behl, dave.scott,
	stefano.stabellini

On 12/06/14 14:27, Ian Campbell wrote:
> On Thu, 2014-06-12 at 14:22 +0100, Andrew Cooper wrote:
>>> +# mem paging setup
>>> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>>> +
>> I don't think you need the comment here.  It is obvious from the name of
>> the file, although I would suggest you name it "xc_dom_paging.c" or
>> something a tad less specific.
> xc_dom_* are, for better or worse, generally domain builder related.
>
> Given the API is xen_mem_paging_* I don't think the name here is so bad,
> I might be inclined to drop the _setup if there is any chance of
> anything else being added here.
>
>

Hmm yes I suppose.  I would certainly drop the _setup, but
"xc_mem_paging.c" is fine.

~Andrew


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:29     ` Andrew Cooper
@ 2014-06-12 13:49       ` Andres Lagar Cavilla
  2014-06-12 13:55         ` Ian Campbell
  0 siblings, 1 reply; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-12 13:49 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: dave.scott, xen-devel, Dushyant Behl, Ian Campbell, stefano.stabellini


On Thu, Jun 12, 2014 at 6:29 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 12/06/14 14:27, Ian Campbell wrote:
> > On Thu, 2014-06-12 at 14:22 +0100, Andrew Cooper wrote:
> >>> +# mem paging setup
> >>> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> >>> +
> >> I don't think you need the comment here.  It is obvious from the name of
> >> the file, although I would suggest you name it "xc_dom_paging.c" or
> >> something a tad less specific.
> > xc_dom_* are, for better or worse, generally domain builder related.
> >
> > Given the API is xen_mem_paging_* I don't think the name here is so bad,
> > I might be inclined to drop the _setup if there is any chance of
> > anything else being added here.
> >
> >
>
> Hmm yes I suppose.  I would certainly drop the _setup, but
> "xc_mem_paging.c" is fine.
>
xc_mem_paging.c is part of libxenctrl and wraps the hypercalls to page in, page out, etc.
Andres

>
> ~Andrew
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:49       ` Andres Lagar Cavilla
@ 2014-06-12 13:55         ` Ian Campbell
  2014-06-12 14:01           ` Andres Lagar Cavilla
  0 siblings, 1 reply; 23+ messages in thread
From: Ian Campbell @ 2014-06-12 13:55 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: Andrew Cooper, xen-devel, Dushyant Behl, dave.scott, stefano.stabellini

On Thu, 2014-06-12 at 06:49 -0700, Andres Lagar Cavilla wrote:
> On Thu, Jun 12, 2014 at 6:29 AM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>         On 12/06/14 14:27, Ian Campbell wrote:
>         > On Thu, 2014-06-12 at 14:22 +0100, Andrew Cooper wrote:
>         >>> +# mem paging setup
>         >>> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>         >>> +
>         >> I don't think you need the comment here.  It is obvious
>         from the name of
>         >> the file, although I would suggest you name it
>         "xc_dom_paging.c" or
>         >> something a tad less specific.
>         > xc_dom_* are, for better or worse, generally domain builder
>         related.
>         >
>         > Given the API is xen_mem_paging_* I don't think the name
>         here is so bad,
>         > I might be inclined to drop the _setup if there is any
>         chance of
>         > anything else being added here.
>         >
>         >
>         
>         
>         Hmm yes I suppose.  I would certainly drop the _setup, but
>         "xc_mem_paging.c" is fine.
> xc_mem_paging.c is part of libxenctrl and wraps the hypercalls to page
> in, page out etc.

Would it be inappropriate to put this new function in that file then?


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:22 ` Andrew Cooper
  2014-06-12 13:27   ` Ian Campbell
@ 2014-06-12 13:59   ` Andres Lagar Cavilla
  2014-06-12 14:03     ` Dushyant Behl
  2014-06-12 14:11     ` Andrew Cooper
  1 sibling, 2 replies; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-12 13:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: stefano.stabellini, Dushyant Behl, dave.scott, xen-devel


On Thu, Jun 12, 2014 at 6:22 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 12/06/14 10:23, Dushyant Behl wrote:
> > This patch is part of the work done under the gsoc project -
> > Lazy Restore Using Memory Paging.
> >
> > This patch moves the code to initialize mempaging from xenpaging to
> libxc.
>
> The source code is named libxc, but technically you are moving it into
> libxenguest which is a separate library from libxenctrl.
>
> > The code refractored from xenpaging is the code which sets up paging,
> > initializes a shared ring and event channel to communicate with xen. This
> > communication is done between the hypervisor and the dom0 tool which
> performs
> > the activity of pager. The xenpaging code is changed to use the newly
> created
> > routines and is tested to properly build and work with this code.
> >
> > The refractoring is done so that any tool which will act as pager in
> > lazy restore or use memory paging can use a same routines to initialize
> mempaging.
> > This refactoring will also allow any future (in-tree) tools to use
> mempaging.
> >
> > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>
> ---
> >  tools/libxc/Makefile                |   3 +
> >  tools/libxc/xc_mem_paging_setup.c   | 132
> ++++++++++++++++++++++++++++++++++++
> >  tools/libxc/xenctrl.h               |  12 ++++
> >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
> >  tools/xenpaging/Makefile            |   4 +-
> >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
> >  6 files changed, 167 insertions(+), 88 deletions(-)
> >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
> >
> > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> > index a74b19e..2439853 100644
> > --- a/tools/libxc/Makefile
> > +++ b/tools/libxc/Makefile
> > @@ -69,6 +69,9 @@ GUEST_SRCS-$(CONFIG_ARM)     +=
> xc_dom_armzimageloader.c
> >  GUEST_SRCS-y                 += xc_dom_binloader.c
> >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
> >
> > +# mem paging setup
> > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> > +
>
> I don't think you need the comment here.  It is obvious from the name of
> the file, although I would suggest you name it "xc_dom_paging.c" or
> something a tad less specific.
>
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
> > diff --git a/tools/libxc/xc_mem_paging_setup.c
> b/tools/libxc/xc_mem_paging_setup.c
> > new file mode 100644
> > index 0000000..125d203
> > --- /dev/null
> > +++ b/tools/libxc/xc_mem_paging_setup.c
> > @@ -0,0 +1,132 @@
> > +/*
> > + * tools/libxc/xc_mem_paging_setup.c
> > + *
> > + * Routines to initialize memory paging. Create shared ring
> > + * and event channels to communicate with the hypervisor.
> > + *
> > + * Copyright (c) 2014 Dushyant Behl
> > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> > + *
> > + * This library is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * This library is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with this library; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
>  02110-1301 USA
> > + */
> > +
> > +#include "xc_private.h"
> > +#include <xen/event_channel.h>
> > +#include <xen/mem_event.h>
> > +
> > +/*
> > + * Mem paging ring and event channel setup routine.
> > + * Setup a shared ring and an event channel to communicate between
> > + * hypervisor and the tool performing mem paging operations.
>
> Please document here, or in the header file, what the error semantics of
> the function are.  Libxc is a complete mess so these things need
> spelling out.
>
> > + */
> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> > +                                void *ring_page, int *port,
> > +                                uint32_t *evtchn_port,
> > +                                xc_evtchn **xce_handle,
> > +                                mem_event_back_ring_t *back_ring)
>
> The parameters should be aligned with the xc_interface.
>
> > +{
> > +    int rc;
> > +    unsigned long ring_pfn, mmap_pfn;
>
> uint64_t's as these are hvmparams
>
> David Vrabel's GenID series fixes the prototypes of
> xc_[gs]et_hvm_param() to be sane.
>
Andrew, we were thinking this should be rebased onto your git branch for
save restore rework, which I assume contains these GenID changes as well.
Can you point to the right git coordinates? Thanks

>
> > +
> > +    /* Map the ring page */
> > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN,
> &ring_pfn);
> > +    mmap_pfn = ring_pfn;
> > +    ring_page = xc_map_foreign_batch(xch, domain_id,
> > +                                        PROT_READ | PROT_WRITE,
> &mmap_pfn, 1);
>
> You are stuffing a pointer into an unsigned long?
>
ring_page is a void *, and mmap_pfn is an unsigned long.

>
> > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>
> What is this check for?
>
foreign_batch has the old semantics of flagging an error by setting the top bits
of the affected pfn. Maybe use xc_map_foreign_bulk, with its cleaner per-page
error reporting?

>
> > +    {
> > +        /* Map failed, populate ring page */
> > +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
> > +                                              1, 0, 0, &ring_pfn);
> > +        if ( rc != 0 )
> > +        {
> > +            PERROR("Failed to populate ring gfn\n");
> > +            return 1;
>
> You are (implicitly) setting errno.  Please return -1;
>
> > +        }
> > +
> > +        mmap_pfn = ring_pfn;
> > +        ring_page = xc_map_foreign_batch(xch, domain_id,
> > +                                        PROT_READ | PROT_WRITE,
> &mmap_pfn, 1);
> > +
> > +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> > +        {
> > +            PERROR("Could not map the ring page\n");
> > +            return 1;
> > +        }
> > +    }
> > +
> > +    /* Initialise Xen */
> > +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
> > +    if ( rc != 0 )
> > +    {
> > +        switch ( errno ) {
> > +            case EBUSY:
> > +                ERROR("mempaging is (or was) active on this domain");
> > +                break;
> > +            case ENODEV:
> > +                ERROR("mempaging requires Hardware Assisted Paging");
> > +                break;
> > +            case EMLINK:
> > +                ERROR("mempaging not supported while iommu passthrough
> is enabled");
> > +                break;
> > +            case EXDEV:
> > +                ERROR("mempaging not supported in a PoD guest");
> > +                break;
> > +            default:
> > +                PERROR("mempaging: error initialising shared page");
> > +                break;
> > +        }
> > +        return 1;
> > +    }
> > +
> > +    /* Open event channel */
> > +    *xce_handle = xc_evtchn_open(NULL, 0);
> > +    if ( *xce_handle == NULL )
> > +    {
> > +        PERROR("Failed to open event channel");
> > +        return 1;
> > +    }
> > +
> > +    /* Bind event notification */
> > +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id,
> *evtchn_port);
> > +    if ( rc < 0 )
> > +    {
> > +        PERROR("Failed to bind event channel");
> > +        return 1;
> > +    }
> > +    *port = rc;
> > +
> > +    /* Initialise ring */
> > +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> > +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page,
> PAGE_SIZE);
> > +
> > +    /* Now that the ring is set, remove it from the guest's physmap */
> > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
> &ring_pfn) )
> > +    {
> > +        PERROR("Failed to remove ring from guest physmap");
> > +        return 1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index 02129f7..c871a2f 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -47,6 +47,7 @@
> >  #include <xen/xsm/flask_op.h>
> >  #include <xen/tmem.h>
> >  #include <xen/kexec.h>
> > +#include <xen/mem_event.h>
> >
> >  #include "xentoollog.h"
> >
> > @@ -2039,6 +2040,17 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t
> domain_id, unsigned long gfn);
> >  int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
> >                          unsigned long gfn, void *buffer);
> >
> > +/*
> > + * Mem paging ring and event channel setup routine.
> > + * Setup a shared ring and an event channel to communicate between
> > + * hypervisor and the tool performing mem paging operations.
> > + */
> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> > +                                void *ring_page, int *port,
> > +                                uint32_t *evtchn_port,
> > +                                xc_evtchn **xceh,
> > +                                mem_event_back_ring_t *_back_ring);
> > +
> >  /**
> >   * Access tracking operations.
> >   * Supported only on Intel EPT 64 bit processors.
> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index ff29b47..37a4db7 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch,
> value domid)
> >       CAMLreturn(Val_unit);
> >  }
> >
> > -
> > -#define RING_SIZE 32768
> > -static char ring[RING_SIZE];
> > +/*
> > + * The name is kept BUF_RING_SIZE, because the name RING_SIZE
> > + * collides with the xen shared ring definitions in io/ring.h
> > + */
> > +#define BUF_RING_SIZE 32768
> > +static char ring[BUF_RING_SIZE];
> >
> >  CAMLprim value stub_xc_readconsolering(value xch)
> >  {
> > -     unsigned int size = RING_SIZE - 1;
> > +     unsigned int size = BUF_RING_SIZE - 1;
> >       char *ring_ptr = ring;
> >       int retval;
> >
> > diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
> > index 548d9dd..ea370d3 100644
> > --- a/tools/xenpaging/Makefile
> > +++ b/tools/xenpaging/Makefile
> > @@ -1,8 +1,8 @@
> >  XEN_ROOT=$(CURDIR)/../..
> >  include $(XEN_ROOT)/tools/Rules.mk
> >
> > -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> > -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
> > +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
> $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> > +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
> $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>
> You don't introduce any libxenstore dependencies as far as I can see.
>
Ditto.

Andres

>
> >  LDFLAGS += $(PTHREAD_LDFLAGS)
> >
> >  POLICY    = default
> > diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
> > index 5ef2f09..2b81408 100644
> > --- a/tools/xenpaging/xenpaging.c
> > +++ b/tools/xenpaging/xenpaging.c
> > @@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc,
> char *argv[])
> >      xentoollog_logger *dbg = NULL;
> >      char *p;
> >      int rc;
> > -    unsigned long ring_pfn, mmap_pfn;
> >
> >      /* Allocate memory */
> >      paging = calloc(1, sizeof(struct xenpaging));
> > @@ -338,91 +337,21 @@ static struct xenpaging *xenpaging_init(int argc,
> char *argv[])
> >          goto err;
> >      }
> >
> > -    /* Map the ring page */
> > -    xc_get_hvm_param(xch, paging->mem_event.domain_id,
> > -                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
> > -    mmap_pfn = ring_pfn;
> > -    paging->mem_event.ring_page =
> > -        xc_map_foreign_batch(xch, paging->mem_event.domain_id,
> > -                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> > -    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> > -    {
> > -        /* Map failed, populate ring page */
> > -        rc = xc_domain_populate_physmap_exact(paging->xc_handle,
> > -
>  paging->mem_event.domain_id,
> > -                                              1, 0, 0, &ring_pfn);
> > -        if ( rc != 0 )
> > -        {
> > -            PERROR("Failed to populate ring gfn\n");
> > -            goto err;
> > -        }
> > -
> > -        mmap_pfn = ring_pfn;
> > -        paging->mem_event.ring_page =
> > -            xc_map_foreign_batch(xch, paging->mem_event.domain_id,
> > -                                    PROT_READ | PROT_WRITE, &mmap_pfn,
> 1);
> > -        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> > -        {
> > -            PERROR("Could not map the ring page\n");
> > -            goto err;
> > -        }
> > -    }
> > -
> > -    /* Initialise Xen */
> > -    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
> > -                             &paging->mem_event.evtchn_port);
> > -    if ( rc != 0 )
> > -    {
> > -        switch ( errno ) {
> > -            case EBUSY:
> > -                ERROR("xenpaging is (or was) active on this domain");
> > -                break;
> > -            case ENODEV:
> > -                ERROR("xenpaging requires Hardware Assisted Paging");
> > -                break;
> > -            case EMLINK:
> > -                ERROR("xenpaging not supported while iommu passthrough
> is enabled");
> > -                break;
> > -            case EXDEV:
> > -                ERROR("xenpaging not supported in a PoD guest");
> > -                break;
> > -            default:
> > -                PERROR("Error initialising shared page");
> > -                break;
> > -        }
> > -        goto err;
> > -    }
> > -
> > -    /* Open event channel */
> > -    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
> > -    if ( paging->mem_event.xce_handle == NULL )
> > -    {
> > -        PERROR("Failed to open event channel");
> > -        goto err;
> > -    }
> > -
> > -    /* Bind event notification */
> > -    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
> > +    /* Initialize paging by setting up ring and event channel. */
> > +    rc = xc_mem_paging_ring_setup(paging->xc_handle,
> >                                      paging->mem_event.domain_id,
> > -                                    paging->mem_event.evtchn_port);
> > -    if ( rc < 0 )
> > -    {
> > -        PERROR("Failed to bind event channel");
> > +                                    paging->mem_event.ring_page,
> > +                                    &paging->mem_event.port,
> > +                                    &paging->mem_event.evtchn_port,
> > +                                    &paging->mem_event.xce_handle,
> > +                                    &paging->mem_event.back_ring);
> > +    if( rc )
> > +    {
> > +        PERROR("Could not initialize mem paging\n");
> >          goto err;
> >      }
> >
> > -    paging->mem_event.port = rc;
> > -
> > -    /* Initialise ring */
> > -    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
> > -    BACK_RING_INIT(&paging->mem_event.back_ring,
> > -                   (mem_event_sring_t *)paging->mem_event.ring_page,
> > -                   PAGE_SIZE);
> > -
> > -    /* Now that the ring is set, remove it from the guest's physmap */
> > -    if ( xc_domain_decrease_reservation_exact(xch,
> > -                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
> > -        PERROR("Failed to remove ring from guest physmap");
> > +    DPRINTF("ring and event channel setup successful\n");
>
> You don't need trailing \n's in DPRINTF()s
>
> ~Andrew
>
> >
> >      /* Get max_pages from guest if not provided via cmdline */
> >      if ( !paging->max_pages )
>
>


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:55         ` Ian Campbell
@ 2014-06-12 14:01           ` Andres Lagar Cavilla
  0 siblings, 0 replies; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-12 14:01 UTC (permalink / raw)
  To: Ian Campbell
  Cc: Andrew Cooper, xen-devel, Dushyant Behl, dave.scott, Stefano Stabellini


On Thu, Jun 12, 2014 at 6:55 AM, Ian Campbell <Ian.Campbell@citrix.com>
wrote:

> On Thu, 2014-06-12 at 06:49 -0700, Andres Lagar Cavilla wrote:
> > On Thu, Jun 12, 2014 at 6:29 AM, Andrew Cooper
> > <andrew.cooper3@citrix.com> wrote:
> >         On 12/06/14 14:27, Ian Campbell wrote:
> >         > On Thu, 2014-06-12 at 14:22 +0100, Andrew Cooper wrote:
> >         >>> +# mem paging setup
> >         >>> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> >         >>> +
> >         >> I don't think you need the comment here.  It is obvious
> >         from the name of
> >         >> the file, although I would suggest you name it
> >         "xc_dom_paging.c" or
> >         >> something a tad less specific.
> >         > xc_dom_* are, for better or worse, generally domain builder
> >         related.
> >         >
> >         > Given the API is xen_mem_paging_* I don't think the name
> >         here is so bad,
> >         > I might be inclined to drop the _setup if there is any
> >         chance of
> >         > anything else being added here.
> >         >
> >         >
> >
> >
> >         Hmm yes I suppose.  I would certainly drop the _setup, but
> >         "xc_mem_paging.c" is fine.
> > xc_mem_paging.c is part of libxenctrl and wraps the hypercalls to page
> > in, page out etc.
>
> Would it be inappropriate to put this new function in that file then?
>
Heh, I worked hard to convince Dushyant otherwise. My understanding is that
libxenctrl essentially offers domain control hypercall wrappers, and if you
want to apply more complex transformations to a domain (like saving it or
changing its CPUID policy) you go to libxenguest.

Andres


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:59   ` Andres Lagar Cavilla
@ 2014-06-12 14:03     ` Dushyant Behl
  2014-06-12 14:11     ` Andrew Cooper
  1 sibling, 0 replies; 23+ messages in thread
From: Dushyant Behl @ 2014-06-12 14:03 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: Andrew Cooper, Stefano Stabellini, dave.scott, xen-devel

Andres and I discussed the same thing, but we didn't put the function in
xc_mem_paging.c because the functionality it offers is considerably
heavier weight than the control functionality provided by libxenctrl.
That file has page-in/page-out style functions, which belong in
libxenctrl, whereas we thought this functionality should belong in
libxenguest, hence we named it differently and created a new file.


On Thu, Jun 12, 2014 at 7:29 PM, Andres Lagar Cavilla
<andres@lagarcavilla.org> wrote:
> On Thu, Jun 12, 2014 at 6:22 AM, Andrew Cooper <andrew.cooper3@citrix.com>
> wrote:
>>
>> On 12/06/14 10:23, Dushyant Behl wrote:
>> > This patch is part of the work done under the gsoc project -
>> > Lazy Restore Using Memory Paging.
>> >
>> > This patch moves the code to initialize mempaging from xenpaging to
>> > libxc.
>>
>> The source code is named libxc, but technically you are moving it into
>> libxenguest which is a separate library from libxenctrl.
>>
>> > The code refractored from xenpaging is the code which sets up paging,
>> > initializes a shared ring and event channel to communicate with xen.
>> > This
>> > communication is done between the hypervisor and the dom0 tool which
>> > performs
>> > the activity of pager. The xenpaging code is changed to use the newly
>> > created
>> > routines and is tested to properly build and work with this code.
>> >
>> > The refractoring is done so that any tool which will act as pager in
>> > lazy restore or use memory paging can use a same routines to initialize
>> > mempaging.
>> > This refactoring will also allow any future (in-tree) tools to use
>> > mempaging.
>> >
>> > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>> > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>>
>> > ---
>> >  tools/libxc/Makefile                |   3 +
>> >  tools/libxc/xc_mem_paging_setup.c   | 132
>> > ++++++++++++++++++++++++++++++++++++
>> >  tools/libxc/xenctrl.h               |  12 ++++
>> >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>> >  tools/xenpaging/Makefile            |   4 +-
>> >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>> >  6 files changed, 167 insertions(+), 88 deletions(-)
>> >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>> >
>> > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
>> > index a74b19e..2439853 100644
>> > --- a/tools/libxc/Makefile
>> > +++ b/tools/libxc/Makefile
>> > @@ -69,6 +69,9 @@ GUEST_SRCS-$(CONFIG_ARM)     +=
>> > xc_dom_armzimageloader.c
>> >  GUEST_SRCS-y                 += xc_dom_binloader.c
>> >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>> >
>> > +# mem paging setup
>> > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>> > +
>>
>> I don't think you need the comment here.  It is obvious from the name of
>> the file, although I would suggest you name it "xc_dom_paging.c" or
>> something a tad less specific.
>>
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
>> > diff --git a/tools/libxc/xc_mem_paging_setup.c
>> > b/tools/libxc/xc_mem_paging_setup.c
>> > new file mode 100644
>> > index 0000000..125d203
>> > --- /dev/null
>> > +++ b/tools/libxc/xc_mem_paging_setup.c
>> > @@ -0,0 +1,132 @@
>> > +/*
>> > + * tools/libxc/xc_mem_paging_setup.c
>> > + *
>> > + * Routines to initialize memory paging. Create shared ring
>> > + * and event channels to communicate with the hypervisor.
>> > + *
>> > + * Copyright (c) 2014 Dushyant Behl
>> > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
>> > + *
>> > + * This library is free software; you can redistribute it and/or
>> > + * modify it under the terms of the GNU Lesser General Public
>> > + * License as published by the Free Software Foundation; either
>> > + * version 2.1 of the License, or (at your option) any later version.
>> > + *
>> > + * This library is distributed in the hope that it will be useful,
>> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> > + * Lesser General Public License for more details.
>> > + *
>> > + * You should have received a copy of the GNU Lesser General Public
>> > + * License along with this library; if not, write to the Free Software
>> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
>> > 02110-1301 USA
>> > + */
>> > +
>> > +#include "xc_private.h"
>> > +#include <xen/event_channel.h>
>> > +#include <xen/mem_event.h>
>> > +
>> > +/*
>> > + * Mem paging ring and event channel setup routine.
>> > + * Setup a shared ring and an event channel to communicate between
>> > + * hypervisor and the tool performing mem paging operations.
>>
>> Please document here, or in the header file, what the error semantics of
>> the function are.  Libxc is a complete mess so these things need
>> spelling out.
>>
>> > + */
>> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
>> > +                                void *ring_page, int *port,
>> > +                                uint32_t *evtchn_port,
>> > +                                xc_evtchn **xce_handle,
>> > +                                mem_event_back_ring_t *back_ring)
>>
>> The parameters should be aligned with the xc_interface.
>>
>> > +{
>> > +    int rc;
>> > +    unsigned long ring_pfn, mmap_pfn;
>>
>> uint64_t's as these are hvmparams
>>
>> David Vrabel's GenID series fixes the prototypes of
>> xc_[gs]et_hvm_param() to be sane.
>
> Andrew, we were thinking this should be rebased onto your git branch for
> save restore rework, which I assume contains these GenID changes as well.
> Can you point to the right git coordinates? Thanks
>>
>>
>> > +
>> > +    /* Map the ring page */
>> > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN,
>> > &ring_pfn);
>> > +    mmap_pfn = ring_pfn;
>> > +    ring_page = xc_map_foreign_batch(xch, domain_id,
>> > +                                        PROT_READ | PROT_WRITE,
>> > &mmap_pfn, 1);
>>
>> You are stuffing a pointer into an unsigned long?
>
> ring_page is void * ... mmap_pfn is ulong ....
>>
>>
>> > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>>
>> What is this check for?
>
> foreign_batch has the old semantics of setting the MSB of the pfn in error.
> Maybe use map_foreign_bulk with cleaner error reporting?
>>
>>
>> > +    {
>> > +        /* Map failed, populate ring page */
>> > +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
>> > +                                              1, 0, 0, &ring_pfn);
>> > +        if ( rc != 0 )
>> > +        {
>> > +            PERROR("Failed to populate ring gfn\n");
>> > +            return 1;
>>
>> You are (implicitly) setting errno.  Please return -1;
>>
>> > +        }
>> > +
>> > +        mmap_pfn = ring_pfn;
>> > +        ring_page = xc_map_foreign_batch(xch, domain_id,
>> > +                                        PROT_READ | PROT_WRITE,
>> > &mmap_pfn, 1);
>> > +
>> > +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>> > +        {
>> > +            PERROR("Could not map the ring page\n");
>> > +            return 1;
>> > +        }
>> > +    }
>> > +
>> > +    /* Initialise Xen */
>> > +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
>> > +    if ( rc != 0 )
>> > +    {
>> > +        switch ( errno ) {
>> > +            case EBUSY:
>> > +                ERROR("mempaging is (or was) active on this domain");
>> > +                break;
>> > +            case ENODEV:
>> > +                ERROR("mempaging requires Hardware Assisted Paging");
>> > +                break;
>> > +            case EMLINK:
>> > +                ERROR("mempaging not supported while iommu passthrough
>> > is enabled");
>> > +                break;
>> > +            case EXDEV:
>> > +                ERROR("mempaging not supported in a PoD guest");
>> > +                break;
>> > +            default:
>> > +                PERROR("mempaging: error initialising shared page");
>> > +                break;
>> > +        }
>> > +        return 1;
>> > +    }
>> > +
>> > +    /* Open event channel */
>> > +    *xce_handle = xc_evtchn_open(NULL, 0);
>> > +    if ( *xce_handle == NULL )
>> > +    {
>> > +        PERROR("Failed to open event channel");
>> > +        return 1;
>> > +    }
>> > +
>> > +    /* Bind event notification */
>> > +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id,
>> > *evtchn_port);
>> > +    if ( rc < 0 )
>> > +    {
>> > +        PERROR("Failed to bind event channel");
>> > +        return 1;
>> > +    }
>> > +    *port = rc;
>> > +
>> > +    /* Initialise ring */
>> > +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
>> > +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page,
>> > PAGE_SIZE);
>> > +
>> > +    /* Now that the ring is set, remove it from the guest's physmap */
>> > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>> > &ring_pfn) )
>> > +    {
>> > +        PERROR("Failed to remove ring from guest physmap");
>> > +        return 1;
>> > +    }
>> > +
>> > +    return 0;
>> > +}
>> > +
>> > +/*
>> > + * Local variables:
>> > + * mode: C
>> > + * c-file-style: "BSD"
>> > + * c-basic-offset: 4
>> > + * indent-tabs-mode: nil
>> > + * End:
>> > + */
>> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
>> > index 02129f7..c871a2f 100644
>> > --- a/tools/libxc/xenctrl.h
>> > +++ b/tools/libxc/xenctrl.h
>> > @@ -47,6 +47,7 @@
>> >  #include <xen/xsm/flask_op.h>
>> >  #include <xen/tmem.h>
>> >  #include <xen/kexec.h>
>> > +#include <xen/mem_event.h>
>> >
>> >  #include "xentoollog.h"
>> >
>> > @@ -2039,6 +2040,17 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t
>> > domain_id, unsigned long gfn);
>> >  int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
>> >                          unsigned long gfn, void *buffer);
>> >
>> > +/*
>> > + * Mem paging ring and event channel setup routine.
>> > + * Setup a shared ring and an event channel to communicate between
>> > + * hypervisor and the tool performing mem paging operations.
>> > + */
>> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
>> > +                                void *ring_page, int *port,
>> > +                                uint32_t *evtchn_port,
>> > +                                xc_evtchn **xceh,
>> > +                                mem_event_back_ring_t *_back_ring);
>> > +
>> >  /**
>> >   * Access tracking operations.
>> >   * Supported only on Intel EPT 64 bit processors.
>> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
>> > b/tools/ocaml/libs/xc/xenctrl_stubs.c
>> > index ff29b47..37a4db7 100644
>> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
>> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
>> > @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch,
>> > value domid)
>> >       CAMLreturn(Val_unit);
>> >  }
>> >
>> > -
>> > -#define RING_SIZE 32768
>> > -static char ring[RING_SIZE];
>> > +/*
>> > + * The name is kept BUF_RING_SIZE, because the name RING_SIZE
>> > + * collides with the xen shared ring definitions in io/ring.h
>> > + */
>> > +#define BUF_RING_SIZE 32768
>> > +static char ring[BUF_RING_SIZE];
>> >
>> >  CAMLprim value stub_xc_readconsolering(value xch)
>> >  {
>> > -     unsigned int size = RING_SIZE - 1;
>> > +     unsigned int size = BUF_RING_SIZE - 1;
>> >       char *ring_ptr = ring;
>> >       int retval;
>> >
>> > diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
>> > index 548d9dd..ea370d3 100644
>> > --- a/tools/xenpaging/Makefile
>> > +++ b/tools/xenpaging/Makefile
>> > @@ -1,8 +1,8 @@
>> >  XEN_ROOT=$(CURDIR)/../..
>> >  include $(XEN_ROOT)/tools/Rules.mk
>> >
>> > -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>> > -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>> > +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
>> > $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>> > +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
>> > $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>>
>> You don't introduce any libxenstore dependencies as far as I can see.
>
> Ditto.
>
> Andres
>>
>>
>> >  LDFLAGS += $(PTHREAD_LDFLAGS)
>> >
>> >  POLICY    = default
>> > diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
>> > index 5ef2f09..2b81408 100644
>> > --- a/tools/xenpaging/xenpaging.c
>> > +++ b/tools/xenpaging/xenpaging.c
>> > @@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc,
>> > char *argv[])
>> >      xentoollog_logger *dbg = NULL;
>> >      char *p;
>> >      int rc;
>> > -    unsigned long ring_pfn, mmap_pfn;
>> >
>> >      /* Allocate memory */
>> >      paging = calloc(1, sizeof(struct xenpaging));
>> > @@ -338,91 +337,21 @@ static struct xenpaging *xenpaging_init(int argc,
>> > char *argv[])
>> >          goto err;
>> >      }
>> >
>> > -    /* Map the ring page */
>> > -    xc_get_hvm_param(xch, paging->mem_event.domain_id,
>> > -                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
>> > -    mmap_pfn = ring_pfn;
>> > -    paging->mem_event.ring_page =
>> > -        xc_map_foreign_batch(xch, paging->mem_event.domain_id,
>> > -                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
>> > -    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>> > -    {
>> > -        /* Map failed, populate ring page */
>> > -        rc = xc_domain_populate_physmap_exact(paging->xc_handle,
>> > -
>> > paging->mem_event.domain_id,
>> > -                                              1, 0, 0, &ring_pfn);
>> > -        if ( rc != 0 )
>> > -        {
>> > -            PERROR("Failed to populate ring gfn\n");
>> > -            goto err;
>> > -        }
>> > -
>> > -        mmap_pfn = ring_pfn;
>> > -        paging->mem_event.ring_page =
>> > -            xc_map_foreign_batch(xch, paging->mem_event.domain_id,
>> > -                                    PROT_READ | PROT_WRITE, &mmap_pfn,
>> > 1);
>> > -        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>> > -        {
>> > -            PERROR("Could not map the ring page\n");
>> > -            goto err;
>> > -        }
>> > -    }
>> > -
>> > -    /* Initialise Xen */
>> > -    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
>> > -                             &paging->mem_event.evtchn_port);
>> > -    if ( rc != 0 )
>> > -    {
>> > -        switch ( errno ) {
>> > -            case EBUSY:
>> > -                ERROR("xenpaging is (or was) active on this domain");
>> > -                break;
>> > -            case ENODEV:
>> > -                ERROR("xenpaging requires Hardware Assisted Paging");
>> > -                break;
>> > -            case EMLINK:
>> > -                ERROR("xenpaging not supported while iommu passthrough
>> > is enabled");
>> > -                break;
>> > -            case EXDEV:
>> > -                ERROR("xenpaging not supported in a PoD guest");
>> > -                break;
>> > -            default:
>> > -                PERROR("Error initialising shared page");
>> > -                break;
>> > -        }
>> > -        goto err;
>> > -    }
>> > -
>> > -    /* Open event channel */
>> > -    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
>> > -    if ( paging->mem_event.xce_handle == NULL )
>> > -    {
>> > -        PERROR("Failed to open event channel");
>> > -        goto err;
>> > -    }
>> > -
>> > -    /* Bind event notification */
>> > -    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
>> > +    /* Initialize paging by setting up ring and event channel. */
>> > +    rc = xc_mem_paging_ring_setup(paging->xc_handle,
>> >                                      paging->mem_event.domain_id,
>> > -                                    paging->mem_event.evtchn_port);
>> > -    if ( rc < 0 )
>> > -    {
>> > -        PERROR("Failed to bind event channel");
>> > +                                    paging->mem_event.ring_page,
>> > +                                    &paging->mem_event.port,
>> > +                                    &paging->mem_event.evtchn_port,
>> > +                                    &paging->mem_event.xce_handle,
>> > +                                    &paging->mem_event.back_ring);
>> > +    if( rc )
>> > +    {
>> > +        PERROR("Could not initialize mem paging\n");
>> >          goto err;
>> >      }
>> >
>> > -    paging->mem_event.port = rc;
>> > -
>> > -    /* Initialise ring */
>> > -    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
>> > -    BACK_RING_INIT(&paging->mem_event.back_ring,
>> > -                   (mem_event_sring_t *)paging->mem_event.ring_page,
>> > -                   PAGE_SIZE);
>> > -
>> > -    /* Now that the ring is set, remove it from the guest's physmap */
>> > -    if ( xc_domain_decrease_reservation_exact(xch,
>> > -                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
>> > -        PERROR("Failed to remove ring from guest physmap");
>> > +    DPRINTF("ring and event channel setup successful\n");
>>
>> You don't need trailing \n's in DPRINTF()s
>>
>> ~Andrew
>>
>> >
>> >      /* Get max_pages from guest if not provided via cmdline */
>> >      if ( !paging->max_pages )
>>
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 13:59   ` Andres Lagar Cavilla
  2014-06-12 14:03     ` Dushyant Behl
@ 2014-06-12 14:11     ` Andrew Cooper
  2014-06-12 14:21       ` Ian Campbell
  1 sibling, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2014-06-12 14:11 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: stefano.stabellini, Dushyant Behl, dave.scott, xen-devel


On 12/06/14 14:59, Andres Lagar Cavilla wrote:
> On Thu, Jun 12, 2014 at 6:22 AM, Andrew Cooper
> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>
>     On 12/06/14 10:23, Dushyant Behl wrote:
>     > This patch is part of the work done under the gsoc project -
>     > Lazy Restore Using Memory Paging.
>     >
>     > This patch moves the code to initialize mempaging from xenpaging
>     to libxc.
>
>     The source code is named libxc, but technically you are moving it into
>     libxenguest which is a separate library from libxenctrl.
>
>     > The code refactored from xenpaging is the code which sets up
>     paging,
>     > initializes a shared ring and event channel to communicate with
>     xen. This
>     > communication is done between the hypervisor and the dom0 tool
>     which performs
>     > the activity of pager. The xenpaging code is changed to use the
>     newly created
>     > routines and is tested to properly build and work with this code.
>     >
>     > The refactoring is done so that any tool which acts as a pager in
>     > lazy restore or uses memory paging can use the same routines to
>     initialize mempaging.
>     > This refactoring will also allow any future (in-tree) tools to
>     use mempaging.
>     >
>     > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com
>     <mailto:myselfdushyantbehl@gmail.com>>
>     > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org
>     <mailto:andres@lagarcavilla.org>> 
>
>     > ---
>     >  tools/libxc/Makefile                |   3 +
>     >  tools/libxc/xc_mem_paging_setup.c   | 132
>     ++++++++++++++++++++++++++++++++++++
>     >  tools/libxc/xenctrl.h               |  12 ++++
>     >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>     >  tools/xenpaging/Makefile            |   4 +-
>     >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>     >  6 files changed, 167 insertions(+), 88 deletions(-)
>     >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>     >
>     > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
>     > index a74b19e..2439853 100644
>     > --- a/tools/libxc/Makefile
>     > +++ b/tools/libxc/Makefile
>     > @@ -69,6 +69,9 @@ GUEST_SRCS-$(CONFIG_ARM)     +=
>     xc_dom_armzimageloader.c
>     >  GUEST_SRCS-y                 += xc_dom_binloader.c
>     >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>     >
>     > +# mem paging setup
>     > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>     > +
>
>     I don't think you need the comment here.  It is obvious from the
>     name of
>     the file, although I would suggest you name it "xc_dom_paging.c" or
>     something a tad less specific.
>
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
>     > diff --git a/tools/libxc/xc_mem_paging_setup.c
>     b/tools/libxc/xc_mem_paging_setup.c
>     > new file mode 100644
>     > index 0000000..125d203
>     > --- /dev/null
>     > +++ b/tools/libxc/xc_mem_paging_setup.c
>     > @@ -0,0 +1,132 @@
>     > +/*
>     > + * tools/libxc/xc_mem_paging_setup.c
>     > + *
>     > + * Routines to initialize memory paging. Create shared ring
>     > + * and event channels to communicate with the hypervisor.
>     > + *
>     > + * Copyright (c) 2014 Dushyant Behl
>     > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
>     > + *
>     > + * This library is free software; you can redistribute it and/or
>     > + * modify it under the terms of the GNU Lesser General Public
>     > + * License as published by the Free Software Foundation; either
>     > + * version 2.1 of the License, or (at your option) any later
>     version.
>     > + *
>     > + * This library is distributed in the hope that it will be useful,
>     > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>     > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
>     the GNU
>     > + * Lesser General Public License for more details.
>     > + *
>     > + * You should have received a copy of the GNU Lesser General Public
>     > + * License along with this library; if not, write to the Free
>     Software
>     > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
>     MA  02110-1301 USA
>     > + */
>     > +
>     > +#include "xc_private.h"
>     > +#include <xen/event_channel.h>
>     > +#include <xen/mem_event.h>
>     > +
>     > +/*
>     > + * Mem paging ring and event channel setup routine.
>     > + * Setup a shared ring and an event channel to communicate between
>     > + * hypervisor and the tool performing mem paging operations.
>
>     Please document here, or in the header file, what the error
>     semantics of
>     the function are.  Libxc is a complete mess so these things need
>     spelling out.
>
>     > + */
>     > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
>     > +                                void *ring_page, int *port,
>     > +                                uint32_t *evtchn_port,
>     > +                                xc_evtchn **xce_handle,
>     > +                                mem_event_back_ring_t *back_ring)
>
>     The parameters should be aligned with the xc_interface.
>
>     > +{
>     > +    int rc;
>     > +    unsigned long ring_pfn, mmap_pfn;
>
>     uint64_t's as these are hvmparams
>
>     David Vrabel's GenID series fixes the prototypes of
>     xc_[gs]et_hvm_param() to be sane.
>
> Andrew, we were thinking this should be rebased onto your git branch
> for save restore rework, which I assume contains these GenID changes
> as well. Can you point to the right git coordinates? Thanks

As David and I are both working on our series at the same time, my
save/restore is a revision out of date and doesn't yet contain the new
prototypes.  If you can wait for David's work to get accepted, I shall
rebase and provide a v5.5 or v6.

>
>     > +
>     > +    /* Map the ring page */
>     > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN,
>     &ring_pfn);
>     > +    mmap_pfn = ring_pfn;
>     > +    ring_page = xc_map_foreign_batch(xch, domain_id,
>     > +                                        PROT_READ | PROT_WRITE,
>     &mmap_pfn, 1);
>
>     You are stuffing a pointer into an unsigned long?
>
> ring_page is void * ... mmap_pfn is ulong ....

So it is.  I got ring_pfn and ring_page confused.

>
>     > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>
>     What is this check for?
>
> foreign_batch has the old semantics of setting the MSB of the pfn in
> error. Maybe use map_foreign_bulk with cleaner error reporting?

Eww.  That is a nasty interface, and will completely break 32bit
toolstacks on machines with hotplug regions above the 8TB boundary.

Please do use map_foreign_bulk().

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 14:11     ` Andrew Cooper
@ 2014-06-12 14:21       ` Ian Campbell
  2014-06-12 14:46         ` Andrew Cooper
  0 siblings, 1 reply; 23+ messages in thread
From: Ian Campbell @ 2014-06-12 14:21 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: dave.scott, xen-devel, Dushyant Behl, Andres Lagar Cavilla,
	stefano.stabellini

On Thu, 2014-06-12 at 15:11 +0100, Andrew Cooper wrote:


> >         
> >         > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> >         
> >         
> >         What is this check for?
> > foreign_batch has the old semantics of setting the MSB of the pfn in
> > error. Maybe use map_foreign_bulk with cleaner error reporting? 
> > 
> 
> Eww.  That is a nasty interface, and will completely break 32bit
> toolstacks on machines with hotplug regions above the 8TB boundary.
> 
> Please do use map_foreign_bulk().

The stated purpose of this patch is code motion. Please don't encourage
people to also make functional changes in such patches.

If there is to be any cleanups/improvements then they should be done
later, but that won't affect the acceptance of this patch (assuming it
really is just motion, I've not checked in detail).

Ian.


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 14:21       ` Ian Campbell
@ 2014-06-12 14:46         ` Andrew Cooper
  2014-06-13 14:16           ` Dushyant Behl
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2014-06-12 14:46 UTC (permalink / raw)
  To: Ian Campbell
  Cc: dave.scott, xen-devel, Dushyant Behl, Andres Lagar Cavilla,
	stefano.stabellini

On 12/06/14 15:21, Ian Campbell wrote:
> On Thu, 2014-06-12 at 15:11 +0100, Andrew Cooper wrote:
>
>
>>>         
>>>         > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>>>         
>>>         
>>>         What is this check for?
>>> foreign_batch has the old semantics of setting the MSB of the pfn in
>>> error. Maybe use map_foreign_bulk with cleaner error reporting? 
>>>
>> Eww.  That is a nasty interface, and will completely break 32bit
>> toolstacks on machines with hotplug regions above the 8TB boundary.
>>
>> Please do use map_foreign_bulk().
> The stated purpose of this patch is code motion. Please don't encourage
> people to also make functional changes in such patches.
>
> If there is to be any cleanups/improvements then they should be done
> later, but that won't affect the acceptance of this patch (assuming it
> really is just motion, I've not checked in detail).
>
> Ian.
>

Refactoring is not necessarily code motion, although in this case it
does appear to be almost exclusively motion.

However, it is motion of code from an example utility into a main
library, and I do not feel it is appropriate to be moving code with bugs
like this into libxc.  (Perhaps I am overly jaded by the amount of
fixing up the migration code needed in areas like this)

Two solutions come to mind: either change this patch to be "implement
xenpaging initialisation in libxenguest" and fix the code up as it goes
in, or fix it up in xenpaging and move it as a second patch.

~Andrew


* [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-12 14:46         ` Andrew Cooper
@ 2014-06-13 14:16           ` Dushyant Behl
  2014-06-13 14:23             ` Dushyant Behl
  2014-06-13 14:51             ` Andrew Cooper
  0 siblings, 2 replies; 23+ messages in thread
From: Dushyant Behl @ 2014-06-13 14:16 UTC (permalink / raw)
  To: xen-devel
  Cc: dave.scott, stefano.stabellini, andrew.cooper3,
	andres.lagarcavilla, Dushyant Behl, Ian.Campbell

This patch is part of the work done under the gsoc project -
Lazy Restore Using Memory Paging.

This patch moves the code to initialize mempaging from xenpaging to libxc.
The code refactored from xenpaging is the code which sets up paging,
initializes a shared ring and event channel to communicate with xen. This
communication is done between the hypervisor and the dom0 tool which performs
the activity of pager. The xenpaging code is changed to use the newly created
routines and is tested to properly build and work with this code.

The refactoring is done so that any tool which acts as a pager in
lazy restore or uses memory paging can use the same routines to initialize mempaging.
This refactoring will also allow any future (in-tree) tools to use mempaging.

The refactored code in xc_mem_paging_ring_setup is to be compiled into
libxenguest.

Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
Acked-by: David Scott <dave.scott@citrix.com>
---
 tools/libxc/Makefile                |   2 +
 tools/libxc/xc_mem_paging_setup.c   | 135 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xenctrl.h               |  15 ++++
 tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
 tools/xenpaging/Makefile            |   4 +-
 tools/xenpaging/xenpaging.c         |  93 +++----------------------
 6 files changed, 172 insertions(+), 88 deletions(-)
 create mode 100644 tools/libxc/xc_mem_paging_setup.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index a74b19e..6cf14f0 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
 GUEST_SRCS-y                 += xc_dom_binloader.c
 GUEST_SRCS-y                 += xc_dom_compat_linux.c
 
+GUEST_SRCS-y                 += xc_mem_paging_setup.c
+
 GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
new file mode 100644
index 0000000..7b3ab38
--- /dev/null
+++ b/tools/libxc/xc_mem_paging_setup.c
@@ -0,0 +1,135 @@
+/*
+ * tools/libxc/xc_mem_paging_setup.c
+ *
+ * Routines to initialize memory paging. Create shared ring
+ * and event channels to communicate with the hypervisor.
+ *
+ * Copyright (c) 2014 Dushyant Behl
+ * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
+ * 
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+ */
+
+#include "xc_private.h"
+#include <xen/event_channel.h>
+#include <xen/mem_event.h>
+
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function will return zero on successful completion and will
+ * return -1 on failure at any intermediate step setting up errno
+ * properly. 
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
+                                void *ring_page, int *port,
+                                uint32_t *evtchn_port,
+                                xc_evtchn **xce_handle,
+                                mem_event_back_ring_t *back_ring)
+{
+    int rc;
+    uint64_t ring_pfn, mmap_pfn;
+
+    /* Map the ring page */
+    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
+    mmap_pfn = ring_pfn;
+    ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+    {
+        /* Map failed, populate ring page */
+        rc = xc_domain_populate_physmap_exact(xch, domain_id,
+                                              1, 0, 0, &ring_pfn);
+        if ( rc != 0 )
+        {
+            PERROR("Failed to populate ring gfn\n");
+            return -1;
+        }
+
+        mmap_pfn = ring_pfn;
+        ring_page = xc_map_foreign_batch(xch, domain_id,
+                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
+
+        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
+        {
+            PERROR("Could not map the ring page\n");
+            return -1;
+        }
+    }
+
+    /* Initialise Xen */
+    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
+    if ( rc != 0 )
+    {
+        switch ( errno ) {
+            case EBUSY:
+                ERROR("mempaging is (or was) active on this domain");
+                break;
+            case ENODEV:
+                ERROR("mempaging requires Hardware Assisted Paging");
+                break;
+            case EMLINK:
+                ERROR("mempaging not supported while iommu passthrough is enabled");
+                break;
+            case EXDEV:
+                ERROR("mempaging not supported in a PoD guest");
+                break;
+            default:
+                PERROR("mempaging: error initialising shared page");
+                break;
+        }
+        return -1;
+    }
+
+    /* Open event channel */
+    *xce_handle = xc_evtchn_open(NULL, 0);
+    if ( *xce_handle == NULL )
+    {
+        PERROR("Failed to open event channel");
+        return -1;
+    }
+
+    /* Bind event notification */
+    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id, *evtchn_port);
+    if ( rc < 0 )
+    {
+        PERROR("Failed to bind event channel");
+        return -1;
+    }
+    *port = rc;
+
+    /* Initialise ring */
+    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
+    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
+
+    /* Now that the ring is set, remove it from the guest's physmap */
+    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
+    {
+        PERROR("Failed to remove ring from guest physmap");
+        return -1;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 02129f7..77a0302 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -47,6 +47,7 @@
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
 #include <xen/kexec.h>
+#include <xen/mem_event.h>
 
 #include "xentoollog.h"
 
@@ -2039,6 +2040,20 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_load(xc_interface *xch, domid_t domain_id, 
                         unsigned long gfn, void *buffer);
 
+/*
+ * Mem paging ring and event channel setup routine.
+ * Setup a shared ring and an event channel to communicate between
+ * hypervisor and the tool performing mem paging operations.
+ * The function will return zero on successful completion and will
+ * return -1 on failure at any intermediate step setting up errno
+ * properly. 
+ */
+int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
+                                void *ring_page, int *port,
+                                uint32_t *evtchn_port,
+                                xc_evtchn **xceh,
+                                mem_event_back_ring_t *_back_ring);
+
 /** 
  * Access tracking operations.
  * Supported only on Intel EPT 64 bit processors.
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index ff29b47..37a4db7 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
 	CAMLreturn(Val_unit);
 }
 
-
-#define RING_SIZE 32768
-static char ring[RING_SIZE];
+/*
+ * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
+ * collides with the xen shared ring definitions in io/ring.h
+ */
+#define BUF_RING_SIZE 32768
+static char ring[BUF_RING_SIZE];
 
 CAMLprim value stub_xc_readconsolering(value xch)
 {
-	unsigned int size = RING_SIZE - 1;
+	unsigned int size = BUF_RING_SIZE - 1;
 	char *ring_ptr = ring;
 	int retval;
 
diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
index 548d9dd..ea370d3 100644
--- a/tools/xenpaging/Makefile
+++ b/tools/xenpaging/Makefile
@@ -1,8 +1,8 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
-LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
+CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
+LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
 LDFLAGS += $(PTHREAD_LDFLAGS)
 
 POLICY    = default
diff --git a/tools/xenpaging/xenpaging.c b/tools/xenpaging/xenpaging.c
index 5ef2f09..50c1dd1 100644
--- a/tools/xenpaging/xenpaging.c
+++ b/tools/xenpaging/xenpaging.c
@@ -281,7 +281,6 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
     xentoollog_logger *dbg = NULL;
     char *p;
     int rc;
-    unsigned long ring_pfn, mmap_pfn;
 
     /* Allocate memory */
     paging = calloc(1, sizeof(struct xenpaging));
@@ -338,91 +337,21 @@ static struct xenpaging *xenpaging_init(int argc, char *argv[])
         goto err;
     }
 
-    /* Map the ring page */
-    xc_get_hvm_param(xch, paging->mem_event.domain_id, 
-                        HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
-    mmap_pfn = ring_pfn;
-    paging->mem_event.ring_page = 
-        xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-    {
-        /* Map failed, populate ring page */
-        rc = xc_domain_populate_physmap_exact(paging->xc_handle, 
-                                              paging->mem_event.domain_id,
-                                              1, 0, 0, &ring_pfn);
-        if ( rc != 0 )
-        {
-            PERROR("Failed to populate ring gfn\n");
-            goto err;
-        }
-
-        mmap_pfn = ring_pfn;
-        paging->mem_event.ring_page = 
-            xc_map_foreign_batch(xch, paging->mem_event.domain_id, 
-                                    PROT_READ | PROT_WRITE, &mmap_pfn, 1);
-        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
-        {
-            PERROR("Could not map the ring page\n");
-            goto err;
-        }
-    }
-    
-    /* Initialise Xen */
-    rc = xc_mem_paging_enable(xch, paging->mem_event.domain_id,
-                             &paging->mem_event.evtchn_port);
-    if ( rc != 0 )
-    {
-        switch ( errno ) {
-            case EBUSY:
-                ERROR("xenpaging is (or was) active on this domain");
-                break;
-            case ENODEV:
-                ERROR("xenpaging requires Hardware Assisted Paging");
-                break;
-            case EMLINK:
-                ERROR("xenpaging not supported while iommu passthrough is enabled");
-                break;
-            case EXDEV:
-                ERROR("xenpaging not supported in a PoD guest");
-                break;
-            default:
-                PERROR("Error initialising shared page");
-                break;
-        }
-        goto err;
-    }
-
-    /* Open event channel */
-    paging->mem_event.xce_handle = xc_evtchn_open(NULL, 0);
-    if ( paging->mem_event.xce_handle == NULL )
-    {
-        PERROR("Failed to open event channel");
-        goto err;
-    }
-
-    /* Bind event notification */
-    rc = xc_evtchn_bind_interdomain(paging->mem_event.xce_handle,
+    /* Initialize paging by setting up ring and event channel. */
+    rc = xc_mem_paging_ring_setup(paging->xc_handle,
                                     paging->mem_event.domain_id,
-                                    paging->mem_event.evtchn_port);
-    if ( rc < 0 )
-    {
-        PERROR("Failed to bind event channel");
+                                    paging->mem_event.ring_page,
+                                    &paging->mem_event.port,
+                                    &paging->mem_event.evtchn_port,
+                                    &paging->mem_event.xce_handle,
+                                    &paging->mem_event.back_ring);
+    if ( rc )
+    {
+        PERROR("Could not initialize mem paging");
         goto err;
     }
 
-    paging->mem_event.port = rc;
-
-    /* Initialise ring */
-    SHARED_RING_INIT((mem_event_sring_t *)paging->mem_event.ring_page);
-    BACK_RING_INIT(&paging->mem_event.back_ring,
-                   (mem_event_sring_t *)paging->mem_event.ring_page,
-                   PAGE_SIZE);
-
-    /* Now that the ring is set, remove it from the guest's physmap */
-    if ( xc_domain_decrease_reservation_exact(xch, 
-                    paging->mem_event.domain_id, 1, 0, &ring_pfn) )
-        PERROR("Failed to remove ring from guest physmap");
+    DPRINTF("ring and event channel setup successful");
 
     /* Get max_pages from guest if not provided via cmdline */
     if ( !paging->max_pages )
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:16           ` Dushyant Behl
@ 2014-06-13 14:23             ` Dushyant Behl
  2014-06-13 14:37               ` Andrew Cooper
  2014-06-13 15:03               ` Andres Lagar Cavilla
  2014-06-13 14:51             ` Andrew Cooper
  1 sibling, 2 replies; 23+ messages in thread
From: Dushyant Behl @ 2014-06-13 14:23 UTC (permalink / raw)
  To: xen-devel
  Cc: dave.scott, Stefano Stabellini, Andrew Cooper,
	Andres Lagar Cavilla, Dushyant Behl, Ian Campbell

This is the updated patch; please have a look and provide feedback.
I have also created a new patch, to be applied on top of this one, which
replaces the call to the deprecated function xc_map_foreign_batch with
a call to xc_map_foreign_bulk.
I think the interface to both functions is the same, but they report
errors in different ways: xc_map_foreign_bulk returns errno values in
its err argument. I have not been able to find out how to decode these
errno values into error messages. If you could point me to how to check
which errno corresponds to which error, I'll send the next patch right
after this one.

Also, the arguments are properly aligned with xc_interface in the
patch and in the source code; maybe the misalignment here was
introduced somewhere in between.

> +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> +                                void *ring_page, int *port,
> +                                uint32_t *evtchn_port,
> +                                xc_evtchn **xce_handle,
> +                                mem_event_back_ring_t *back_ring)

Thanks,
Dushyant Behl

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:23             ` Dushyant Behl
@ 2014-06-13 14:37               ` Andrew Cooper
  2014-06-13 15:03               ` Andres Lagar Cavilla
  1 sibling, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2014-06-13 14:37 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: Andres Lagar Cavilla, Stefano Stabellini, dave.scott,
	Ian Campbell, xen-devel

On 13/06/14 15:23, Dushyant Behl wrote:
>
> Also, The arguments are properly aligned with xc_interface in the
> patch and the source code, maybe the misalignment here is caused in
> between somewhere.

This most likely means you have a tabs/spaces issue in your editor.  The
patch as sent by git is consistently 3 spaces out.

~Andrew

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:16           ` Dushyant Behl
  2014-06-13 14:23             ` Dushyant Behl
@ 2014-06-13 14:51             ` Andrew Cooper
  2014-06-13 15:00               ` Dushyant Behl
  2014-06-13 15:07               ` Andres Lagar Cavilla
  1 sibling, 2 replies; 23+ messages in thread
From: Andrew Cooper @ 2014-06-13 14:51 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: andres.lagarcavilla, stefano.stabellini, dave.scott,
	Ian.Campbell, xen-devel

On 13/06/14 15:16, Dushyant Behl wrote:
> This patch is part of the work done under the gsoc project -
> Lazy Restore Using Memory Paging.
>
> This patch moves the code to initialize mempaging from xenpaging to libxc.
> The code refactored from xenpaging is the code which sets up paging:
> it initializes a shared ring and an event channel used for communication
> between the hypervisor and the dom0 tool which acts as the pager. The
> xenpaging code is changed to use the newly created routines and has been
> tested to build and work properly with this code.
>
> The refactoring is done so that any tool which acts as a pager for lazy
> restore, or otherwise uses memory paging, can use the same routines to
> initialize mempaging. This will also allow any future (in-tree) tools
> to use mempaging.
>
> The refactored code in xc_mem_paging_ring_setup is to be compiled into
> libxenguest.
>
> Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> Acked-by: David Scott <dave.scott@citrix.com>
> ---
>  tools/libxc/Makefile                |   2 +
>  tools/libxc/xc_mem_paging_setup.c   | 135 ++++++++++++++++++++++++++++++++++++
>  tools/libxc/xenctrl.h               |  15 ++++
>  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>  tools/xenpaging/Makefile            |   4 +-
>  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>  6 files changed, 172 insertions(+), 88 deletions(-)
>  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>
> diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> index a74b19e..6cf14f0 100644
> --- a/tools/libxc/Makefile
> +++ b/tools/libxc/Makefile
> @@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
>  GUEST_SRCS-y                 += xc_dom_binloader.c
>  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>  
> +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> +
>  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
> diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
> new file mode 100644
> index 0000000..7b3ab38
> --- /dev/null
> +++ b/tools/libxc/xc_mem_paging_setup.c
> @@ -0,0 +1,135 @@
> +/*
> + * tools/libxc/xc_mem_paging_setup.c
> + *
> + * Routines to initialize memory paging. Create shared ring
> + * and event channels to communicate with the hypervisor.
> + *
> + * Copyright (c) 2014 Dushyant Behl
> + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> + * 
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
> + */
> +
> +#include "xc_private.h"
> +#include <xen/event_channel.h>
> +#include <xen/mem_event.h>
> +
> +/*
> + * Mem paging ring and event channel setup routine.
> + * Setup a shared ring and an event channel to communicate between
> + * hypervisor and the tool performing mem paging operations.
> + * The function will return zero on successful completion and will
> + * return -1 on failure at any intermediate step setting up errno
> + * properly. 
> + */
> +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> +                                void *ring_page, int *port,
> +                                uint32_t *evtchn_port,
> +                                xc_evtchn **xce_handle,
> +                                mem_event_back_ring_t *back_ring)
> +{
> +    int rc;
> +    uint64_t ring_pfn, mmap_pfn;
> +
> +    /* Map the ring page */
> +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
> +    mmap_pfn = ring_pfn;
> +    ring_page = xc_map_foreign_batch(xch, domain_id,
> +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> +    {
> +        /* Map failed, populate ring page */
> +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
> +                                              1, 0, 0, &ring_pfn);
> +        if ( rc != 0 )
> +        {
> +            PERROR("Failed to populate ring gfn\n");
> +            return -1;
> +        }
> +
> +        mmap_pfn = ring_pfn;
> +        ring_page = xc_map_foreign_batch(xch, domain_id,
> +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> +
> +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> +        {
> +            PERROR("Could not map the ring page\n");
> +            return -1;
> +        }
> +    }
> +
> +    /* Initialise Xen */
> +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
> +    if ( rc != 0 )
> +    {
> +        switch ( errno ) {
> +            case EBUSY:
> +                ERROR("mempaging is (or was) active on this domain");
> +                break;
> +            case ENODEV:
> +                ERROR("mempaging requires Hardware Assisted Paging");
> +                break;
> +            case EMLINK:
> +                ERROR("mempaging not supported while iommu passthrough is enabled");
> +                break;
> +            case EXDEV:
> +                ERROR("mempaging not supported in a PoD guest");
> +                break;
> +            default:
> +                PERROR("mempaging: error initialising shared page");
> +                break;
> +        }
> +        return -1;
> +    }
> +
> +    /* Open event channel */
> +    *xce_handle = xc_evtchn_open(NULL, 0);
> +    if ( *xce_handle == NULL )
> +    {
> +        PERROR("Failed to open event channel");
> +        return -1;
> +    }

This is inappropriate inside libxenguest.  The user of the library
possibly already has an open evtchn handle.  While this is only wasteful
of an fd under Linux, I believe it will result in open() failing under
some of the *BSDs.

The correct way of doing this is to have the caller of
xc_mem_paging_ring_setup() provide their xce_handle, and require them to
open one if they need to.
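A sketch of the suggested shape (illustrative only, not an agreed interface): the caller opens the event-channel handle and passes it in, so the library never owns the fd:

```c
/* Hypothetical revised prototype: xce is opened (and later closed) by
 * the caller, e.g. via xc_evtchn_open(NULL, 0). */
int xc_mem_paging_ring_setup(xc_interface *xch,
                             xc_evtchn *xce,        /* caller-owned */
                             domid_t domain_id,
                             void *ring_page,
                             int *port,
                             uint32_t *evtchn_port,
                             mem_event_back_ring_t *back_ring);
```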

> +
> +    /* Bind event notification */
> +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id, *evtchn_port);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Failed to bind event channel");
> +        return -1;
> +    }
> +    *port = rc;
> +
> +    /* Initialise ring */
> +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> +
> +    /* Now that the ring is set, remove it from the guest's physmap */
> +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
> +    {
> +        PERROR("Failed to remove ring from guest physmap");
> +        return -1;
> +    }

There is a race condition here where the guest can play with the
post-initialised ring state.

> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 02129f7..77a0302 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -47,6 +47,7 @@
>  #include <xen/xsm/flask_op.h>
>  #include <xen/tmem.h>
>  #include <xen/kexec.h>
> +#include <xen/mem_event.h>
>  
>  #include "xentoollog.h"
>  
> @@ -2039,6 +2040,20 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
>  int xc_mem_paging_load(xc_interface *xch, domid_t domain_id, 
>                          unsigned long gfn, void *buffer);
>  
> +/*
> + * Mem paging ring and event channel setup routine.
> + * Setup a shared ring and an event channel to communicate between
> + * hypervisor and the tool performing mem paging operations.
> + * The function will return zero on successful completion and will
> + * return -1 on failure at any intermediate step setting up errno
> + * properly. 
> + */
> +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> +                                void *ring_page, int *port,
> +                                uint32_t *evtchn_port,
> +                                xc_evtchn **xceh,
> +                                mem_event_back_ring_t *_back_ring);
> +
>  /** 
>   * Access tracking operations.
>   * Supported only on Intel EPT 64 bit processors.
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index ff29b47..37a4db7 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch, value domid)
>  	CAMLreturn(Val_unit);
>  }
>  
> -
> -#define RING_SIZE 32768
> -static char ring[RING_SIZE];
> +/*
> + * The name is kept BUF_RING_SIZE, because the name RING_SIZE 
> + * collides with the xen shared ring definitions in io/ring.h
> + */
> +#define BUF_RING_SIZE 32768
> +static char ring[BUF_RING_SIZE];
>  
>  CAMLprim value stub_xc_readconsolering(value xch)
>  {
> -	unsigned int size = RING_SIZE - 1;
> +	unsigned int size = BUF_RING_SIZE - 1;
>  	char *ring_ptr = ring;
>  	int retval;
>  
> diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
> index 548d9dd..ea370d3 100644
> --- a/tools/xenpaging/Makefile
> +++ b/tools/xenpaging/Makefile
> @@ -1,8 +1,8 @@
>  XEN_ROOT=$(CURDIR)/../..
>  include $(XEN_ROOT)/tools/Rules.mk
>  
> -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
> +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)

You are not introducing any libxenstore calls into xenpaging.  This
change is bogus.

~Andrew

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:51             ` Andrew Cooper
@ 2014-06-13 15:00               ` Dushyant Behl
  2014-06-13 15:06                 ` Andrew Cooper
  2014-06-13 15:07               ` Andres Lagar Cavilla
  1 sibling, 1 reply; 23+ messages in thread
From: Dushyant Behl @ 2014-06-13 15:00 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Andres Lagar Cavilla, Stefano Stabellini, dave.scott,
	Ian Campbell, xen-devel

>> -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>> -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>> +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>> +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>
> You are not introducing any libxenstore calls into xenpaging.  This
> change is bogus.

Sorry, but I have only introduced the libxenguest flags and libs,
because the file xc_mem_paging_setup.c which I have created is compiled
into libxenguest, hence the dependency.
libxenstore, on the other hand, was already present here because
xenpaging.c uses a lot of libxenstore calls.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:23             ` Dushyant Behl
  2014-06-13 14:37               ` Andrew Cooper
@ 2014-06-13 15:03               ` Andres Lagar Cavilla
  1 sibling, 0 replies; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-13 15:03 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: Andrew Cooper, Stefano Stabellini, dave.scott, Ian Campbell, xen-devel



xc_map_foreign_bulk will return -1 if there is a global error, and set
errno appropriately (currently, EFAULT).

If any of the target pfns are paged out, it will return -ENOENT. Otherwise
it returns zero. *But*, even though it returned zero, you need to check the
individual entries in the err array for mapping error codes for each pfn.

In other words, for your case, if rc == 0 and *err == 0 you are ok.
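So a caller's success check would reduce to something like this (a self-contained sketch of just the decision logic described above; how rc and the err array are produced by the real libxc call is not modelled here):

```c
#include <errno.h>
#include <stddef.h>

/* Success test for one page of a bulk-map attempt, per the rule above:
 * the overall rc must be 0 and the page's err entry must be 0 too. */
static int bulk_map_ok(int rc, const int err[], size_t idx)
{
    if (rc != 0)          /* global failure (-1) or -ENOENT for paged-out pfns */
        return 0;
    return err[idx] == 0; /* per-pfn mapping error code */
}
```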

Andres



On Fri, Jun 13, 2014 at 7:23 AM, Dushyant Behl <myselfdushyantbehl@gmail.com> wrote:

> This is the updated patch; please have a look and provide feedback.
> I have also created a new patch, to be applied on top of this one, which
> replaces the call to the deprecated function xc_map_foreign_batch with
> a call to xc_map_foreign_bulk.
> I think the interface to both functions is the same, but they report
> errors in different ways: xc_map_foreign_bulk returns errno values in
> its err argument. I have not been able to find out how to decode these
> errno values into error messages. If you could point me to how to check
> which errno corresponds to which error, I'll send the next patch right
> after this one.
>
> Also, the arguments are properly aligned with xc_interface in the
> patch and in the source code; maybe the misalignment here was
> introduced somewhere in between.
>
> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> > +                                void *ring_page, int *port,
> > +                                uint32_t *evtchn_port,
> > +                                xc_evtchn **xce_handle,
> > +                                mem_event_back_ring_t *back_ring)
>
> Thanks,
> Dushyant Behl
>


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 15:00               ` Dushyant Behl
@ 2014-06-13 15:06                 ` Andrew Cooper
  0 siblings, 0 replies; 23+ messages in thread
From: Andrew Cooper @ 2014-06-13 15:06 UTC (permalink / raw)
  To: Dushyant Behl
  Cc: Andres Lagar Cavilla, Stefano Stabellini, dave.scott,
	Ian Campbell, xen-devel

On 13/06/14 16:00, Dushyant Behl wrote:
>>> -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>>> -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>>> +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
>>> +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>> You are not introducing any libxenstore calls into xenpaging.  This
>> change is bogus.
> Sorry, but I have only introduced the libxenguest flags and libs, because
> the file xc_mem_paging_setup.c which I have created is compiled
> into libxenguest, hence the dependency.
> libxenstore, on the other hand, was already present here because
> xenpaging.c uses a lot of libxenstore calls.

Oh sorry - my mistake.  I misread that as an introduction of
libxenstore, whereas it is actually an insertion of libxenguest into the
middle.

That's fine then.

~Andrew

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 14:51             ` Andrew Cooper
  2014-06-13 15:00               ` Dushyant Behl
@ 2014-06-13 15:07               ` Andres Lagar Cavilla
  2014-06-13 15:55                 ` Andrew Cooper
  1 sibling, 1 reply; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-13 15:07 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: dave.scott, Stefano Stabellini, Dushyant Behl, Ian Campbell, xen-devel



On Fri, Jun 13, 2014 at 7:51 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

> On 13/06/14 15:16, Dushyant Behl wrote:
> > This patch is part of the work done under the gsoc project -
> > Lazy Restore Using Memory Paging.
> >
> > This patch moves the code to initialize mempaging from xenpaging to libxc.
> > The code refactored from xenpaging is the code which sets up paging:
> > it initializes a shared ring and an event channel used for communication
> > between the hypervisor and the dom0 tool which acts as the pager. The
> > xenpaging code is changed to use the newly created routines and has been
> > tested to build and work properly with this code.
> >
> > The refactoring is done so that any tool which acts as a pager for lazy
> > restore, or otherwise uses memory paging, can use the same routines to
> > initialize mempaging. This will also allow any future (in-tree) tools
> > to use mempaging.
> >
> > The refactored code in xc_mem_paging_ring_setup is to be compiled into
> > libxenguest.
> >
> > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
> > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
> > Acked-by: David Scott <dave.scott@citrix.com>
> > ---
> >  tools/libxc/Makefile                |   2 +
> >  tools/libxc/xc_mem_paging_setup.c   | 135 ++++++++++++++++++++++++++++++++++++
> >  tools/libxc/xenctrl.h               |  15 ++++
> >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
> >  tools/xenpaging/Makefile            |   4 +-
> >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
> >  6 files changed, 172 insertions(+), 88 deletions(-)
> >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
> >
> > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> > index a74b19e..6cf14f0 100644
> > --- a/tools/libxc/Makefile
> > +++ b/tools/libxc/Makefile
> > @@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
> >  GUEST_SRCS-y                 += xc_dom_binloader.c
> >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
> >
> > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
> > +
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
> >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
> > diff --git a/tools/libxc/xc_mem_paging_setup.c b/tools/libxc/xc_mem_paging_setup.c
> > new file mode 100644
> > index 0000000..7b3ab38
> > --- /dev/null
> > +++ b/tools/libxc/xc_mem_paging_setup.c
> > @@ -0,0 +1,135 @@
> > +/*
> > + * tools/libxc/xc_mem_paging_setup.c
> > + *
> > + * Routines to initialize memory paging. Create shared ring
> > + * and event channels to communicate with the hypervisor.
> > + *
> > + * Copyright (c) 2014 Dushyant Behl
> > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
> > + *
> > + * This library is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * This library is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with this library; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
> > + */
> > +
> > +#include "xc_private.h"
> > +#include <xen/event_channel.h>
> > +#include <xen/mem_event.h>
> > +
> > +/*
> > + * Mem paging ring and event channel setup routine.
> > + * Setup a shared ring and an event channel to communicate between
> > + * hypervisor and the tool performing mem paging operations.
> > + * The function will return zero on successful completion and will
> > + * return -1 on failure at any intermediate step setting up errno
> > + * properly.
> > + */
> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> > +                                void *ring_page, int *port,
> > +                                uint32_t *evtchn_port,
> > +                                xc_evtchn **xce_handle,
> > +                                mem_event_back_ring_t *back_ring)
> > +{
> > +    int rc;
> > +    uint64_t ring_pfn, mmap_pfn;
> > +
> > +    /* Map the ring page */
> > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN, &ring_pfn);
> > +    mmap_pfn = ring_pfn;
> > +    ring_page = xc_map_foreign_batch(xch, domain_id,
> > +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> > +    {
> > +        /* Map failed, populate ring page */
> > +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
> > +                                              1, 0, 0, &ring_pfn);
> > +        if ( rc != 0 )
> > +        {
> > +            PERROR("Failed to populate ring gfn\n");
> > +            return -1;
> > +        }
> > +
> > +        mmap_pfn = ring_pfn;
> > +        ring_page = xc_map_foreign_batch(xch, domain_id,
> > +                                        PROT_READ | PROT_WRITE, &mmap_pfn, 1);
> > +
> > +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
> > +        {
> > +            PERROR("Could not map the ring page\n");
> > +            return -1;
> > +        }
> > +    }
> > +
> > +    /* Initialise Xen */
> > +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
> > +    if ( rc != 0 )
> > +    {
> > +        switch ( errno ) {
> > +            case EBUSY:
> > +                ERROR("mempaging is (or was) active on this domain");
> > +                break;
> > +            case ENODEV:
> > +                ERROR("mempaging requires Hardware Assisted Paging");
> > +                break;
> > +            case EMLINK:
> > +                ERROR("mempaging not supported while iommu passthrough is enabled");
> > +                break;
> > +            case EXDEV:
> > +                ERROR("mempaging not supported in a PoD guest");
> > +                break;
> > +            default:
> > +                PERROR("mempaging: error initialising shared page");
> > +                break;
> > +        }
> > +        return -1;
> > +    }
> > +
> > +    /* Open event channel */
> > +    *xce_handle = xc_evtchn_open(NULL, 0);
> > +    if ( *xce_handle == NULL )
> > +    {
> > +        PERROR("Failed to open event channel");
> > +        return -1;
> > +    }
>
> This is inappropriate inside libxenguest.  The user of the library
> possibly already has an open evtchn handle.  While this is only wasteful
> of an fd under Linux, I believe it will result in open() failing under
> some of the *BSDs.
>
> The correct way of doing this is to have the caller of
> xc_mem_paging_ring_setup() provide their xce_handle, and require them to
> open one if they need to.
>
I think this is heavy-handed. I mean, the idea here is to relieve the
caller of all the setup work.

In fact, you could argue that a follow-up patch should encapsulate all the
cleanup.

And then consumers of this particular module call setup, teardown, use the
intermediate result, no concerns.

LGTM, to be honest.

>
> > +
> > +    /* Bind event notification */
> > +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id, *evtchn_port);
> > +    if ( rc < 0 )
> > +    {
> > +        PERROR("Failed to bind event channel");
> > +        return -1;
> > +    }
> > +    *port = rc;
> > +
> > +    /* Initialise ring */
> > +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
> > +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page, PAGE_SIZE);
> > +
> > +    /* Now that the ring is set, remove it from the guest's physmap */
> > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0, &ring_pfn) )
> > +    {
> > +        PERROR("Failed to remove ring from guest physmap");
> > +        return -1;
> > +    }
>
> There is a race condition here where the guest can play with the
> post-initialised ring state.
>
Well-known, not the place to fix it here.

This was far worse in 2011. We've gotten this to a place where you pause
the guest, and this works safely. Not unreasonable IMHO, and the
status quo for a while.

A *true* solution would require hypervisor surgery by adding a XENMEM op
with a new special map space. Maybe.
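For reference, the pattern described here would look roughly like this from the pager's side (an illustrative fragment against the proposed API, with error handling omitted; xc_domain_pause/xc_domain_unpause are the existing libxc calls):

```c
/* Illustrative only: keep the guest paused so it cannot touch the ring
 * between SHARED_RING_INIT and the decrease_reservation call. */
xc_domain_pause(xch, domain_id);

rc = xc_mem_paging_ring_setup(xch, domain_id, ring_page, &port,
                              &evtchn_port, &xce_handle, &back_ring);

xc_domain_unpause(xch, domain_id);
```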

>
> > +
> > +    return 0;
> > +}
> > +
> > +/*
> > + * Local variables:
> > + * mode: C
> > + * c-file-style: "BSD"
> > + * c-basic-offset: 4
> > + * indent-tabs-mode: nil
> > + * End:
> > + */
> > diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> > index 02129f7..77a0302 100644
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -47,6 +47,7 @@
> >  #include <xen/xsm/flask_op.h>
> >  #include <xen/tmem.h>
> >  #include <xen/kexec.h>
> > +#include <xen/mem_event.h>
> >
> >  #include "xentoollog.h"
> >
> > @@ -2039,6 +2040,20 @@ int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
> >  int xc_mem_paging_load(xc_interface *xch, domid_t domain_id,
> >                          unsigned long gfn, void *buffer);
> >
> > +/*
> > + * Mem paging ring and event channel setup routine.
> > + * Setup a shared ring and an event channel to communicate between
> > + * hypervisor and the tool performing mem paging operations.
> > + * The function will return zero on successful completion and will
> > + * return -1 on failure at any intermediate step setting up errno
> > + * properly.
> > + */
> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
> > +                                void *ring_page, int *port,
> > +                                uint32_t *evtchn_port,
> > +                                xc_evtchn **xceh,
> > +                                mem_event_back_ring_t *_back_ring);
> > +
> >  /**
> >   * Access tracking operations.
> >   * Supported only on Intel EPT 64 bit processors.
> > diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c
> b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > index ff29b47..37a4db7 100644
> > --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> > +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> > @@ -521,13 +521,16 @@ CAMLprim value stub_xc_evtchn_reset(value xch,
> value domid)
> >       CAMLreturn(Val_unit);
> >  }
> >
> > -
> > -#define RING_SIZE 32768
> > -static char ring[RING_SIZE];
> > +/*
> > + * The name is kept BUF_RING_SIZE, because the name RING_SIZE
> > + * collides with the xen shared ring definitions in io/ring.h
> > + */
> > +#define BUF_RING_SIZE 32768
> > +static char ring[BUF_RING_SIZE];
> >
> >  CAMLprim value stub_xc_readconsolering(value xch)
> >  {
> > -     unsigned int size = RING_SIZE - 1;
> > +     unsigned int size = BUF_RING_SIZE - 1;
> >       char *ring_ptr = ring;
> >       int retval;
> >
> > diff --git a/tools/xenpaging/Makefile b/tools/xenpaging/Makefile
> > index 548d9dd..ea370d3 100644
> > --- a/tools/xenpaging/Makefile
> > +++ b/tools/xenpaging/Makefile
> > @@ -1,8 +1,8 @@
> >  XEN_ROOT=$(CURDIR)/../..
> >  include $(XEN_ROOT)/tools/Rules.mk
> >
> > -CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> > -LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
> > +CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest)
> $(CFLAGS_libxenstore) $(PTHREAD_CFLAGS)
> > +LDLIBS += $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest)
> $(LDLIBS_libxenstore) $(PTHREAD_LIBS)
>
> You are not introducing any libxenstore calls into xenpaging.  This
> change is bogus.
>
The diff is not introducing libxenstore.
Andres

>
> ~Andrew
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 15:07               ` Andres Lagar Cavilla
@ 2014-06-13 15:55                 ` Andrew Cooper
  2014-06-13 16:09                   ` Andres Lagar Cavilla
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Cooper @ 2014-06-13 15:55 UTC (permalink / raw)
  To: Andres Lagar Cavilla
  Cc: dave.scott, Stefano Stabellini, Dushyant Behl, Ian Campbell, xen-devel



On 13/06/14 16:07, Andres Lagar Cavilla wrote:
> On Fri, Jun 13, 2014 at 7:51 AM, Andrew Cooper
> <andrew.cooper3@citrix.com <mailto:andrew.cooper3@citrix.com>> wrote:
>
>     On 13/06/14 15:16, Dushyant Behl wrote:
>     > This patch is part of the work done under the gsoc project -
>     > Lazy Restore Using Memory Paging.
>     >
>     > This patch moves the code to initialize mempaging from xenpaging
>     to libxc.
>     > The code refactored from xenpaging is the code which sets up
>     paging,
>     > initializes a shared ring and event channel to communicate with
>     xen. This
>     > communication is done between the hypervisor and the dom0 tool
>     which performs
>     > the activity of pager. The xenpaging code is changed to use the
>     newly created
>     > routines and is tested to properly build and work with this code.
>     >
>     > The refactoring is done so that any tool which will act as pager in
>     > lazy restore or use memory paging can use the same routines to
>     initialize mempaging.
>     > This refactoring will also allow any future (in-tree) tools to
>     use mempaging.
>     >
>     > The refactored code in xc_mem_paging_ring_setup is to be
>     compiled into
>     > libxenguest.
>     >
>     > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com
>     <mailto:myselfdushyantbehl@gmail.com>>
>     > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org
>     <mailto:andres@lagarcavilla.org>>
>     > Acked-by: David Scott <dave.scott@citrix.com
>     <mailto:dave.scott@citrix.com>>
>     > ---
>     >  tools/libxc/Makefile                |   2 +
>     >  tools/libxc/xc_mem_paging_setup.c   | 135
>     ++++++++++++++++++++++++++++++++++++
>     >  tools/libxc/xenctrl.h               |  15 ++++
>     >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>     >  tools/xenpaging/Makefile            |   4 +-
>     >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>     >  6 files changed, 172 insertions(+), 88 deletions(-)
>     >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>     >
>     > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
>     > index a74b19e..6cf14f0 100644
>     > --- a/tools/libxc/Makefile
>     > +++ b/tools/libxc/Makefile
>     > @@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     +=
>     xc_dom_armzimageloader.c
>     >  GUEST_SRCS-y                 += xc_dom_binloader.c
>     >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>     >
>     > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>     > +
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>     >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
>     > diff --git a/tools/libxc/xc_mem_paging_setup.c
>     b/tools/libxc/xc_mem_paging_setup.c
>     > new file mode 100644
>     > index 0000000..7b3ab38
>     > --- /dev/null
>     > +++ b/tools/libxc/xc_mem_paging_setup.c
>     > @@ -0,0 +1,135 @@
>     > +/*
>     > + * tools/libxc/xc_mem_paging_setup.c
>     > + *
>     > + * Routines to initialize memory paging. Create shared ring
>     > + * and event channels to communicate with the hypervisor.
>     > + *
>     > + * Copyright (c) 2014 Dushyant Behl
>     > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
>     > + *
>     > + * This library is free software; you can redistribute it and/or
>     > + * modify it under the terms of the GNU Lesser General Public
>     > + * License as published by the Free Software Foundation; either
>     > + * version 2.1 of the License, or (at your option) any later
>     version.
>     > + *
>     > + * This library is distributed in the hope that it will be useful,
>     > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>     > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See
>     the GNU
>     > + * Lesser General Public License for more details.
>     > + *
>     > + * You should have received a copy of the GNU Lesser General Public
>     > + * License along with this library; if not, write to the Free
>     Software
>     > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
>     MA  02110-1301 USA
>     > + */
>     > +
>     > +#include "xc_private.h"
>     > +#include <xen/event_channel.h>
>     > +#include <xen/mem_event.h>
>     > +
>     > +/*
>     > + * Mem paging ring and event channel setup routine.
>     > + * Setup a shared ring and an event channel to communicate between
>     > + * hypervisor and the tool performing mem paging operations.
>     > + * The function will return zero on successful completion and will
>     > + * return -1 on failure at any intermediate step setting up errno
>     > + * properly.
>     > + */
>     > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
>     > +                                void *ring_page, int *port,
>     > +                                uint32_t *evtchn_port,
>     > +                                xc_evtchn **xce_handle,
>     > +                                mem_event_back_ring_t *back_ring)
>     > +{
>     > +    int rc;
>     > +    uint64_t ring_pfn, mmap_pfn;
>     > +
>     > +    /* Map the ring page */
>     > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN,
>     &ring_pfn);
>     > +    mmap_pfn = ring_pfn;
>     > +    ring_page = xc_map_foreign_batch(xch, domain_id,
>     > +                                        PROT_READ | PROT_WRITE,
>     &mmap_pfn, 1);
>     > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>     > +    {
>     > +        /* Map failed, populate ring page */
>     > +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
>     > +                                              1, 0, 0, &ring_pfn);
>     > +        if ( rc != 0 )
>     > +        {
>     > +            PERROR("Failed to populate ring gfn\n");
>     > +            return -1;
>     > +        }
>     > +
>     > +        mmap_pfn = ring_pfn;
>     > +        ring_page = xc_map_foreign_batch(xch, domain_id,
>     > +                                        PROT_READ | PROT_WRITE,
>     &mmap_pfn, 1);
>     > +
>     > +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>     > +        {
>     > +            PERROR("Could not map the ring page\n");
>     > +            return -1;
>     > +        }
>     > +    }
>     > +
>     > +    /* Initialise Xen */
>     > +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
>     > +    if ( rc != 0 )
>     > +    {
>     > +        switch ( errno ) {
>     > +            case EBUSY:
>     > +                ERROR("mempaging is (or was) active on this
>     domain");
>     > +                break;
>     > +            case ENODEV:
>     > +                ERROR("mempaging requires Hardware Assisted
>     Paging");
>     > +                break;
>     > +            case EMLINK:
>     > +                ERROR("mempaging not supported while iommu
>     passthrough is enabled");
>     > +                break;
>     > +            case EXDEV:
>     > +                ERROR("mempaging not supported in a PoD guest");
>     > +                break;
>     > +            default:
>     > +                PERROR("mempaging: error initialising shared
>     page");
>     > +                break;
>     > +        }
>     > +        return -1;
>     > +    }
>     > +
>     > +    /* Open event channel */
>     > +    *xce_handle = xc_evtchn_open(NULL, 0);
>     > +    if ( *xce_handle == NULL )
>     > +    {
>     > +        PERROR("Failed to open event channel");
>     > +        return -1;
>     > +    }
>
>     This is inappropriate inside libxenguest.  The user of the library
>     possibly already has open evtchn handle.  While this is only
>     wasteful of
>     an fd under linux, I believe it will result in open() failing
>     under some
>     of the *BSDs
>
>     The correct way of doing this is to have the caller of
>     xc_mem_paging_ring_setup() provide their xce_handle, and require
>     them to
>     open one if they need to.
>
> I think this is heavy handed. I mean, the idea here is to relieve the
> caller from doing all the setup work.

calling xc_evtchn_open() is *not* something which should ever be
relieved from the caller.
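For illustration, a caller-supplied-handle variant of the prototype might look like the sketch below (hypothetical signature; only the handle parameter differs from the posted patch — nothing here is in the submitted code):

```c
/*
 * Sketch only: the caller opens the evtchn handle with xc_evtchn_open()
 * beforehand and remains responsible for closing it afterwards, so the
 * library never duplicates an fd the caller may already hold.
 */
int xc_mem_paging_ring_setup(xc_interface *xch, xc_evtchn *xce,
                             domid_t domain_id, void *ring_page,
                             int *port, uint32_t *evtchn_port,
                             mem_event_back_ring_t *back_ring);
```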

>
> In fact, you could argue that a follow-up patch should encapsulate all
> the cleanup.
> And then consumers of this particular module call setup, teardown, use
> the intermediate result, no concerns.

I realise I am not a maintainer, so ultimately it is Ian/Ian you have to
convince in this matter.


If I were doing this, there would be two patches.  The first touches
libxc alone and introduces this setup function and a teardown function,
both written correctly and bugfree.  The second patch modifies xenpaging
to use the new setup and teardown functions.

As it currently stands, this is a half-baked copy-and-paste of too much
setup code, moving into a library without its equivalent teardown, and a
promise of a fixup patch.
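The two-patch shape described above amounts to the standard setup/teardown pairing: every failure path inside setup unwinds through the one teardown routine, and teardown tolerates partially initialised state. A self-contained sketch of that pattern follows — the resources here are hypothetical stand-ins (a malloc'd page, dummy fd and port numbers), not the real libxc calls:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Stand-ins for the three resources the real xc_mem_paging_ring_setup()
 * acquires: the mapped ring page, the evtchn handle, and the bound port.
 * This models only the setup/teardown pairing; it makes no libxc calls.
 */
struct pager_state {
    void *ring_page;
    int evtchn_fd;
    int port;
};

/* Teardown is safe to call on partially initialised state. */
static void pager_teardown(struct pager_state *s)
{
    s->port = -1;            /* would unbind the event channel port */
    if (s->evtchn_fd >= 0)
        s->evtchn_fd = -1;   /* would close the evtchn handle */
    free(s->ring_page);      /* would unmap the ring page */
    s->ring_page = NULL;
}

/* Every failure path unwinds through the same teardown. */
static int pager_setup(struct pager_state *s, int fail_step)
{
    s->ring_page = NULL;
    s->evtchn_fd = -1;
    s->port = -1;

    s->ring_page = malloc(4096);     /* step 1: "map" the ring page */
    if (s->ring_page == NULL || fail_step == 1)
        goto err;

    s->evtchn_fd = 3;                /* step 2: "open" the evtchn handle */
    if (fail_step == 2)
        goto err;

    s->port = 7;                     /* step 3: "bind" the port */
    if (fail_step == 3)
        goto err;

    return 0;

err:
    pager_teardown(s);
    return -1;
}
```

Mapped onto the patch, teardown would unbind the port, close the evtchn handle, and unmap the ring page, so xenpaging could call the pair instead of open-coding either half.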

>
> LGTM, to be honest. 
>
>
>     > +
>     > +    /* Bind event notification */
>     > +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id,
>     *evtchn_port);
>     > +    if ( rc < 0 )
>     > +    {
>     > +        PERROR("Failed to bind event channel");
>     > +        return -1;
>     > +    }
>     > +    *port = rc;
>     > +
>     > +    /* Initialise ring */
>     > +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
>     > +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page,
>     PAGE_SIZE);
>     > +
>     > +    /* Now that the ring is set, remove it from the guest's
>     physmap */
>     > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id,
>     1, 0, &ring_pfn) )
>     > +    {
>     > +        PERROR("Failed to remove ring from guest physmap");
>     > +        return -1;
>     > +    }
>
>     There is a race condition here where the guest can play with the
>     post-initialised ring state.
>
> Well-known, not the place to fix it here.

I don't know how exactly the security policy applies here.  My
understanding is that the hypervisor side paging code has been declared
stable and therefore subject to XSAs.

The libxc code is, as far as I can tell, just thin wrappers around the
hypercalls.  At the moment, the xenpaging utility is just example code,
so is probably fine from an XSA point of view.

However, moving code from xenpaging into libxc changes this split.


Speaking now with my XenServer security hat on,

Irrespective of whether it would technically be declared an XSA or not,
it is stupid to knowingly put a latent security bug in a library.  The
best case is that the development tree has a security bug in it for a
while.  The worst case is that you forget to fix it before the next Xen
release and everyone in the world picks up the security bug as part of
their stable code.

So no - this is absolutely the place to fix it.

~Andrew




* Re: [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc.
  2014-06-13 15:55                 ` Andrew Cooper
@ 2014-06-13 16:09                   ` Andres Lagar Cavilla
  0 siblings, 0 replies; 23+ messages in thread
From: Andres Lagar Cavilla @ 2014-06-13 16:09 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: dave.scott, Stefano Stabellini, Dushyant Behl, Ian Campbell, xen-devel



On Fri, Jun 13, 2014 at 8:55 AM, Andrew Cooper <andrew.cooper3@citrix.com>
wrote:

>  On 13/06/14 16:07, Andres Lagar Cavilla wrote:
>
>  On Fri, Jun 13, 2014 at 7:51 AM, Andrew Cooper <andrew.cooper3@citrix.com
> > wrote:
>
>>  On 13/06/14 15:16, Dushyant Behl wrote:
>> > This patch is part of the work done under the gsoc project -
>> > Lazy Restore Using Memory Paging.
>> >
>> > This patch moves the code to initialize mempaging from xenpaging to
>> libxc.
>> > The code refactored from xenpaging is the code which sets up paging,
>> > initializes a shared ring and event channel to communicate with xen.
>> This
>> > communication is done between the hypervisor and the dom0 tool which
>> performs
>> > the activity of pager. The xenpaging code is changed to use the newly
>> created
>> > routines and is tested to properly build and work with this code.
>> >
>> > The refactoring is done so that any tool which will act as pager in
>> > lazy restore or use memory paging can use the same routines to initialize
>> mempaging.
>> > This refactoring will also allow any future (in-tree) tools to use
>> mempaging.
>> >
>> > The refactored code in xc_mem_paging_ring_setup is to be compiled into
>> > libxenguest.
>> >
>> > Signed-off-by: Dushyant Behl <myselfdushyantbehl@gmail.com>
>> > Reviewed-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
>> > Acked-by: David Scott <dave.scott@citrix.com>
>> > ---
>> >  tools/libxc/Makefile                |   2 +
>> >  tools/libxc/xc_mem_paging_setup.c   | 135
>> ++++++++++++++++++++++++++++++++++++
>> >  tools/libxc/xenctrl.h               |  15 ++++
>> >  tools/ocaml/libs/xc/xenctrl_stubs.c |  11 +--
>> >  tools/xenpaging/Makefile            |   4 +-
>> >  tools/xenpaging/xenpaging.c         |  93 +++----------------------
>> >  6 files changed, 172 insertions(+), 88 deletions(-)
>> >  create mode 100644 tools/libxc/xc_mem_paging_setup.c
>> >
>> > diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
>> > index a74b19e..6cf14f0 100644
>> > --- a/tools/libxc/Makefile
>> > +++ b/tools/libxc/Makefile
>> > @@ -69,6 +69,8 @@ GUEST_SRCS-$(CONFIG_ARM)     +=
>> xc_dom_armzimageloader.c
>> >  GUEST_SRCS-y                 += xc_dom_binloader.c
>> >  GUEST_SRCS-y                 += xc_dom_compat_linux.c
>> >
>> > +GUEST_SRCS-y                 += xc_mem_paging_setup.c
>> > +
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
>> >  GUEST_SRCS-$(CONFIG_X86)     += xc_hvm_build_x86.c
>> > diff --git a/tools/libxc/xc_mem_paging_setup.c
>> b/tools/libxc/xc_mem_paging_setup.c
>> > new file mode 100644
>> > index 0000000..7b3ab38
>> > --- /dev/null
>> > +++ b/tools/libxc/xc_mem_paging_setup.c
>> > @@ -0,0 +1,135 @@
>> > +/*
>> > + * tools/libxc/xc_mem_paging_setup.c
>> > + *
>> > + * Routines to initialize memory paging. Create shared ring
>> > + * and event channels to communicate with the hypervisor.
>> > + *
>> > + * Copyright (c) 2014 Dushyant Behl
>> > + * Copyright (c) 2009 by Citrix Systems, Inc. (Patrick Colp)
>> > + *
>> > + * This library is free software; you can redistribute it and/or
>> > + * modify it under the terms of the GNU Lesser General Public
>> > + * License as published by the Free Software Foundation; either
>> > + * version 2.1 of the License, or (at your option) any later version.
>> > + *
>> > + * This library is distributed in the hope that it will be useful,
>> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> > + * Lesser General Public License for more details.
>> > + *
>> > + * You should have received a copy of the GNU Lesser General Public
>> > + * License along with this library; if not, write to the Free Software
>> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
>>  02110-1301 USA
>> > + */
>> > +
>> > +#include "xc_private.h"
>> > +#include <xen/event_channel.h>
>> > +#include <xen/mem_event.h>
>> > +
>> > +/*
>> > + * Mem paging ring and event channel setup routine.
>> > + * Setup a shared ring and an event channel to communicate between
>> > + * hypervisor and the tool performing mem paging operations.
>> > + * The function will return zero on successful completion and will
>> > + * return -1 on failure at any intermediate step setting up errno
>> > + * properly.
>> > + */
>> > +int xc_mem_paging_ring_setup(xc_interface *xch, domid_t domain_id,
>> > +                                void *ring_page, int *port,
>> > +                                uint32_t *evtchn_port,
>> > +                                xc_evtchn **xce_handle,
>> > +                                mem_event_back_ring_t *back_ring)
>> > +{
>> > +    int rc;
>> > +    uint64_t ring_pfn, mmap_pfn;
>> > +
>> > +    /* Map the ring page */
>> > +    xc_get_hvm_param(xch, domain_id, HVM_PARAM_PAGING_RING_PFN,
>> &ring_pfn);
>> > +    mmap_pfn = ring_pfn;
>> > +    ring_page = xc_map_foreign_batch(xch, domain_id,
>> > +                                        PROT_READ | PROT_WRITE,
>> &mmap_pfn, 1);
>> > +    if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>> > +    {
>> > +        /* Map failed, populate ring page */
>> > +        rc = xc_domain_populate_physmap_exact(xch, domain_id,
>> > +                                              1, 0, 0, &ring_pfn);
>> > +        if ( rc != 0 )
>> > +        {
>> > +            PERROR("Failed to populate ring gfn\n");
>> > +            return -1;
>> > +        }
>> > +
>> > +        mmap_pfn = ring_pfn;
>> > +        ring_page = xc_map_foreign_batch(xch, domain_id,
>> > +                                        PROT_READ | PROT_WRITE,
>> &mmap_pfn, 1);
>> > +
>> > +        if ( mmap_pfn & XEN_DOMCTL_PFINFO_XTAB )
>> > +        {
>> > +            PERROR("Could not map the ring page\n");
>> > +            return -1;
>> > +        }
>> > +    }
>> > +
>> > +    /* Initialise Xen */
>> > +    rc = xc_mem_paging_enable(xch, domain_id, evtchn_port);
>> > +    if ( rc != 0 )
>> > +    {
>> > +        switch ( errno ) {
>> > +            case EBUSY:
>> > +                ERROR("mempaging is (or was) active on this domain");
>> > +                break;
>> > +            case ENODEV:
>> > +                ERROR("mempaging requires Hardware Assisted Paging");
>> > +                break;
>> > +            case EMLINK:
>> > +                ERROR("mempaging not supported while iommu passthrough
>> is enabled");
>> > +                break;
>> > +            case EXDEV:
>> > +                ERROR("mempaging not supported in a PoD guest");
>> > +                break;
>> > +            default:
>> > +                PERROR("mempaging: error initialising shared page");
>> > +                break;
>> > +        }
>> > +        return -1;
>> > +    }
>> > +
>> > +    /* Open event channel */
>> > +    *xce_handle = xc_evtchn_open(NULL, 0);
>> > +    if ( *xce_handle == NULL )
>> > +    {
>> > +        PERROR("Failed to open event channel");
>> > +        return -1;
>> > +    }
>>
>>  This is inappropriate inside libxenguest.  The user of the library
>> possibly already has open evtchn handle.  While this is only wasteful of
>> an fd under linux, I believe it will result in open() failing under some
>> of the *BSDs
>>
>> The correct way of doing this is to have the caller of
>> xc_mem_paging_ring_setup() provide their xce_handle, and require them to
>> open one if they need to.
>>
> I think this is heavy handed. I mean, the idea here is to relieve the
> caller from doing all the setup work.
>
>
> calling xc_evtchn_open() is *not* something which should ever be relieved
> from the caller.
>
>
>
>  In fact, you could argue that a follow-up patch should encapsulate all
> the cleanup.
> And then consumers of this particular module call setup, teardown, use the
> intermediate result, no concerns.
>
>
> I realise I am not a maintainer, so ultimately it is Ian/Ian you have to
> convince in this matter.
>
>
> If I were doing this, there would be two patches.  The first touches libxc
> alone and introduces this setup function and a teardown function, both
> written correctly and bugfree.  The second patch modifies xenpaging to use
> the new setup and teardown functions.
>
> As it currently stands, this is a half-baked copy-and-paste of too much
> setup code, moving into a library without its equivalent teardown, and a
> promise of a fixup patch.
>
OK, so the next submission should have both setup and teardown. I'm guessing
the Ians will have the final word on the evtchn thing?


>
>
>
>  LGTM, to be honest.
>
>>
>> > +
>> > +    /* Bind event notification */
>> > +    rc = xc_evtchn_bind_interdomain(*xce_handle, domain_id,
>> *evtchn_port);
>> > +    if ( rc < 0 )
>> > +    {
>> > +        PERROR("Failed to bind event channel");
>> > +        return -1;
>> > +    }
>> > +    *port = rc;
>> > +
>> > +    /* Initialise ring */
>> > +    SHARED_RING_INIT((mem_event_sring_t *)ring_page);
>> > +    BACK_RING_INIT(back_ring, (mem_event_sring_t *)ring_page,
>> PAGE_SIZE);
>> > +
>> > +    /* Now that the ring is set, remove it from the guest's physmap */
>> > +    if ( xc_domain_decrease_reservation_exact(xch, domain_id, 1, 0,
>> &ring_pfn) )
>> > +    {
>> > +        PERROR("Failed to remove ring from guest physmap");
>> > +        return -1;
>> > +    }
>>
>>  There is a race condition here where the guest can play with the
>> post-initialised ring state.
>>
> Well-known, not the place to fix it here.
>
>
> I don't know how exactly the security policy applies here.  My
> understanding is that the hypervisor side paging code has been declared
> stable and therefore subject to XSAs.
>
> The libxc code is, as far as I can tell, just thin wrappers around the
> hypercalls.  At the moment, the xenpaging utility is just example code, so
> is probably fine from an XSA point of view.
>
> However, moving code from xenpaging into libxc changes this split.
>
>
> Speaking now with my XenServer security hat on,
>
> Irrespective of whether it would technically be declared an XSA or not, it
> is stupid to knowingly put a latent security bug in a library.  The best
> case is that the development tree has a security bug in it for a while.
> The worst case is that you forget to fix it before the next Xen release and
> everyone in the world picks up the security bug as part of their stable
> code.
>
You do realize all this does is put a ring into the guest addressable
memory. Exactly what PV drivers do, and Xen on behalf of qemu does.

So ... ?

Andres

>
> So no - this is absolutely the place to fix it.
>
> ~Andrew
>




end of thread, other threads:[~2014-06-13 16:09 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-12  9:23 [Xen-Devel] [PATCH] [GSOC14] refactored mempaging code from xenpaging to libxc Dushyant Behl
2014-06-12  9:42 ` Dave Scott
2014-06-12 13:22 ` Andrew Cooper
2014-06-12 13:27   ` Ian Campbell
2014-06-12 13:29     ` Andrew Cooper
2014-06-12 13:49       ` Andres Lagar Cavilla
2014-06-12 13:55         ` Ian Campbell
2014-06-12 14:01           ` Andres Lagar Cavilla
2014-06-12 13:59   ` Andres Lagar Cavilla
2014-06-12 14:03     ` Dushyant Behl
2014-06-12 14:11     ` Andrew Cooper
2014-06-12 14:21       ` Ian Campbell
2014-06-12 14:46         ` Andrew Cooper
2014-06-13 14:16           ` Dushyant Behl
2014-06-13 14:23             ` Dushyant Behl
2014-06-13 14:37               ` Andrew Cooper
2014-06-13 15:03               ` Andres Lagar Cavilla
2014-06-13 14:51             ` Andrew Cooper
2014-06-13 15:00               ` Dushyant Behl
2014-06-13 15:06                 ` Andrew Cooper
2014-06-13 15:07               ` Andres Lagar Cavilla
2014-06-13 15:55                 ` Andrew Cooper
2014-06-13 16:09                   ` Andres Lagar Cavilla
